The Kinect for XBox One Workshop
Face Tracking
A helpful reference is How to use Kinect V2 Face Basics.
That article states "Kinect provides two ways to access facial characteristics: Face Basics API and HD Face API. The first one lets us access the most common features, such as the position of the eyes, nose, and mouth, as well as the facial expressions. HD Face, on the other hand, lets us access a richer and more complex collection of facial points. We’ll examine Face Basics in this article and HD Face on the next blog post".
The "next blog post" mentioned above is How to use Kinect HD Face.
The Face Basics article presents an application that tracks the face of a single user. Extensions to tracking faces of multiple users are mentioned in the comments.
The Kinect for XBox One sensor offers two face tracking alternatives. The SDK 2.0 documentation refers to these two alternatives as "Face Tracking" and "High Definition Face Tracking".
The Face Tracking documentation provides an overview of the programming model for Kinect for XBox One face tracking. The following topics are presented:
- The FaceFrame object
- FaceFrameResult object
- FaceFrame data
- Bounding box data
- Point data
- FaceRotationQuaternion
- Face properties
- Database files
- Performance considerations
The Face Tracking documentation contains no sample code, but a sample is available in the SDK program samples.
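The complete program later in this section shows the full pattern. As a minimal orientation sketch (the variable and handler names here, such as sensor, faceSource, and OnFaceFrameArrived, are illustrative and not taken from the SDK sample), the objects listed above are wired together roughly like this:

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// One FaceFrameSource/FaceFrameReader pair is created per face to be tracked,
// requesting only the face frame features of interest
FaceFrameSource faceSource = new FaceFrameSource(sensor, 0,
    FaceFrameFeatures.BoundingBoxInColorSpace |
    FaceFrameFeatures.PointsInColorSpace |
    FaceFrameFeatures.RotationOrientation);
FaceFrameReader faceReader = faceSource.OpenReader();
faceReader.FrameArrived += OnFaceFrameArrived;

// Once a body is tracked, the face source is attached to it:
// faceSource.TrackingId = body.TrackingId;

private void OnFaceFrameArrived(object sender, FaceFrameArrivedEventArgs e)
{
    using (FaceFrame faceFrame = e.FrameReference.AcquireFrame())
    {
        if (faceFrame != null)
        {
            // The FaceFrameResult carries the bounding box, point,
            // rotation, and face property data for this face
            FaceFrameResult result = faceFrame.FaceFrameResult;
        }
    }
}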
The High Definition Face Tracking documentation describes the programming model that generates a 1000+ point model of a tracked face.
The documentation states that "It is not recommended that you render the tracking model onscreen, except for debugging".
The High Definition Face Tracking documentation also contains no sample code, but a sample is available in the SDK program samples.
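The HD Face API is not used in the program below, but for reference a minimal sketch of its use looks roughly like the following (this assumes a KinectSensor variable named sensor, as in the sketch above, and that the source's TrackingId has been set to a tracked body's TrackingId):

HighDefinitionFaceFrameSource hdFaceSource = new HighDefinitionFaceFrameSource(sensor);
HighDefinitionFaceFrameReader hdFaceReader = hdFaceSource.OpenReader();
FaceAlignment faceAlignment = new FaceAlignment();

hdFaceReader.FrameArrived += (s, e) =>
{
    using (HighDefinitionFaceFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null && frame.IsFaceTracked)
        {
            // Update the face alignment with the latest tracking data
            frame.GetAndRefreshFaceAlignmentResult(faceAlignment);

            // Compute the 1000+ model vertices (CameraSpacePoint values)
            // for the current alignment
            var vertices = frame.FaceModel.CalculateVerticesForAlignment(faceAlignment);
        }
    }
};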
At this point in our work we are interested in the data available using the Face Tracking API, in particular the bounding box, face points, and face rotation data.
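Given a valid FaceFrameResult (obtained as in the sketch above; the variable result is illustrative), those three items are read as follows:

// Face bounding box in color image (1920 x 1080) coordinates
RectI box = result.FaceBoundingBoxInColorSpace;

// The five face points (eyes, nose, mouth corners), also in color space
var points = result.FacePointsInColorSpace;
PointF nose = points[FacePointType.Nose];

// Face orientation as a quaternion; the sample below converts it
// to pitch, yaw, and roll in degrees
Vector4 rotation = result.FaceRotationQuaternion;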
Here is a sample program that displays the basic face tracking data and skeletons for up to 6 users, drawn over the camera color image. The sample program's face tracking code is derived from the SDK sample FaceBasics-WPF. The completed Visual Studio solution file is here: BasicFaceTracking.zip.
The file MainWindow.xaml contains:
<Window x:Class="BasicFaceTracking.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:BasicFaceTracking"
        mc:Ignorable="d"
        Title="BasicFaceTracking" Height="350" Width="525"
        Loaded="Window_Loaded" Closing="Window_Closing">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="9*"/>
            <RowDefinition Height="1*"/>
        </Grid.RowDefinitions>
        <Grid Grid.Row="0" Background="White">
            <Viewbox>
                <Image Name="camera" Width="1920" Height="1080"/>
            </Viewbox>
            <Viewbox>
                <Canvas Name="canvas" Width="1920" Height="1080"/>
            </Viewbox>
            <Viewbox>
                <Image Name="faceImage" Width="1920" Height="1080"/>
            </Viewbox>
        </Grid>
        <Grid Grid.Row="1" Background="White">
            <TextBlock Text="Displaying Color, Skeleton, and Face Data" HorizontalAlignment="Center" VerticalAlignment="Center"/>
        </Grid>
    </Grid>
</Window>
The file MainWindow.xaml.cs contains:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; using Microsoft.Kinect; using Microsoft.Kinect.Face; using System.Globalization; namespace BasicFaceTracking { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { KinectSensor _sensor; MultiSourceFrameReader _reader; IList<Body> _bodies; // Number of bodies tracked private int bodyCount; //------- Basic Face Tracking members --------------------- // Thickness of face bounding box and face points private const double DrawFaceShapeThickness = 8; // Font size of face property text private const double DrawTextFontSize = 30; // Radius of face point circle private const double FacePointRadius = 1.0; // Text layout offset in X axis private const float TextLayoutOffsetX = -0.1f; // Text layout offset in Y axis private const float TextLayoutOffsetY = -0.15f; // Face rotation display angle increment in degrees private const double FaceRotationIncrementInDegrees = 5.0; // Formatted text to indicate that there are no bodies/faces tracked in the FOV private FormattedText textFaceNotTracked = new FormattedText( "No bodies or faces are tracked ...", CultureInfo.GetCultureInfo("en-us"), FlowDirection.LeftToRight, new Typeface("Georgia"), DrawTextFontSize, Brushes.White); // Text layout for the no face tracked message private Point textLayoutFaceNotTracked = new Point(10.0, 10.0); // Drawing group for body rendering output private DrawingGroup drawingGroup; // Face frame sources private FaceFrameSource[] faceFrameSources = null; // Face frame readers private FaceFrameReader[] faceFrameReaders = null; // Storage for face frame results private FaceFrameResult[] faceFrameResults = null; // Width of display (color space) private int displayWidth; // Height of display (color space) private int displayHeight; // Display rectangle private Rect displayRect; // List of brushes for each face tracked private List<Brush> faceBrush; // Current status text to display //private string statusText = null; //private int colorIndex; private Brush bodyBrush; public MainWindow() { InitializeComponent(); } private void Window_Loaded(object sender, RoutedEventArgs e) { // Obtain the sensor and start it up _sensor = KinectSensor.GetDefault(); // Different than article if (_sensor != null) { _sensor.Open(); } // Specify the required streams _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color | FrameSourceTypes.Body); // Add an event handler _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived; // Get the color frame details FrameDescription frameDescription = _sensor.ColorFrameSource.FrameDescription; // Set the display specifics this.displayWidth = frameDescription.Width; this.displayHeight = frameDescription.Height; this.displayRect = new Rect(0.0, 0.0, this.displayWidth, this.displayHeight); // specify the required face frame results FaceFrameFeatures faceFrameFeatures = FaceFrameFeatures.BoundingBoxInColorSpace | FaceFrameFeatures.PointsInColorSpace | FaceFrameFeatures.RotationOrientation | FaceFrameFeatures.FaceEngagement | FaceFrameFeatures.Glasses | FaceFrameFeatures.Happy | FaceFrameFeatures.LeftEyeClosed | FaceFrameFeatures.RightEyeClosed | 
FaceFrameFeatures.LookingAway | FaceFrameFeatures.MouthMoved | FaceFrameFeatures.MouthOpen; // create a face frame source + reader to track each face in the FOV this.bodyCount = _sensor.BodyFrameSource.BodyCount; this.faceFrameSources = new FaceFrameSource[this.bodyCount]; this.faceFrameReaders = new FaceFrameReader[this.bodyCount]; for (int i = 0; i < this.bodyCount; i++) { // create the face frame source with the required face frame features and an initial tracking Id of 0 this.faceFrameSources[i] = new FaceFrameSource(_sensor, 0, faceFrameFeatures); // open the corresponding reader this.faceFrameReaders[i] = this.faceFrameSources[i].OpenReader(); } // allocate storage to store face frame results for each face in the FOV this.faceFrameResults = new FaceFrameResult[this.bodyCount]; // populate face result colors - one for each face index this.faceBrush = new List<Brush>() { Brushes.White, Brushes.Orange, Brushes.Green, Brushes.Red, Brushes.LightBlue, Brushes.Yellow }; for (int i = 0; i < this.bodyCount; i++) { if (this.faceFrameReaders[i] != null) { // wire handler for face frame arrival this.faceFrameReaders[i].FrameArrived += this.Reader_FaceFrameArrived; } } drawingGroup = new DrawingGroup(); /* if (this.bodyFrameReader != null) { // wire handler for body frame arrival this.bodyFrameReader.FrameArrived += this.Reader_BodyFrameArrived; } */ } private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e) { for (int i = 0; i < this.bodyCount; i++) { if (this.faceFrameReaders[i] != null) { // FaceFrameReader is IDisposable this.faceFrameReaders[i].Dispose(); this.faceFrameReaders[i] = null; } if (this.faceFrameSources[i] != null) { // FaceFrameSource is IDisposable this.faceFrameSources[i].Dispose(); this.faceFrameSources[i] = null; } } /* if (this.bodyFrameReader != null) { // BodyFrameReader is IDisposable this.bodyFrameReader.Dispose(); this.bodyFrameReader = null; } */ if (_sensor != null) { _sensor.Close(); _sensor = null; } } // Handles the face frame data arriving from the sensor // <param name="sender">object sending the event</param> // <param name="e">event arguments</param> private void Reader_FaceFrameArrived(object sender, FaceFrameArrivedEventArgs e) { using (FaceFrame faceFrame = e.FrameReference.AcquireFrame()) { if (faceFrame != null) { // get the index of the face source from the face source array int index = this.GetFaceSourceIndex(faceFrame.FaceFrameSource); // check if this face frame has valid face frame results if (this.ValidateFaceBoxAndPoints(faceFrame.FaceFrameResult)) { // store this face frame result to draw later this.faceFrameResults[index] = faceFrame.FaceFrameResult; } else { // indicates that the latest face frame result from this reader is invalid this.faceFrameResults[index] = null; } } } } // Validates face bounding box and face points to be within screen space // <param name="faceResult">the face frame result containing face box and points</param> // <returns>success or failure</returns> private bool ValidateFaceBoxAndPoints(FaceFrameResult faceResult) { bool isFaceValid = faceResult != null; if (isFaceValid) { var faceBox = faceResult.FaceBoundingBoxInColorSpace; if (faceBox != null) { // check if we have a valid rectangle within the bounds of the screen space isFaceValid = (faceBox.Right - faceBox.Left) > 0 && (faceBox.Bottom - faceBox.Top) > 0 && faceBox.Right <= this.displayWidth && faceBox.Bottom <= this.displayHeight; if (isFaceValid) { var facePoints = faceResult.FacePointsInColorSpace; if (facePoints != null) { foreach 
(PointF pointF in facePoints.Values) { // check if we have a valid face point within the bounds of the screen space bool isFacePointValid = pointF.X > 0.0f && pointF.Y > 0.0f && pointF.X < this.displayWidth && pointF.Y < this.displayHeight; if (!isFacePointValid) { isFaceValid = false; break; } } } } } } return isFaceValid; } // Returns the index of the face frame source // <param name="faceFrameSource">the face frame source</param> // <returns>the index of the face source in the face source array</returns> private int GetFaceSourceIndex(FaceFrameSource faceFrameSource) { int index = -1; for (int i = 0; i < this.bodyCount; i++) { if (this.faceFrameSources[i] == faceFrameSource) { index = i; break; } } return index; } void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e) { // Get a reference to the multi-frame var reference = e.FrameReference.AcquireFrame(); // Open color frame using (var frame = reference.ColorFrameReference.AcquireFrame()) { if (frame != null) { // Do something with the color frame... camera.Source = ToBitmap(frame); } } // Open body frame using (var frame = reference.BodyFrameReference.AcquireFrame()) { if (frame != null) { _bodies = new Body[frame.BodyFrameSource.BodyCount]; frame.GetAndRefreshBodyData(_bodies); //------DRAW THE BASIC FACE HERE ------ using (DrawingContext dc = this.drawingGroup.Open()) { // draw the dark background dc.DrawRectangle(Brushes.Transparent, null, this.displayRect); bool drawFaceResult = false; // iterate through each face source for (int i = 0; i < this.bodyCount; i++) { // check if a valid face is tracked in this face source if (this.faceFrameSources[i].IsTrackingIdValid) { // check if we have valid face frame results if (this.faceFrameResults[i] != null) { // draw face frame results this.DrawFaceFrameResults(i, this.faceFrameResults[i], dc); if (!drawFaceResult) { drawFaceResult = true; } } } else { // check if the corresponding body is tracked if (_bodies[i].IsTracked) { // update the face frame source to track this body this.faceFrameSources[i].TrackingId = _bodies[i].TrackingId; } } } if (!drawFaceResult) { // if no faces were drawn then this indicates one of the following: // a body was not tracked // a body was tracked but the corresponding face was not tracked // a body and the corresponding face was tracked though the face box or the face points were not valid dc.DrawText( this.textFaceNotTracked, this.textLayoutFaceNotTracked); } this.drawingGroup.ClipGeometry = new RectangleGeometry(this.displayRect); faceImage.Source = new DrawingImage(drawingGroup); } //------DRAW THE BASIC FACE HERE ------ //------ DRAW THE SKELETONS HERE ------ canvas.Children.Clear(); for (int k = 0; k < this.bodyCount; k++) { var body = _bodies[k]; bodyBrush = faceBrush[k]; if (body.IsTracked) { DrawSkeleton(body); } } //------ DRAW THE SKELETONS HERE ------ } } } // Draws face frame results // <param name="faceIndex">the index of the face frame corresponding to a specific body in the FOV</param> // <param name="faceResult">container of all face frame results</param> // <param name="drawingContext">drawing context to render to</param> private void DrawFaceFrameResults(int faceIndex, FaceFrameResult faceResult, DrawingContext drawingContext) { // choose the brush based on the face index Brush drawingBrush = this.faceBrush[0]; if (faceIndex < this.bodyCount) { drawingBrush = this.faceBrush[faceIndex]; } Pen drawingPen = new Pen(drawingBrush, DrawFaceShapeThickness); // draw the face bounding box var faceBoxSource = 
faceResult.FaceBoundingBoxInColorSpace; Rect faceBox = new Rect(faceBoxSource.Left, faceBoxSource.Top, faceBoxSource.Right - faceBoxSource.Left, faceBoxSource.Bottom - faceBoxSource.Top); drawingContext.DrawRectangle(null, drawingPen, faceBox); if (faceResult.FacePointsInColorSpace != null) { // draw each face point foreach (PointF pointF in faceResult.FacePointsInColorSpace.Values) { drawingContext.DrawEllipse(null, drawingPen, new Point(pointF.X, pointF.Y), FacePointRadius, FacePointRadius); } } string faceText = string.Empty; // extract each face property information and store it in faceText if (faceResult.FaceProperties != null) { foreach (var item in faceResult.FaceProperties) { faceText += item.Key.ToString() + " : "; // consider a "maybe" as a "no" to restrict // the detection result refresh rate if (item.Value == DetectionResult.Maybe) { faceText += DetectionResult.No + "\n"; } else { faceText += item.Value.ToString() + "\n"; } } } // extract face rotation in degrees as Euler angles if (faceResult.FaceRotationQuaternion != null) { int pitch, yaw, roll; ExtractFaceRotationInDegrees(faceResult.FaceRotationQuaternion, out pitch, out yaw, out roll); faceText += "FaceYaw : " + yaw + "\n" + "FacePitch : " + pitch + "\n" + "FaceRoll : " + roll + "\n"; } // render the face property and face rotation information Point faceTextLayout; if (this.GetFaceTextPositionInColorSpace(faceIndex, out faceTextLayout)) { drawingContext.DrawText( new FormattedText( faceText, CultureInfo.GetCultureInfo("en-us"), FlowDirection.LeftToRight, new Typeface("Georgia"), DrawTextFontSize, drawingBrush), faceTextLayout); } } // Computes the face result text position by adding an offset to the corresponding // body's head joint in camera space and then by projecting it to screen space // <param name="faceIndex">the index of the face frame corresponding to a specific body in the FOV</param> // <param name="faceTextLayout">the text layout position in screen space</param> // <returns>success or failure</returns> private bool GetFaceTextPositionInColorSpace(int faceIndex, out Point faceTextLayout) { faceTextLayout = new Point(); bool isLayoutValid = false; Body body = _bodies[faceIndex]; if (body.IsTracked) { var headJoint = body.Joints[JointType.Head].Position; CameraSpacePoint textPoint = new CameraSpacePoint() { X = headJoint.X + TextLayoutOffsetX, Y = headJoint.Y + TextLayoutOffsetY, Z = headJoint.Z }; ColorSpacePoint textPointInColor = _sensor.CoordinateMapper.MapCameraPointToColorSpace(textPoint); faceTextLayout.X = textPointInColor.X; faceTextLayout.Y = textPointInColor.Y; isLayoutValid = true; } return isLayoutValid; } public void DrawSkeleton(Body body) { if (body == null) return; DrawClippedEdges(body); // Draw the joints foreach (Joint joint in body.Joints.Values) { DrawJoint(joint); } // Draw the bones DrawLine(body.Joints[JointType.Head], body.Joints[JointType.Neck]); DrawLine(body.Joints[JointType.Neck], body.Joints[JointType.SpineShoulder]); DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.ShoulderLeft]); DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.ShoulderRight]); DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.SpineMid]); DrawLine(body.Joints[JointType.ShoulderLeft], body.Joints[JointType.ElbowLeft]); DrawLine(body.Joints[JointType.ShoulderRight], body.Joints[JointType.ElbowRight]); DrawLine(body.Joints[JointType.ElbowLeft], body.Joints[JointType.WristLeft]); DrawLine(body.Joints[JointType.ElbowRight], body.Joints[JointType.WristRight]); 
DrawLine(body.Joints[JointType.WristLeft], body.Joints[JointType.HandLeft]); DrawLine(body.Joints[JointType.WristRight], body.Joints[JointType.HandRight]); DrawLine(body.Joints[JointType.HandLeft], body.Joints[JointType.HandTipLeft]); DrawLine(body.Joints[JointType.HandRight], body.Joints[JointType.HandTipRight]); DrawLine(body.Joints[JointType.HandTipLeft], body.Joints[JointType.ThumbLeft]); DrawLine(body.Joints[JointType.HandTipRight], body.Joints[JointType.ThumbRight]); DrawLine(body.Joints[JointType.SpineMid], body.Joints[JointType.SpineBase]); DrawLine(body.Joints[JointType.SpineBase], body.Joints[JointType.HipLeft]); DrawLine(body.Joints[JointType.SpineBase], body.Joints[JointType.HipRight]); DrawLine(body.Joints[JointType.HipLeft], body.Joints[JointType.KneeLeft]); DrawLine(body.Joints[JointType.HipRight], body.Joints[JointType.KneeRight]); DrawLine(body.Joints[JointType.KneeLeft], body.Joints[JointType.AnkleLeft]); DrawLine(body.Joints[JointType.KneeRight], body.Joints[JointType.AnkleRight]); DrawLine(body.Joints[JointType.AnkleLeft], body.Joints[JointType.FootLeft]); DrawLine(body.Joints[JointType.AnkleRight], body.Joints[JointType.FootRight]); } public void DrawJoint(Joint joint) { if (joint.TrackingState == TrackingState.Tracked) { // 3D space point CameraSpacePoint jointPosition = joint.Position; // 2D space point Point point = new Point(); ColorSpacePoint colorPoint = _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointPosition); // Handle inferred points point.X = float.IsInfinity(colorPoint.X) ? 0 : colorPoint.X; point.Y = float.IsInfinity(colorPoint.Y) ? 0 : colorPoint.Y; // Draw an ellipse for that joint Ellipse ellipse = new Ellipse { Fill = bodyBrush, Width = 30, Height = 30 }; Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2); Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2); canvas.Children.Add(ellipse); } } public void DrawLine(Joint first, Joint second) { if (first.TrackingState == TrackingState.NotTracked || second.TrackingState == TrackingState.NotTracked) return; // Joint data is in Camera XYZ coordinates // 3D space point CameraSpacePoint jointFirstPosition = first.Position; CameraSpacePoint jointSecondPosition = second.Position; // 2D space points in XY coordinates Point pointFirst = new Point(); Point pointSecond = new Point(); // Apply COORDINATE MAPPING - Here mapping to ColorSpace ColorSpacePoint colorPointFirst = _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointFirstPosition); ColorSpacePoint colorPointSecond = _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointSecondPosition); // Handle inferred points pointFirst.X = float.IsInfinity(colorPointFirst.X) ? 0 : colorPointFirst.X; pointFirst.Y = float.IsInfinity(colorPointFirst.Y) ? 0 : colorPointFirst.Y; pointSecond.X = float.IsInfinity(colorPointSecond.X) ? 0 : colorPointSecond.X; pointSecond.Y = float.IsInfinity(colorPointSecond.Y) ? 
0 : colorPointSecond.Y; // Create a Line using the ColorSpacePoints Line line = new Line { X1 = pointFirst.X, Y1 = pointFirst.Y, X2 = pointSecond.X, Y2 = pointSecond.Y, StrokeThickness = 8, //Stroke = new SolidColorBrush(Colors.Red) Stroke = bodyBrush }; canvas.Children.Add(line); } // Convert a ColorFrame to an ImageSource private ImageSource ToBitmap(ColorFrame frame) { int width = frame.FrameDescription.Width; int height = frame.FrameDescription.Height; PixelFormat format = PixelFormats.Bgr32; byte[] pixels = new byte[width * height * ((format.BitsPerPixel + 7) / 8)]; if (frame.RawColorImageFormat == ColorImageFormat.Bgra) { frame.CopyRawFrameDataToArray(pixels); } else { frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra); } int stride = width * format.BitsPerPixel / 8; return BitmapSource.Create(width, height, 96, 96, format, null, pixels, stride); } // Convert a DepthFrame to an ImageSource private ImageSource ToBitmap(DepthFrame frame) { int width = frame.FrameDescription.Width; int height = frame.FrameDescription.Height; PixelFormat format = PixelFormats.Bgr32; ushort minDepth = frame.DepthMinReliableDistance; ushort maxDepth = frame.DepthMaxReliableDistance; ushort[] depthData = new ushort[width * height]; byte[] pixelData = new byte[width * height * (format.BitsPerPixel + 7) / 8]; frame.CopyFrameDataToArray(depthData); int colorIndex = 0; for (int depthIndex = 0; depthIndex < depthData.Length; ++depthIndex) { ushort depth = depthData[depthIndex]; byte intensity = (byte)(depth >= minDepth && depth <= maxDepth ? depth : 0); pixelData[colorIndex++] = intensity; // Blue pixelData[colorIndex++] = intensity; // Green pixelData[colorIndex++] = intensity; // Red ++colorIndex; } int stride = width * format.BitsPerPixel / 8; return BitmapSource.Create(width, height, 96, 96, format, null, pixelData, stride); } // Convert an InfraredFrame to an ImageSource private ImageSource ToBitmap(InfraredFrame frame) { int width = frame.FrameDescription.Width; int height = frame.FrameDescription.Height; PixelFormat format = PixelFormats.Bgr32; ushort[] infraredData = new ushort[width * height]; byte[] pixelData = new byte[width * height * (format.BitsPerPixel + 7) / 8]; frame.CopyFrameDataToArray(infraredData); int colorIndex = 0; for (int infraredIndex = 0; infraredIndex < infraredData.Length; ++infraredIndex) { ushort ir = infraredData[infraredIndex]; byte intensity = (byte)(ir >> 8); pixelData[colorIndex++] = intensity; // Blue pixelData[colorIndex++] = intensity; // Green pixelData[colorIndex++] = intensity; // Red ++colorIndex; } int stride = width * format.BitsPerPixel / 8; return BitmapSource.Create(width, height, 96, 96, format, null, pixelData, stride); } // Converts rotation quaternion to Euler angles // And then maps them to a specified range of values to control the refresh rate // <param name="rotQuaternion">face rotation quaternion</param> // <param name="pitch">rotation about the X-axis</param> // <param name="yaw">rotation about the Y-axis</param> // <param name="roll">rotation about the Z-axis</param> private static void ExtractFaceRotationInDegrees(Vector4 rotQuaternion, out int pitch, out int yaw, out int roll) { double x = rotQuaternion.X; double y = rotQuaternion.Y; double z = rotQuaternion.Z; double w = rotQuaternion.W; // convert face rotation quaternion to Euler angles in degrees double yawD, pitchD, rollD; pitchD = Math.Atan2(2 * ((y * z) + (w * x)), (w * w) - (x * x) - (y * y) + (z * z)) / Math.PI * 180.0; yawD = Math.Asin(2 * ((w * y) - (x * z))) / Math.PI * 
180.0; rollD = Math.Atan2(2 * ((x * y) + (w * z)), (w * w) + (x * x) - (y * y) - (z * z)) / Math.PI * 180.0; // clamp the values to a multiple of the specified increment to control the refresh rate double increment = FaceRotationIncrementInDegrees; pitch = (int)(Math.Floor((pitchD + ((increment / 2.0) * (pitchD > 0 ? 1.0 : -1.0))) / increment) * increment); yaw = (int)(Math.Floor((yawD + ((increment / 2.0) * (yawD > 0 ? 1.0 : -1.0))) / increment) * increment); roll = (int)(Math.Floor((rollD + ((increment / 2.0) * (rollD > 0 ? 1.0 : -1.0))) / increment) * increment); } // Draws indicators to show which edges are clipping body data private void DrawClippedEdges(Body body) { FrameEdges clippedEdges = body.ClippedEdges; if (clippedEdges.HasFlag(FrameEdges.Bottom)) { Line line = new Line { X1 = 0, Y1 = displayHeight, X2 = displayWidth, Y2 = displayHeight, StrokeThickness = 10, Stroke = new SolidColorBrush(Colors.Red) }; canvas.Children.Add(line); } if (clippedEdges.HasFlag(FrameEdges.Top)) { Line line = new Line { X1 = 0, Y1 = 0, X2 = displayWidth, Y2 = 0, StrokeThickness = 10, Stroke = new SolidColorBrush(Colors.Red) }; canvas.Children.Add(line); } if (clippedEdges.HasFlag(FrameEdges.Left)) { Line line = new Line { X1 = 0, Y1 = 0, X2 = 0, Y2 = displayHeight, StrokeThickness = 10, Stroke = new SolidColorBrush(Colors.Red) }; canvas.Children.Add(line); } if (clippedEdges.HasFlag(FrameEdges.Right)) { Line line = new Line { X1 = displayWidth, Y1 = 0, X2 = displayWidth, Y2 = displayHeight, StrokeThickness = 10, Stroke = new SolidColorBrush(Colors.Red) }; canvas.Children.Add(line); } } } }

Here is a screenshot illustrating the program's output:
The face bounding box is drawn along with the skeleton, and the face properties are written below the face bounding box. In this application, up to 6 users' skeletons and face bounding boxes can be tracked.