The Kinect for XBox One Workshop
Displaying Body Skeletons
For each tracked Body, the skeletal data returned by the Kinect sensor contains a dictionary of 25 Joints, keyed by the JointType enumeration. Each Joint carries a Position and a TrackingState; joint orientations are available separately through the Body's JointOrientations dictionary. The Joint Positions are used to draw the lines representing the bones that make up the skeleton for that Body.
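For example, an individual joint can be looked up in the Joints dictionary by its JointType. A minimal sketch, assuming a tracked Body instance named body:

```csharp
// Look up a single joint by name in the Body's Joints dictionary
Joint head = body.Joints[JointType.Head];

// Each Joint carries a 3D CameraSpace position and a TrackingState
// of Tracked, Inferred, or NotTracked
CameraSpacePoint position = head.Position;
if (head.TrackingState == TrackingState.Tracked)
{
    Console.WriteLine($"Head at ({position.X:F2}, {position.Y:F2}, {position.Z:F2})");
}

// Orientations live in a parallel dictionary on the Body
JointOrientation headOrientation = body.JointOrientations[JointType.Head];
```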
The Joint data is defined in a right-handed coordinate system with its origin at the Kinect sensor: the positive Z-axis points outward from the sensor, the X-axis is positive to the sensor's left, and the Y-axis is positive upward. Positions are expressed in meters.
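Because CameraSpace positions are in meters, the straight-line distance of a joint from the sensor follows directly from its coordinates. A small sketch, assuming a Joint value named joint:

```csharp
CameraSpacePoint p = joint.Position;

// Euclidean distance from the sensor origin, in meters
double distance = Math.Sqrt(p.X * p.X + p.Y * p.Y + p.Z * p.Z);
```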
As in the previous program, a MultiSourceFrameReader can be used to obtain the data for up to 6 Bodies identified and tracked by the Kinect sensor.
The skeletal data is drawn on a Canvas control named canvas, which overlays the Image control named camera. Coordinate mapping, provided by the sensor's CoordinateMapper, projects the body skeleton data, stored in 3D CameraSpace coordinates, onto the corresponding 2D ColorSpace image coordinates.
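The mapping step itself is a single call. A sketch, assuming the open sensor is held in _sensor and a Joint value named joint:

```csharp
// Map a 3D CameraSpace joint position onto the 2D color image
CameraSpacePoint cameraPoint = joint.Position;
ColorSpacePoint colorPoint =
    _sensor.CoordinateMapper.MapCameraPointToColorSpace(cameraPoint);

// The mapper returns infinite coordinates when a point cannot be mapped
if (!float.IsInfinity(colorPoint.X) && !float.IsInfinity(colorPoint.Y))
{
    // colorPoint.X and colorPoint.Y are pixel coordinates on the color frame
}
```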
To build this application, start Visual Studio and create a new C#/WPF Application named BodyStream. The completed Visual Studio solution file is here: BodyStream.zip.
Add a Reference to Assemblies/Extensions/Microsoft.Kinect 2.0.
In MainWindow.xaml add an Image control named camera and Canvas control named canvas.
Here is the code contained in MainWindow.xaml:
<Window x:Class="BodyStream.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:BodyStream"
        mc:Ignorable="d"
        Title="Body Stream" Height="350" Width="525">
    <Viewbox>
        <Grid>
            <Image Name="camera"/>
            <Canvas Name="canvas"/>
        </Grid>
    </Viewbox>
</Window>
Here is the code contained in MainWindow.xaml.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;

namespace BodyStream
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        KinectSensor _sensor;
        MultiSourceFrameReader _reader;
        IList<Body> _bodies;

        public MainWindow()
        {
            InitializeComponent();

            // Obtain the sensor and start it up
            _sensor = KinectSensor.GetDefault();

            if (_sensor != null)
            {
                _sensor.Open();

                // Specify the required streams
                _reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
                                                             FrameSourceTypes.Body);

                // Add an event handler
                _reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
            }
        }

        void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
        {
            // Get a reference to the multi-frame
            var reference = e.FrameReference.AcquireFrame();

            // Open the color frame
            using (var frame = reference.ColorFrameReference.AcquireFrame())
            {
                if (frame != null)
                {
                    camera.Source = ToBitmap(frame);
                }
            }

            // Open the body frame
            using (var frame = reference.BodyFrameReference.AcquireFrame())
            {
                if (frame != null)
                {
                    canvas.Children.Clear();

                    _bodies = new Body[frame.BodyFrameSource.BodyCount];
                    frame.GetAndRefreshBodyData(_bodies);

                    foreach (var body in _bodies)
                    {
                        if (body.IsTracked)
                        {
                            DrawSkeleton(body);
                        }
                    }
                }
            }
        }

        public void DrawSkeleton(Body body)
        {
            if (body == null) return;

            // Draw the joints
            foreach (Joint joint in body.Joints.Values)
            {
                DrawJoint(joint);
            }

            // Draw the bones
            // Head and torso
            DrawLine(body.Joints[JointType.Head], body.Joints[JointType.Neck]);
            DrawLine(body.Joints[JointType.Neck], body.Joints[JointType.SpineShoulder]);
            DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.ShoulderLeft]);
            DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.ShoulderRight]);
            DrawLine(body.Joints[JointType.SpineShoulder], body.Joints[JointType.SpineMid]);
            DrawLine(body.Joints[JointType.SpineMid], body.Joints[JointType.SpineBase]);

            // Arms and hands
            DrawLine(body.Joints[JointType.ShoulderLeft], body.Joints[JointType.ElbowLeft]);
            DrawLine(body.Joints[JointType.ShoulderRight], body.Joints[JointType.ElbowRight]);
            DrawLine(body.Joints[JointType.ElbowLeft], body.Joints[JointType.WristLeft]);
            DrawLine(body.Joints[JointType.ElbowRight], body.Joints[JointType.WristRight]);
            DrawLine(body.Joints[JointType.WristLeft], body.Joints[JointType.HandLeft]);
            DrawLine(body.Joints[JointType.WristRight], body.Joints[JointType.HandRight]);
            DrawLine(body.Joints[JointType.HandLeft], body.Joints[JointType.HandTipLeft]);
            DrawLine(body.Joints[JointType.HandRight], body.Joints[JointType.HandTipRight]);
            // The thumbs attach at the wrists, not the hand tips
            DrawLine(body.Joints[JointType.WristLeft], body.Joints[JointType.ThumbLeft]);
            DrawLine(body.Joints[JointType.WristRight], body.Joints[JointType.ThumbRight]);

            // Legs and feet
            DrawLine(body.Joints[JointType.SpineBase], body.Joints[JointType.HipLeft]);
            DrawLine(body.Joints[JointType.SpineBase], body.Joints[JointType.HipRight]);
            DrawLine(body.Joints[JointType.HipLeft], body.Joints[JointType.KneeLeft]);
            DrawLine(body.Joints[JointType.HipRight], body.Joints[JointType.KneeRight]);
            DrawLine(body.Joints[JointType.KneeLeft], body.Joints[JointType.AnkleLeft]);
            DrawLine(body.Joints[JointType.KneeRight], body.Joints[JointType.AnkleRight]);
            DrawLine(body.Joints[JointType.AnkleLeft], body.Joints[JointType.FootLeft]);
            DrawLine(body.Joints[JointType.AnkleRight], body.Joints[JointType.FootRight]);
        }

        public void DrawJoint(Joint joint)
        {
            if (joint.TrackingState == TrackingState.Tracked)
            {
                // 3D space point
                CameraSpacePoint jointPosition = joint.Position;

                // 2D space point
                Point point = new Point();
                ColorSpacePoint colorPoint =
                    _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointPosition);

                // Guard against points that could not be mapped
                point.X = float.IsInfinity(colorPoint.X) ? 0 : colorPoint.X;
                point.Y = float.IsInfinity(colorPoint.Y) ? 0 : colorPoint.Y;

                // Draw an ellipse, centered on the joint
                Ellipse ellipse = new Ellipse
                {
                    Fill = Brushes.Red,
                    Width = 30,
                    Height = 30
                };

                Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
                Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);

                canvas.Children.Add(ellipse);
            }
        }

        public void DrawLine(Joint first, Joint second)
        {
            if (first.TrackingState == TrackingState.NotTracked ||
                second.TrackingState == TrackingState.NotTracked) return;

            // Joint data is in 3D CameraSpace coordinates
            CameraSpacePoint jointFirstPosition = first.Position;
            CameraSpacePoint jointSecondPosition = second.Position;

            // 2D space points in XY coordinates
            Point pointFirst = new Point();
            Point pointSecond = new Point();

            // Apply coordinate mapping - here mapping to ColorSpace
            ColorSpacePoint colorPointFirst =
                _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointFirstPosition);
            ColorSpacePoint colorPointSecond =
                _sensor.CoordinateMapper.MapCameraPointToColorSpace(jointSecondPosition);

            // Guard against points that could not be mapped
            pointFirst.X = float.IsInfinity(colorPointFirst.X) ? 0 : colorPointFirst.X;
            pointFirst.Y = float.IsInfinity(colorPointFirst.Y) ? 0 : colorPointFirst.Y;
            pointSecond.X = float.IsInfinity(colorPointSecond.X) ? 0 : colorPointSecond.X;
            pointSecond.Y = float.IsInfinity(colorPointSecond.Y) ? 0 : colorPointSecond.Y;

            // Create a Line using the ColorSpacePoints
            Line line = new Line
            {
                X1 = pointFirst.X,
                Y1 = pointFirst.Y,
                X2 = pointSecond.X,
                Y2 = pointSecond.Y,
                StrokeThickness = 8,
                Stroke = new SolidColorBrush(Colors.Red)
            };

            canvas.Children.Add(line);
        }

        // Convert a ColorFrame to an ImageSource
        private ImageSource ToBitmap(ColorFrame frame)
        {
            int width = frame.FrameDescription.Width;
            int height = frame.FrameDescription.Height;
            PixelFormat format = PixelFormats.Bgr32;

            byte[] pixels = new byte[width * height * ((format.BitsPerPixel + 7) / 8)];

            if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
            {
                frame.CopyRawFrameDataToArray(pixels);
            }
            else
            {
                frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);
            }

            int stride = width * format.BitsPerPixel / 8;

            return BitmapSource.Create(width, height, 96, 96, format, null, pixels, stride);
        }

        // Convert a DepthFrame to an ImageSource
        private ImageSource ToBitmap(DepthFrame frame)
        {
            int width = frame.FrameDescription.Width;
            int height = frame.FrameDescription.Height;
            PixelFormat format = PixelFormats.Bgr32;

            ushort minDepth = frame.DepthMinReliableDistance;
            ushort maxDepth = frame.DepthMaxReliableDistance;

            ushort[] depthData = new ushort[width * height];
            byte[] pixelData = new byte[width * height * ((format.BitsPerPixel + 7) / 8)];

            frame.CopyFrameDataToArray(depthData);

            int colorIndex = 0;
            for (int depthIndex = 0; depthIndex < depthData.Length; ++depthIndex)
            {
                ushort depth = depthData[depthIndex];

                // Keep only reliable depths; the cast keeps the low byte as a gray level
                byte intensity = (byte)(depth >= minDepth && depth <= maxDepth ? depth : 0);

                pixelData[colorIndex++] = intensity; // Blue
                pixelData[colorIndex++] = intensity; // Green
                pixelData[colorIndex++] = intensity; // Red

                ++colorIndex; // Skip the unused alpha byte
            }

            int stride = width * format.BitsPerPixel / 8;

            return BitmapSource.Create(width, height, 96, 96, format, null, pixelData, stride);
        }

        // Convert an InfraredFrame to an ImageSource
        private ImageSource ToBitmap(InfraredFrame frame)
        {
            int width = frame.FrameDescription.Width;
            int height = frame.FrameDescription.Height;
            PixelFormat format = PixelFormats.Bgr32;

            ushort[] infraredData = new ushort[width * height];
            byte[] pixelData = new byte[width * height * ((format.BitsPerPixel + 7) / 8)];

            frame.CopyFrameDataToArray(infraredData);

            int colorIndex = 0;
            for (int infraredIndex = 0; infraredIndex < infraredData.Length; ++infraredIndex)
            {
                ushort ir = infraredData[infraredIndex];

                // Use the high byte of the 16-bit infrared value as a gray level
                byte intensity = (byte)(ir >> 8);

                pixelData[colorIndex++] = intensity; // Blue
                pixelData[colorIndex++] = intensity; // Green
                pixelData[colorIndex++] = intensity; // Red

                ++colorIndex; // Skip the unused alpha byte
            }

            int stride = width * format.BitsPerPixel / 8;

            return BitmapSource.Create(width, height, 96, 96, format, null, pixelData, stride);
        }
    }
}
Here is a screen shot of the result:
In this application the full 3D body skeleton data is available. That data can be visualized, as done here, or written to a file for later use, perhaps to be read by a MATLAB script.
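As a sketch of the file option, the tracked joint positions could be appended to a CSV file, one row per joint. The method name and file format below are illustrative, not part of the original application:

```csharp
using System.Globalization;
using System.IO;

// Append one CSV row per tracked joint: joint id, X, Y, Z (meters)
void LogJoints(Body body, string path)
{
    using (StreamWriter writer = File.AppendText(path))
    {
        foreach (Joint joint in body.Joints.Values)
        {
            if (joint.TrackingState == TrackingState.NotTracked) continue;

            CameraSpacePoint p = joint.Position;
            writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "{0},{1:F4},{2:F4},{3:F4}", (int)joint.JointType, p.X, p.Y, p.Z));
        }
    }
}
```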
Up to 6 bodies can be tracked at once. In that case it can be helpful to draw each skeleton in a different color; one possibility is to use each body's TrackingId property to select the color. In this application every skeleton is drawn in red.
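One way to derive a stable per-body color is to index a small palette by the TrackingId. A sketch; the palette choice is arbitrary:

```csharp
// A fixed palette, one entry per possible tracked body
static readonly Brush[] _palette =
{
    Brushes.Red, Brushes.Green, Brushes.Blue,
    Brushes.Orange, Brushes.Purple, Brushes.Cyan
};

// TrackingId is a ulong that stays constant while a body is tracked,
// so the same body keeps the same color from frame to frame
Brush BrushForBody(Body body)
{
    return _palette[(int)(body.TrackingId % (ulong)_palette.Length)];
}
```

The returned Brush could then be used in place of Brushes.Red in DrawJoint and DrawLine.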