Kinect v2 Speech Recognition sample–open sourced!

Good news.

If you are new to Kinect for Windows v2 development, I have posted my Speech Recognition sample code on GitHub.

The sample demonstrates the Kinect for Windows v2 speech recognition capabilities. It shows how to set up the Kinect speech recognition initializers, add a grammar, and perform an action when speech is recognized.
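
To give an idea of the moving parts, here is a hedged sketch of the kind of setup the sample performs; the grammar words, the 0.5 confidence threshold and the `ConvertStream` helper (the SDK samples use a similar wrapper to convert the sensor's 32-bit float audio to PCM) are illustrative, not necessarily what the repository does:

```csharp
using System;
using System.IO;
using System.Linq;
using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

// Sketch only: assumes the Kinect v2 SDK and the Microsoft Speech runtime are installed.
KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

RecognizerInfo recognizer = SpeechRecognitionEngine.InstalledRecognizers()
    .FirstOrDefault(r => r.Culture.Name == "en-US");

var speechEngine = new SpeechRecognitionEngine(recognizer.Id);

// Build a tiny grammar: the phrases the app should react to.
var commands = new Choices();
commands.Add(new SemanticResultValue("red", "RED"));
commands.Add(new SemanticResultValue("green", "GREEN"));

var gb = new GrammarBuilder { Culture = recognizer.Culture };
gb.Append(commands);
speechEngine.LoadGrammar(new Grammar(gb));

// Perform an action when speech is recognized with enough confidence.
speechEngine.SpeechRecognized += (s, e) =>
{
    if (e.Result.Confidence > 0.5)
        Console.WriteLine("Heard: " + e.Result.Semantics.Value);
};

// Feed the sensor's audio beam to the engine; ConvertStream is a hypothetical
// wrapper converting the beam's 32-bit IEEE float samples to 16-bit PCM.
Stream audioBeam = sensor.AudioSource.AudioBeams[0].OpenInputStream();
speechEngine.SetInputToAudioStream(ConvertStream(audioBeam),
    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
speechEngine.RecognizeAsync(RecognizeMode.Multiple);
```

See the repository for the real initializer and grammar code.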

You can find the repository on GitHub.

You can find the sample video below.

Windows Store 8.1 apps, Instagram API and Callback URL

While I was looking to use the Instagram API in a Windows Store 8.1 app I had to deal with the WebAuthenticationBroker and the callback URL. Before I go any further: back in Windows Store 8 apps, the callback URL could simply be http://localhost, with the token required to make API calls appended to the end, usually in the form http://localhost?token=XXXXX.

In Windows Store 8.1 this has changed, which means http://localhost no longer works. In order to get the WebAuthenticationBroker to redirect properly, your app needs to be registered in the Windows Dev Center to get a valid Package SID. You can do this by going to the Windows Dev Center and, under your app's name, clicking the Live Services site.


After creating the new App Secret, you get a Package SID.


Your Package SID is in the form of ms-app://s-XXXXX

Now this ms-app:// URI needs to be registered as the Callback URL while registering a new client with Instagram.


After you have registered your Client app with Instagram, you will see a screen like this with all the information.


Then all you need to do is call the WebAuthenticationBroker like this:


This will get your client app the access_token required to make subsequent calls to the Instagram API.
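
For reference, a minimal sketch of the WebAuthenticationBroker call might look like the following (inside an async method; YOUR_CLIENT_ID and the token parsing are illustrative, and Instagram's OAuth endpoint and parameters are as documented at the time):

```csharp
using System;
using Windows.Security.Authentication.Web;

var callbackUri = new Uri("ms-app://s-XXXXX"); // your Package SID

var requestUri = new Uri(
    "https://api.instagram.com/oauth/authorize/?client_id=YOUR_CLIENT_ID" +
    "&redirect_uri=" + Uri.EscapeDataString(callbackUri.ToString()) +
    "&response_type=token");

WebAuthenticationResult result = await WebAuthenticationBroker.AuthenticateAsync(
    WebAuthenticationOptions.None, requestUri, callbackUri);

if (result.ResponseStatus == WebAuthenticationStatus.Success)
{
    // ResponseData holds the redirect, e.g. ms-app://...#access_token=XXXXX
    string data = result.ResponseData;
    string token = data.Substring(
        data.IndexOf("access_token=") + "access_token=".Length);
}
```
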

Did you find it helpful? Let me know in the comments.

XAML Spy–A Must Have Utility For Windows phone Developers (and Windows Store, Xamarin.Forms)

I am just writing to let you know how great XAML Spy is; it's a must-have tool if you are developing for Windows Phone, WPF and other platforms. Recently I was developing a Windows Phone application, and when I went looking for a utility to help me find those little user interface details on the phone or in the emulator, nothing matched the awesomeness of XAML Spy.

In addition to the great number of features, I was blown away by the great support from the XAML Spy team on Twitter, who quickly extended my trial license and sent me a new key within minutes (thank you @kozw).

It is fully integrated into Visual Studio, which makes it really easy to use; it is also quick and saves you a lot of time navigating those long visual trees of user interface elements in your markup at runtime.

Go to the XAML Spy website to download it.


Kinect for Windows v2 Deep dive–Part 2

In my previous K4W deep dive post I drew the body and joints in WPF and overlaid them on top of the Color stream. Notice that in that post I am using two Image controls, one to render the Color data and the other for the Body info.

<Grid Margin="10 0 10 0">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Image Source="{Binding ImageSource}" Grid.Row="0" Width="1200" Height="700" Stretch="UniformToFill" />
    <Image Source="{Binding BodyImageSource}" Grid.Row="0" Stretch="UniformToFill" Width="1200" Height="700"/>
</Grid>

Also in that post I talked about a hack to position body tracking drawings ‘properly’ over the color stream.

  jointPoints[jointType] = new Point(depthSpacePoint.X, depthSpacePoint.Y - 80);

In this post I fix both of the above problems, which means I no longer need the 80px offset hack from the previous post, and I use only one Image control to render both the Color and Body data.

Fixing Body tracking offset – Using MapCameraPointToColorSpace

First things first: in the previous post I used a method of the CoordinateMapper class in the Kinect for Windows v2 SDK that takes camera points and maps them to depth space.


The problem is that this method does not accurately translate points to the color frame that we receive from the Kinect sensor, which is why I came up with the hack. Fortunately the CoordinateMapper class has another method that works perfectly for this.


So here's what it looks like when a body frame arrives:

foreach (JointType jointType in joints.Keys)
{
    ColorSpacePoint colorSpacePoint = this.coordinateMapper.MapCameraPointToColorSpace(joints[jointType].Position);
    jointPoints[jointType] = new Point(colorSpacePoint.X, colorSpacePoint.Y);
}

This time I use a ColorSpacePoint to get the X and Y coordinates mapped into color space.

Merge Color and Body frames to use one Image control

Alright, so I got the Body info mapped properly to the Color frames, but one problem remains: how to merge these two frames and use a single Image control to render them.

To do that we use a RenderTargetBitmap to render our body frame drawing data onto the WriteableBitmap we use for the Color stream. Here's how I declare them:

this.bitmap = new WriteableBitmap(frameDescription.Width, frameDescription.Height, 96.0, 96.0, PixelFormats.Bgr32, null);
this.drawingGroup = new DrawingGroup();
_bodySourceRTB = new RenderTargetBitmap(displayWidth, displayHeight, 96.0, 96.0, PixelFormats.Pbgra32);
rootGrid = new Grid();
_colorWriteableBitmap = BitmapFactory.New(frameDescription.Width, frameDescription.Height);
_bodyWriteableBitmap = BitmapFactory.New(frameDescription.Width, frameDescription.Height);

In the above I declare a RenderTargetBitmap for rendering our body joints and lines to a bitmap, a DrawingGroup used to draw those joints, and two WriteableBitmaps, one for the Color stream and one for the Body stream; the Color WriteableBitmap is the destination of the merge and the Body bitmap is the source. I initialize the WriteableBitmaps with the BitmapFactory.New() method of the library I describe below.

I also declare a dynamic Grid control which is used to hold an Image control; that image holds the body joints and lines drawings below.

this.drawingGroup.ClipGeometry = new RectangleGeometry(new Rect(0.0, 0.0, this.displayWidth, this.displayHeight));
bodyImage = new Image { Source = new DrawingImage(drawingGroup), Width = this.displayWidth, Height = this.displayHeight };
rootGrid.Children.Clear();
rootGrid.Children.Add(bodyImage);
rootGrid.Measure(new Size(bodyImage.Width, bodyImage.Height));
rootGrid.Arrange(new Rect(0, 0, bodyImage.Width, bodyImage.Height));
_bodySourceRTB.Clear();
_bodySourceRTB.Render(rootGrid);
_bodySourceRTB.CopyPixels(this.bodyBytespixels, displayWidth * this.bytesPerPixel, 0);
_bodyWriteableBitmap.FromByteArray(this.bodyBytespixels);

In the code above I use bodyImage to hold my drawings, add it to a Grid control, and render the grid with the RenderTargetBitmap. I use the RTB's CopyPixels method to fill the byte[] bodyBytespixels, which I then use as the source for a WriteableBitmap.

The above runs when a body frame is available to read. In the Color FrameArrived event I use the pixels byte array that is available to me to fill the Color WriteableBitmap.

_colorWriteableBitmap.FromByteArray(this.pixels);
var rec = new Rect(0, 0, frameDescription.Width, frameDescription.Height);
using (_colorWriteableBitmap.GetBitmapContext())
{
    using (_bodyWriteableBitmap.GetBitmapContext())
    {
        _colorWriteableBitmap.Blit(rec, _bodyWriteableBitmap, rec, WriteableBitmapExtensions.BlendMode.Additive);
    }
}

Also in the above code I am merging the two WriteableBitmaps using the helpful open source WriteableBitmapEx library, which has many useful extensions; the method I am using here is Blit.

In XAML I have only one Image control stretched to the full 1920×1080 pixels.

<Window x:Class="BodyColorSource.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Color Basics"
        Height="1080" Width="1920"
        Loaded="MainWindow_Loaded"
        Closing="MainWindow_Closing">
    <Window.Resources>
        <Style TargetType="{x:Type Image}">
            <Setter Property="SnapsToDevicePixels" Value="True" />
        </Style>
    </Window.Resources>
    <Grid Margin="0 0 10 0">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <Image Grid.Row="0" Stretch="UniformToFill" Name="Image" />
    </Grid>
</Window>

I then use the Color WriteableBitmap as the source of my Image control:

   Image.Source = _colorWriteableBitmap;   

Here’s the video

Kinect for Windows v2 deep dive
Also see:

KINECT for Windows v2 SDK Deep dive

Update: I just posted the 2nd part of this series

A few weeks ago I received my Kinect for Windows version 2 and the private SDK, so I finally got to try it out. The new Kinect ships with many improvements over v1, such as a Full HD camera, thumb and hand open/close detection, a better microphone, improved infrared, and support for several applications using the sensor at the same time.


Deep dive

In this blog post I will show how to read Body source and draw Bones, Hands and Joints over the Color source received from the Kinect Sensor.


“This is preliminary software and/or hardware and APIs are preliminary and subject to change.”

In the constructor of our WPF app, the code gets two important sources, the Body frame and the Color frame, and reads the Width and Height from the depth sensor's frame description. It then opens both readers to start receiving frames.

FrameDescription frameDescription = this.kinectSensor.ColorFrameSource.FrameDescription;
FrameDescription bodyFrameDescription = this.kinectSensor.DepthFrameSource.FrameDescription;
this.displayWidth = bodyFrameDescription.Width;
this.displayHeight = bodyFrameDescription.Height;
this.bodies = new Body[this.kinectSensor.BodyFrameSource.BodyCount];
this.colorFrameReader = this.kinectSensor.ColorFrameSource.OpenReader();
this.bodyFrameReader = this.kinectSensor.BodyFrameSource.OpenReader();
The MainWindow.xaml contains a Grid with two Image elements for Color and Body data.

<Grid Margin="10 0 10 0">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Image Source="{Binding ImageSource}" Grid.Row="0" Width="1200" Height="700" Stretch="UniformToFill" />
    <Image Source="{Binding BodyImageSource}" Grid.Row="0" Stretch="UniformToFill" Width="1200" Height="700"/>
</Grid>

We subscribe to the FrameArrived events of both readers in the Loaded event of our app:

private void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
    if (this.colorFrameReader != null)
    {
        this.colorFrameReader.FrameArrived += this.ColorFrameReaderFrameArrived;
    }

    if (this.bodyFrameReader != null)
    {
        this.bodyFrameReader.FrameArrived += this.BodyFrameReaderFrameArrived;
    }
}

The color FrameArrived event handler acquires a frame and validates it before converting it to a byte array and writing it to the WriteableBitmap, which the Image element in our XAML uses to display the color stream.

private void ColorFrameReaderFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    ColorFrameReference frameReference = e.FrameReference;

    try
    {
        ColorFrame frame = frameReference.AcquireFrame();

        if (frame != null)
        {
            // ColorFrame is IDisposable
            using (frame)
            {
                FrameDescription frameDescription = frame.FrameDescription;

                // verify data and write the new color frame data to the display bitmap
                if ((frameDescription.Width == this.bitmap.PixelWidth) && (frameDescription.Height == this.bitmap.PixelHeight))
                {
                    if (frame.RawColorImageFormat == ColorImageFormat.Bgra)
                    {
                        frame.CopyRawFrameDataToArray(this.pixels);
                    }
                    else
                    {
                        frame.CopyConvertedFrameDataToArray(this.pixels, ColorImageFormat.Bgra);
                    }

                    this.bitmap.WritePixels(
                        new Int32Rect(0, 0, frameDescription.Width, frameDescription.Height),
                        this.pixels,
                        frameDescription.Width * this.bytesPerPixel,
                        0);
                }
            }
        }
    }
    catch (Exception)
    {
        // ignore if the frame is no longer available
    }
}

The body frame arrived event handler is the most interesting one. The code uses a DrawingContext to draw a rectangle; our Body frame data will be drawn within this rectangle. We then get the Body data from the Kinect sensor. Because the Kinect can detect up to 6 bodies at the same time, the code loops through each body object to check whether it can read body joint information from the sensor before doing something useful with it.

If the Kinect is able to track a body, the code loops through each Joint and uses the CoordinateMapper to get X and Y coordinates for each joint, which it then uses to draw the body and hand joints. Note that I cheat a little bit when mapping the joint points: I fix the vertical position of my drawing by subtracting 80px from the Y coordinate.

private void BodyFrameReaderFrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    BodyFrameReference frameReference = e.FrameReference;

    try
    {
        BodyFrame frame = frameReference.AcquireFrame();

        if (frame != null)
        {
            // BodyFrame is IDisposable
            using (frame)
            {
                using (DrawingContext dc = this.drawingGroup.Open())
                {
                    // Draw a transparent background to set the render size
                    dc.DrawRectangle(Brushes.Transparent, null, new Rect(0.0, 0.0, this.displayWidth, this.displayHeight));

                    // The first time GetAndRefreshBodyData is called, Kinect will allocate each Body in the array.
                    // As long as those body objects are not disposed and not set to null in the array,
                    // those body objects will be re-used.
                    frame.GetAndRefreshBodyData(this.bodies);

                    foreach (Body body in this.bodies)
                    {
                        if (body.IsTracked)
                        {
                            IReadOnlyDictionary<JointType, Joint> joints = body.Joints;

                            // convert the joint points to depth (display) space
                            Dictionary<JointType, Point> jointPoints = new Dictionary<JointType, Point>();
                            foreach (JointType jointType in joints.Keys)
                            {
                                DepthSpacePoint depthSpacePoint = this.coordinateMapper.MapCameraPointToDepthSpace(joints[jointType].Position);
                                // the 80px offset hack to fix the vertical position
                                jointPoints[jointType] = new Point(depthSpacePoint.X, depthSpacePoint.Y - 80);
                            }

                            this.DrawBody(joints, jointPoints, dc);

                            this.DrawHand(body.HandLeftState, jointPoints[JointType.HandLeft], dc);
                            this.DrawHand(body.HandRightState, jointPoints[JointType.HandRight], dc);
                        }
                    }

                    // prevent drawing outside of our render area
                    this.drawingGroup.ClipGeometry = new RectangleGeometry(new Rect(0.0, 0.0, this.displayWidth, this.displayHeight));
                }
            }
        }
    }
    catch (Exception)
    {
        // ignore if the frame is no longer available
    }
}

Notice that for both hands the Kinect SDK gives us a state, which the code uses to draw red/green ellipses:

private void DrawHand(HandState handState, Point handPosition, DrawingContext drawingContext)
{
    switch (handState)
    {
        case HandState.Closed:
            drawingContext.DrawEllipse(this.handClosedBrush, null, handPosition, HandSize, HandSize);
            break;

        case HandState.Open:
            drawingContext.DrawEllipse(this.handOpenBrush, null, handPosition, HandSize, HandSize);
            break;

        case HandState.Lasso:
            drawingContext.DrawEllipse(this.handLassoBrush, null, handPosition, HandSize, HandSize);
            break;
    }
}

That’s all I had to do to draw body joints over a camera stream from the Kinect sensor.

Also see:

How to paste code in your blog post with Visual Studio Dark Color Theme

I use a dark theme in Visual Studio; you can search online to find one.



and I also post code from VS to this blog. I use Windows Live Writer for blogging, which is an awesome tool, and it comes with plugins too. There are many code plugins for Windows Live Writer, but surprisingly none of them supports the dark theme.


I am using this plugin, and let's assume I have the following C# code snippet that I want to post on this blog.


Using the above plugin, it will render markup like the following. The good thing is that it embeds CSS styles in the markup, which makes them easy to change; the default styles are:



And this is what I need to do to get the dark theme, sort of.


I changed the font-size, color and background-color properties in the .csharpcode pre style that the Insert Code plugin generated for me; I also changed the .kwrd and .str styles to a different color. I chose to embed these styles into the cascading stylesheet of this blog because I post code snippets very often.

I also wanted a horizontal scrollbar for my code, so I added a style attribute with overflow-x set to scroll on the container div element.
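
Since the screenshots of the modified styles are not shown here, the overrides look roughly like this; the hex values are illustrative, while the class names (.csharpcode, .kwrd, .str) are the ones the Insert Code plugin emits:

```css
/* Illustrative values; tweak to match the VS dark palette you use. */
.csharpcode, .csharpcode pre {
    font-size: 13px;
    color: #dcdcdc;            /* light text */
    background-color: #1e1e1e; /* dark editor background */
}

.csharpcode .kwrd { color: #569cd6; } /* keywords */
.csharpcode .str  { color: #d69d85; } /* strings */

/* plus overflow-x: scroll set inline on the container div */
```
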


So here’s the final code snippet with the dark theme that I’m interested in.

public void On_commands_requested(SettingsPane sender, SettingsPaneCommandsRequestedEventArgs args)
{
    var cmd = new SettingsCommand("RSSReader", "Manage Categories", x =>
    {
        var settingsPanel = new Settings();
        settingsPanel.Show();
    });

    var privacyCmd = new SettingsCommand("PrivacyPolicy", "Privacy Policy", x =>
    {
        var privacyPolicyFlyout = new PrivacyPolicy {Width = 400};
        privacyPolicyFlyout.Show();
    });

    args.Request.ApplicationCommands.Clear();
    args.Request.ApplicationCommands.Add(cmd);
    args.Request.ApplicationCommands.Add(privacyCmd);
}

Which plugins do you use dear reader?

Kinect for Windows v2 Development Kit program

I am excited as I write this to tell you that I have been chosen for the upcoming Kinect for Windows v2 developer kit program. This means access to an alpha sensor, the pre-release Software Development Kit (SDK), and the final release sensor when it launches.

Here are the full details of the program.

Kinect for Windows development kit program

Alongside the new SDK, they've announced the Kinect for Windows development kit program for a fee of $399. Developers who are selected for the program are granted early access to everything they need to start developing for Kinect for Windows v2.

The program includes these things –

  • Direct access to the engineering team via private forum & exclusive webcasts
  • Early SDK access (alpha, beta, …, until final release)
  • Private access to the API & sample documentation
  • Pre-release/alpha sensor
  • Final release sensor when it gets launched

$399 might seem like a lot, but if you take a look at what you get and consider that K4W v1 hit the shelves for $250, it makes a lot of sense, and next to that you get a head start.
I applied to the program and hope to get my hands on one so I can provide some tutorials on how to use the new sensor. I'll keep you posted!

You need to apply here before the 31st of July; the program will begin in November 2013 along with the launch of Xbox One. Kinect for Windows v2 will hit general availability in 2014.

Note that it is not guaranteed that you will get into the program; you have to be selected.


Retargeting Windows 8 RSS Reader to Windows 8.1

Recently I installed Windows 8.1 RTM and Visual Studio 2013 Release candidate and decided to upgrade my @Win8RSSReader open source app to Windows 8.1. I checked in the first batch of changes to the source code, though it’s a long way from the finish line.

There are a number of changes in Windows 8.1, including page lifecycle improvements, visual states and snapped view; see the Windows App Builder blog. There is also a retargeting guide on MSDN.

In the first step I retargeted the start page, ItemsPage.xaml, so here are a few things I did.

Opening up the solution in Visual Studio 2013 you get the option to retarget; notice that the Solution Explorer says (Windows 8). Selecting 'Retarget to Windows 8.1' from the right-click menu modifies the few required settings in the Package.appxmanifest and .csproj files.

Retarget to Windows 8.1

Next up, remove the dependency on the older C++ Runtime package and reference the latest package.

Remove SDK

I imported the NavigationHelper, ObservableDictionary and RelayCommand types from a blank Hub App project template that Visual Studio 2013 RC generated for me.


There were a few other important things to do before launching the app.

That's about it; see the source code on CodePlex.

Developing apps for Agent SmartWatch in C#

Not long ago SecretLabs, creator of the Netduino boards, launched a crowdfunding project to raise funding for the Agent SmartWatch.

C# developers can write apps and watch 'faces' for it using the .NET Micro Framework and Visual Studio 2012. It comes with its own emulator to create and debug apps, so you don't need the watch to begin writing apps.

To get started this is all you need:

  • Download Visual Studio Express 2012
  • Download the .NET Micro Framework SDK v4.3
  • Download the AGENT SDK v0.1.1 (June 2013, Preview Release)

Scott Hanselman gave a sneak peek in a June post, and @roguecode wrote a 3-part series to get started. There is also a post by @mikehole on communicating with Bluetooth devices, a Tetris clone from the Watch App and Watch Face Showcase, and an analog watch face from Ali Sufyan.

Agent comes with its own Big Digits watch face sample code.


After installing the bits, Visual Studio's File > New > Project dialog gets a few more options.


Click OK to create and debug a Watch Face application, and the built-in emulator fires up with a 'Hello World'.


If you have not written a .NET Micro Framework app before, the API may feel a bit low-level, but fortunately there is an open source project that helps.


Hosted on GitHub, it gives you a few helpers that I liked, such as Button and Drawing. Using this and the Tiny Font Tool GUI I wrote a custom watch face that shows a new message on clicking the Middle Right button in the emulator.
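
For a flavor of what NETMF watch face code looks like, here is a hedged sketch of a minimal face that just redraws the time; the font resource name is hypothetical (it is whatever your project's resource designer generates), and this does not use the AGENT helper library or real button wiring:

```csharp
using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation.Media;

public class Program
{
    public static void Main()
    {
        // The display is accessed through a full-screen Bitmap.
        Bitmap display = new Bitmap(Bitmap.MaxWidth, Bitmap.MaxHeight);

        // Hypothetical font resource, e.g. one created with the Tiny Font Tool GUI.
        Font font = Resources.GetFont(Resources.FontResources.MyFont);

        while (true)
        {
            display.Clear();
            display.DrawText(DateTime.Now.ToString("HH:mm"), font, Color.White, 10, 50);
            display.Flush();
            Thread.Sleep(60 * 1000); // redraw once a minute
        }
    }
}
```
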


So get started writing your own apps for your Agent SmartWatch.

Windows 8.1: Building Windows store apps with new Windowing modes

If you are building Windows Store apps targeting the Windows 8 platform, one of the features you can add is support for Snapped view.

What is a Snapped View in Windows 8?

Here’s how MSDN defines it

Snapped view: View states, including snapped view, enables two apps to run simultaneously side by side, so users can truly multitask and stay productive with your app all the time.


To achieve the above in a XAML Windows Store app you'd use the VisualStateManager to create visual states; a state defines how your application 'reacts' to size changes when the user snaps it to the left or right edge of the screen. When an app is in Snapped view, the Windows Runtime applies the 'Snapped' visual state.

Here's the snapped visual state from my open source Windows Store RSS Reader app. I set the snapped page header styles on the Back button and title, narrow the margins on the Categories and All Items buttons, and toggle the visibility of the Grid and List view controls.

<VisualState x:Name="Snapped">
    <Storyboard>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="backButton" Storyboard.TargetProperty="Style">
            <DiscreteObjectKeyFrame KeyTime="0" Value="{StaticResource SnappedBackButtonStyle}"/>
        </ObjectAnimationUsingKeyFrames>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="pageTitle" Storyboard.TargetProperty="Style">
            <DiscreteObjectKeyFrame KeyTime="0" Value="{StaticResource SnappedPageHeaderTextStyle}"/>
        </ObjectAnimationUsingKeyFrames>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="btnCategories" Storyboard.TargetProperty="Margin">
            <DiscreteObjectKeyFrame KeyTime="0" Value="135,12,0,55"/>
        </ObjectAnimationUsingKeyFrames>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="TitleAllItems" Storyboard.TargetProperty="Margin">
            <DiscreteObjectKeyFrame KeyTime="0" Value="10,5,0,33"/>
        </ObjectAnimationUsingKeyFrames>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="itemListScrollViewer" Storyboard.TargetProperty="Visibility">
            <DiscreteObjectKeyFrame KeyTime="0" Value="Visible"/>
        </ObjectAnimationUsingKeyFrames>
        <ObjectAnimationUsingKeyFrames Storyboard.TargetName="itemGridScrollViewer" Storyboard.TargetProperty="Visibility">
            <DiscreteObjectKeyFrame KeyTime="0" Value="Collapsed"/>
        </ObjectAnimationUsingKeyFrames>
    </Storyboard>
</VisualState>

In the above code sample I did not include the FullScreenLandscape, Filled and FullScreenPortrait visual state definitions for brevity, but you can see them in the source code.

Here's how the app looks when snapped to the left edge of the screen.


No Snapped View in Windows 8.1 but..

Windows 8.1 no longer has the default 320-pixel-wide fixed Snapped view, so users can resize their apps the way they want. The default minimum width is 500 pixels, but this can be set to 320 pixels, and there can also be more than one app on a screen. For more info, see the new windowing modes preview documentation.


Adding Windowing Modes in Windows 8.1 store apps with Visual States

To show this in an example in this blog post I’ll do the following

  • Create a XAML/C# Windows 8.1 Hub store app from the built in Blend for Visual Studio 2013 preview template
  • Subscribe to the SizeChanged event and use the ApplicationView class in the Windows.UI.ViewManagement namespace to know when the user puts the app in portrait orientation or on the left/right edge of the screen
  • Create three visual states that set Visibility="Collapsed" on the Hub control, plus one 'Normal' full-screen visual state
  • Call visual states in each windowing mode to show/hide the Hub control

With Blend for Visual Studio 2013 Preview, go to File > New Project > Hub App (XAML), name it HubApp and click OK. This gives an empty template app which follows the Hub design pattern.


More info on creating Hub Pages with Visual Studio 13 preview

Creating Visual States

See that huge Hub placeholder? I'll hide it when the app is not running full screen. To do that, I created a VisualStateGroup and two visual states, Left and Fullscreen, then selected the Left visual state, selected the Hub control and set its Visibility to Collapsed.

I right-clicked on the Left visual state and selected Copy to State > New State twice, then renamed the new visual states to Portrait and Right. Now I have four visual states.


Using code-behind to detect window resizes and switch visual states

Open HubPage.xaml.cs in Visual Studio 2013 preview and add two namespaces

using Windows.UI.Core;
using Windows.UI.ViewManagement;

Subscribe and unsubscribe from the SizeChanged event with an event handler

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    Window.Current.SizeChanged += WindowSizeChanged;
}

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    Window.Current.SizeChanged -= WindowSizeChanged;
}


Create the WindowSizeChanged event handler to do the work:

private void WindowSizeChanged(object sender, WindowSizeChangedEventArgs e)
{
    ApplicationView currentAppView = ApplicationView.GetForCurrentView();

    if (currentAppView.AdjacentToLeftDisplayEdge)
        VisualStateManager.GoToState(this, "Left", false);
    if (currentAppView.AdjacentToRightDisplayEdge)
        VisualStateManager.GoToState(this, "Right", false);
    if (currentAppView.IsFullScreen)
        VisualStateManager.GoToState(this, "Fullscreen", false);

    if (currentAppView.Orientation == ApplicationViewOrientation.Portrait)
        VisualStateManager.GoToState(this, "Portrait", false);
}

To know the current size I use the ApplicationView class, which lives in the Windows.UI.ViewManagement namespace, in the WindowSizeChanged event handler. Using the Orientation property and the new AdjacentToLeftDisplayEdge and AdjacentToRightDisplayEdge properties of this class, I check whether the user has snapped the app to the left or right edge or put it between two apps in portrait mode. I then switch to the visual states I created earlier, which hide the Hub control.

The Result

Here's the Hub app on the left and right edges of the screen; portrait mode is shown above.

image image