Tuesday, April 6, 2010

Why won't WPF controls work with touch manipulations?

I recently tweeted asking for some scenarios using WPF Touch, and I got an email from Josh Santangelo (@endquote) with a few interesting touch challenges. I created some sample code and sent him back solutions to his challenges, and I figured they would be useful for the community as well.

In this blog, I'll answer the question: "Why won't WPF controls work with manipulations?" (This is my own phrasing, not Josh Santangelo's.)

Problem: 
You have a container with some manipulations (perhaps like a ScatterView) and this container has some standard WPF controls like buttons or checkboxes. You can use touch to manipulate the container, and you can use the mouse to click the controls, but touch mysteriously doesn't affect the controls. (The mouse also can't manipulate the container, but that is another post.)
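
To make the problem concrete, here is a minimal sketch of the kind of container I mean, built in code-behind rather than XAML just to keep it short. Everything in it (the method name, the brush, the transform) is my own illustration and not code from Josh's project; a real app would also handle ManipulationStarting to pick a fixed ManipulationContainer.

   // Assumes using System.Windows, System.Windows.Controls,
   // System.Windows.Input, and System.Windows.Media.
   private Border BuildManipulationContainer()
   {
       // A standard WPF button inside the manipulation container.
       Button button = new Button { Content = "Tap me", Margin = new Thickness(10) };

       Border container = new Border
       {
           Background = Brushes.LightSteelBlue,
           Child = button,
           IsManipulationEnabled = true,   // turns on the manipulation processor
           RenderTransform = new TranslateTransform()
       };

       // Pan the container as manipulation deltas arrive.
       container.ManipulationDelta += (sender, e) =>
       {
           TranslateTransform transform = (TranslateTransform)container.RenderTransform;
           transform.X += e.DeltaManipulation.Translation.X;
           transform.Y += e.DeltaManipulation.Translation.Y;
       };

       return container;
   }

With touch you can pan the border around, but tapping the button does nothing; with the mouse it is exactly the other way around.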

Reason:
When you touch the screen, WPF generates touch events such as TouchDown, TouchMove, and TouchUp. These are all routed events, which means that first the PreviewTouchDown event is fired on the root of the visual tree, then on the next element down the hierarchy, then the next, all the way down to the source element that the touch occurred over, as long as the event is not handled along the way.

Once it reaches the source element, the TouchDown event is fired starting from the source element and proceeding up the visual tree to the root. If at any point one of the visual elements sets e.Handled = true, the event propagation stops.

On the other hand, if the event propagation reaches all the way up to the root unhandled, then the touch event is promoted to a mouse event. At this point, PreviewMouseDown and MouseDown are fired down and up the visual tree.
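
If you want to watch this routing happen, a quick diagnostic sketch is to attach handlers in code and log the order in which they fire. This is only an illustration: rootGrid and myButton are hypothetical element names from your own XAML, and passing true for handledEventsToo means you see the events even after someone marks them handled.

   // Assumes using System.Diagnostics for Debug.WriteLine.
   public MainWindow()
   {
       InitializeComponent();

       // Tunneling: the root sees PreviewTouchDown before the button does.
       rootGrid.AddHandler(UIElement.PreviewTouchDownEvent,
           new EventHandler<TouchEventArgs>((s, e) => Debug.WriteLine("PreviewTouchDown: rootGrid")), true);
       myButton.AddHandler(UIElement.PreviewTouchDownEvent,
           new EventHandler<TouchEventArgs>((s, e) => Debug.WriteLine("PreviewTouchDown: myButton")), true);

       // Bubbling: the button sees TouchDown first, then the root.
       myButton.AddHandler(UIElement.TouchDownEvent,
           new EventHandler<TouchEventArgs>((s, e) => Debug.WriteLine("TouchDown: myButton")), true);
       rootGrid.AddHandler(UIElement.TouchDownEvent,
           new EventHandler<TouchEventArgs>((s, e) => Debug.WriteLine("TouchDown: rootGrid")), true);

       // If the touch events go unhandled, WPF promotes them to mouse events.
       rootGrid.AddHandler(UIElement.MouseDownEvent,
           new MouseButtonEventHandler((s, e) => Debug.WriteLine("Promoted MouseDown: rootGrid")), true);
   }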

In our case, when you use touch and there is no manipulation, the touch events go unhandled, so the mouse events are fired; the button handles those and calls your Click event handler. WPF controls such as Button only listen for mouse events (the exception is ScrollViewer when PanningMode is set). This process is illustrated in figure 1.

Figure 1. (Click for larger size.) The touch event flow with no manipulations. The touch events are unhandled, so WPF promotes the event to the mouse equivalent. The button is listening for mouse events and handles that, then calls your Click event handler.

When you do have manipulations enabled in the visual tree, something different happens, as illustrated in figure 2.

Figure 2. (Click for larger size.) When the border has IsManipulationEnabled = true, the manipulation processor handles the TouchDown event and captures the touch device. All future events go directly to the Border, and the manipulation processor handles the rest.

In the case where the Border has a manipulation, the touch events are never promoted to mouse events, so the button doesn't even have any idea what is going on. Button didn't get the memo.

Solution:
Of course, it doesn't have to be this way. Sometime soon, the Surface Toolkit for Windows Touch will be released and you'll be able to use the SurfaceButton and the other controls which are designed for touch and handle the touch events. Figure 3 shows a list of the WPF controls the Surface toolkit optimizes for touch.

Figure 3. (Click for larger size.) Surface Toolkit for Windows Touch offers touch-optimized controls that can replace most of the common WPF controls. This is only a partial list of what the Surface Toolkit offers.


But that doesn't help you now. Suppose you have this XAML inside of a container with manipulations:

   <TextBox Name="txtCounter"
            Text="0"
            Margin="10"
            HorizontalAlignment="Center" />
   <Button Content="Native Button Won't work"
           Margin="10"
           Height="40"
           Click="button_Click" />

and this code in the code behind:

   private void button_Click(object sender, RoutedEventArgs e)
   {
       IncrementCounter();
   }

   private void IncrementCounter()
   {
       int number = int.Parse(txtCounter.Text) + 1;
       txtCounter.Text = number.ToString();
   }

This button will not work and the event flow will look like figure 2. Instead, update the button XAML to this:

   <Button Content="Will work with TouchDown/Up"
           Margin="10"
           Height="40"
           Click="button_Click"
           TouchDown="button_TouchDown"
           TouchUp="button_TouchUp" />

and add these methods in the code behind:


   1:  private void button_TouchDown(object sender, TouchEventArgs e)
   2:  {
   3:      FrameworkElement button = sender as FrameworkElement;
   4:      if (button == null)
   5:          return;
   6:              
   7:      button.CaptureTouch(e.TouchDevice);
   8:   
   9:      e.Handled = true;
  10:  }
  11:   
  12:  private void button_TouchUp(object sender, TouchEventArgs e)
  13:  {
  14:      FrameworkElement button = sender as FrameworkElement;
  15:      if (button == null)
  16:          return;
  17:   
  18:      TouchPoint tp = e.GetTouchPoint(button);
  19:      Rect bounds = new Rect(new Point(0, 0), button.RenderSize);
  20:      if (bounds.Contains(tp.Position))
  21:      {
  22:          IncrementCounter();
  23:      }
  24:              
  25:      button.ReleaseTouchCapture(e.TouchDevice);
  26:   
  27:      e.Handled = true;
  28:  }

Now your button will work inside of the container with both mouse and touch. A little explanation:
  • Line 7: We capture the touch so that TouchUp and other touch events will be sent to this button, even if they occur somewhere else on the screen.
  • Line 9: We must set this event to handled, otherwise the TouchDown event will continue to bubble up and the border's manipulation processor will steal the capture, depriving the button of a TouchUp event.
  • Lines 18-20: We check to make sure the TouchPoint is still within the bounds of the button. The user could have touched the button, but changed his or her mind and moved outside of the button to release the touch. This is consistent with the mouse behavior of buttons.
  • Line 22: We just call the same function that the button_Click() event handler called to get the same effect.
  • Lines 25-27: We release the capture and mark this event as handled as well. Technically we might be able to get away without these lines in this scenario, but we should do it anyway; in a more complicated scenario, skipping them could cause unintended side effects.

You can apply this technique to the other non-touch-aware WPF controls if necessary, although it may get a little tedious. I almost wrote a Blend behavior (part of the Blend 3 SDK and usable in more than just Blend) for this, but figured that the Surface Toolkit would be out soon enough anyway.* (A rough sketch of that idea follows the footnote.)

(* No I don't have a date for the release of Surface Toolkit. Sorry!)
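
For what it's worth, here is a rough sketch of the direction such a reusable helper could take, written as a plain attached property rather than a Blend behavior. All of the names (the TouchClick class, the IsEnabled property) are hypothetical and not part of any toolkit; it just packages up the same capture, hit-test, and release steps from above and, for button-like controls, re-raises Click so your existing Click handler runs.

   // Assumes using System.Windows, System.Windows.Controls.Primitives,
   // and System.Windows.Input.
   public static class TouchClick
   {
       public static readonly DependencyProperty IsEnabledProperty =
           DependencyProperty.RegisterAttached("IsEnabled", typeof(bool), typeof(TouchClick),
               new PropertyMetadata(false, OnIsEnabledChanged));

       public static void SetIsEnabled(UIElement element, bool value)
       {
           element.SetValue(IsEnabledProperty, value);
       }

       public static bool GetIsEnabled(UIElement element)
       {
           return (bool)element.GetValue(IsEnabledProperty);
       }

       private static void OnIsEnabledChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
       {
           UIElement element = d as UIElement;
           if (element == null)
               return;

           if ((bool)e.NewValue)
           {
               element.TouchDown += OnTouchDown;
               element.TouchUp += OnTouchUp;
           }
           else
           {
               element.TouchDown -= OnTouchDown;
               element.TouchUp -= OnTouchUp;
           }
       }

       private static void OnTouchDown(object sender, TouchEventArgs e)
       {
           // Same idea as button_TouchDown above: capture the touch and stop
           // the bubble so the manipulation processor never sees it.
           UIElement element = (UIElement)sender;
           element.CaptureTouch(e.TouchDevice);
           e.Handled = true;
       }

       private static void OnTouchUp(object sender, TouchEventArgs e)
       {
           UIElement element = (UIElement)sender;

           Rect bounds = new Rect(new Point(0, 0), element.RenderSize);
           if (bounds.Contains(e.GetTouchPoint(element).Position))
           {
               // If this is a button-like control, re-raise Click so the
               // existing Click handler runs just as it would for the mouse.
               ButtonBase button = element as ButtonBase;
               if (button != null)
                   button.RaiseEvent(new RoutedEventArgs(ButtonBase.ClickEvent));
           }

           element.ReleaseTouchCapture(e.TouchDevice);
           e.Handled = true;
       }
   }

You could then set local:TouchClick.IsEnabled="True" on a button in XAML, or call TouchClick.SetIsEnabled(myButton, true) in code, instead of wiring up the two touch handlers by hand on every control.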

You can download the source code for this project below:

Thanks to Ryan Lee for Gesturecons, where I got the hand icon used in the figures.

NUIs reuse existing skills (updated NUI definition)

Last month I posted an excerpt from my book with a section that contained a definition of natural user interface.
A natural user interface is a user interface designed to use natural human behaviors for interacting directly with content.
It sparked some great conversation both in the comments of that post and some other blogs. Richard Monson-Haefel suggested on his blog that I consider changing the "use natural human behaviors" part to match how I talk about natural in terms of innate abilities and learned skills. Those discussions were derived from some of Bill Buxton's thoughts on NUI.

At the time, I was on the fence about this change. I wrote in a comment that I liked using the simpler vocabulary, since it would be understandable to more non-technical folks, and that I liked the term behavior for rhetorical reasons: at the beginning of chapter two I had an interesting set of phrases that needed to say "behavior" to make sense. (I know that isn't a very good reason, and I have since deleted the pages with that particular discussion.) So I was leaning towards my initial definition, but I was keeping my mind open for more convincing arguments.

Well as it turned out, I was convinced by Bill Buxton at MIX10. He wasn't talking directly to me about this, but in his keynote he described how a saxophone input device is a natural interface for someone who has years of saxophone training, and he proceeded to use his electronic saxophone (a Yamaha WX7 wind controller) to create flute and electric guitar sounds. Below is a screenshot, but if you want to watch his part of the keynote, skip to the 96:45 mark of the MIX10 Day 2 Keynote video.
Bill Buxton uses a Yamaha WX7 wind controller and his existing saxophone skills to create music that sounds like a variety of non-saxophone instruments.

When I heard him describe this, I immediately understood that my existing definition ("use natural human behaviors") excluded advanced NUI designs for people who were already experts with advanced skills. I had already prepared my NUI presentation for day 3 using the original definition, so directly after the keynote I went and changed it. In my session video you can see I used a revised definition of NUI, and in the next revision of chapter 1 (now updated in the MEAP) I also updated the definition.

Here is the updated book excerpt:

There are several different ways to define the natural user interface. The easiest way to understand the natural user interface is to compare it to other types of interfaces such as the graphical user interface (GUI) and the command line interface (CLI). In order to do that, let's reveal the definition of NUI that I like to use.

A natural user interface is a user interface designed to reuse existing skills for interacting directly with content.
There are three important things that this definition tells us about natural user interfaces.

NUIs are designed

First, this definition tells us that natural user interfaces are designed, which means they require forethought and specific planning efforts in advance. Special care is required to make sure NUI interactions are appropriate for the user, the content, and the context. Nothing about NUIs should be thrown together or assembled haphazardly. We should acknowledge the role that designers have to play in creating NUI style interactions and make sure that the design process is given just as much priority as development.

NUIs reuse existing skills

Second, the phrase "reuse existing skills" helps us focus on how to create interfaces that are natural. Your users are experts in many skills that they have gained just because they are human. They have spent years practicing skills for human-to-human communication, both verbal and non-verbal, and for human-environment interaction. Computing power and input technology have progressed to a point where we can take advantage of these existing non-computing skills. NUIs do this by letting users interact with computers using intuitive actions such as touching, gesturing, and talking, and by presenting interfaces that users can understand primarily through metaphors that draw from real-world experiences.

This is in contrast to the GUI, which uses artificial interface elements such as windows, menus, and icons for output and a pointing device such as a mouse for input, or the CLI, which is described as having text output and text input using a keyboard.

At first glance, the primary difference between these definitions is the input modality -- keyboard versus mouse versus touch. There is another subtle yet important difference: CLI and GUI are defined explicitly in terms of the input device, while NUI is defined in terms of the interaction style. Any type of interface technology can be used with NUI as long as the style of interaction focuses on reusing existing skills.

NUIs have direct interaction with content

Finally, think again about GUI, which by definition uses windows, menus, and icons as the primary interface elements. In contrast, the phrase "interacting directly with content" tells us that the focus of the interactions is on the content and directly interacting with it. This doesn't mean that the interface cannot have controls such as buttons or checkboxes when necessary. It only means that the controls should be secondary to the content, and direct manipulation of the content should be the primary interaction method.

I decided to say just "reuse existing skills" and not include the innate abilities part, both to keep it simpler and because the thing that convinced me to change it was that some NUIs may use advanced skills, so innate abilities may not always play into NUI. The core thing that did make it natural was reusing existing skills.

I do need to thank the commenters on my last post, Ben and Laurent, and particularly Richard, who has also put himself out there trying to define NUI and still had great comments about my thoughts. Richard pointed me in the right direction, and with some time and thought I realized that he was correct. It's you, my readers, who will help me make this book totally awesome.

Please keep the feedback coming and feel free to call me out if you think I could improve something.