About this Book
This book was created to provide a relatively easy way to build functional software prototypes for a large range of interactive applications, including non-standard multi-touch interfaces with multiple simultaneous multi-touch gestures. The rationale is that easier prototyping permits more prototyping, and more prototyping is likely to result in better prototypes. On the other hand, prototypes that use non-standard interfaces and other exotic technical features often require some form of programming; thus, this wikibook has a strong focus on how to program certain common features of interactive applications.
This approach has several advantages:
- It supports many popular platforms (desktop web browsers, browsers on mobile devices, iBooks widgets, etc.).
- It supports mouse and multi-touch interaction, animated graphics, and sound.
- The limitation to a single canvas 2D context allows us to completely avoid CSS syntax, most HTML syntax, and many dependencies on browser extensions.
- The 2D graphics programming can be simplified by focusing on rendering bitmap images (instead of vector graphics).
- The approach allows us to use a single entry point (for each page) for all rendering and event processing.
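The last point can be made concrete with a minimal sketch. The function and variable names here (`processEventsAndPaint`, `state`) are illustrative, not from the book: a single function is called both to repaint the canvas and to process an input event, and all GUI state lives in plain variables.

```javascript
// Minimal sketch of the single-entry-point idea (names are illustrative):
// one function handles both event processing and rendering.
var state = { count: 0 };  // all GUI state lives in plain state variables

function processEventsAndPaint(context, event) {
  if (event) {  // event phase: update the state variables
    if (event.type === "mousedown" || event.type === "touchstart") {
      state.count += 1;
    }
  }
  // paint phase: always redraw from scratch based on the current state
  context.clearRect(0, 0, 300, 150);
  context.fillText("clicks: " + state.count, 10, 20);
}

// In a browser this would be wired up roughly like:
//   var context = document.getElementById("canvas").getContext("2d");
//   canvas.onmousedown = function (e) { processEventsAndPaint(context, e); };
//   window.onload = function () { processEventsAndPaint(context, null); };
```

Because every call redraws the whole canvas from the state variables, there is no separate "update the screen" logic to keep in sync with the event handlers.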
Who Should Read this Book?
How to Read the Book?
Which Programming Paradigm is Employed?
- The canvas 2D context specification includes an example (at the end of Chapter 12) with a function `drawCheckbox(context, element, x, y, paint)`, where `paint` is a boolean flag determining whether the function actually draws the checkbox. If the flag is `false`, the function still sets the current path of the checkbox, which is used in the event handler to determine whether the checkbox has been clicked. This approach requires identical calls to `drawCheckbox` in the redraw functions and the event handlers, while it is preferable to use only a single function for redrawing and event processing.
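The paint-flag pattern can be sketched as follows. This is a heavily simplified stand-in for the specification's example, not a reproduction of it: the checkbox geometry is made up, and `checkboxHit` is an invented helper showing how an event handler would reuse the same call with `paint === false` and then hit-test via `isPointInPath`.

```javascript
// Simplified sketch of the specification's paint-flag pattern.
// With paint === false, the function only rebuilds the current path,
// so the caller can hit-test it without drawing anything.
function drawCheckbox(context, checked, x, y, paint) {
  context.beginPath();
  context.rect(x, y, 12, 12);  // the clickable square
  if (paint) {
    context.stroke();
    if (checked) {
      context.fillRect(x + 3, y + 3, 6, 6);  // the check mark
    }
  }
}

// Event handler side: the identical call, but with paint === false,
// followed by a hit test against the path just set.
function checkboxHit(context, x, y, eventX, eventY) {
  drawCheckbox(context, false, x, y, false);
  return context.isPointInPath(eventX, eventY);
}
```

The duplication criticized in the text is visible here: `drawCheckbox` must be called with the same geometry in both the redraw code and the event handler, and the two call sites can silently drift apart.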
- The 2D GUI system of the game engine Unity also employs the same function (`OnGUI`) for rendering and event processing. However, in Unity this function is called every frame (usually at 60 or 30 frames per second), while it is often beneficial to call the function only when needed. Furthermore, Unity appears not to separate rendering from event processing, which makes it more difficult to process multiple events per frame (e.g., for multi-touch devices).
Since the rendering is only based on the GUI's state variables (including the current time), the event processing can be considered an implementation of the transition of the GUI's state (by changing the state variables). Thus, the function for rendering and event processing corresponds to the step of an automaton in automata-based programming. Furthermore, this function may call subroutines for rendering and event processing of contained GUI elements (e.g., widgets), which may again call further subroutines, etc. In that case, the hierarchy of contained GUI elements is reflected by the call hierarchy of the program. This allows us to define and reuse standard widgets, which would be difficult in automata-based programming with a single switch statement to distinguish between all states of the GUI.
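The correspondence between the widget hierarchy and the call hierarchy can be sketched as follows. All names (`panelStep`, `buttonStep`, the state layout) are illustrative assumptions, and rendering is reduced to appending strings to an output list so the structure stays visible.

```javascript
// Sketch of the automaton-step idea: one function per widget performs
// both the state transition (if an event is given) and the rendering.
function buttonStep(state, event, output) {
  if (event && event.type === "click" && event.target === state.id) {
    state.pressCount += 1;  // state transition of this widget
  }
  // "rendering": reduced to emitting a line of text
  output.push("button " + state.id + ": " + state.pressCount);
}

// The parent's step function calls the children's step functions,
// so the widget hierarchy becomes the call hierarchy of the program.
function panelStep(state, event, output) {
  for (var i = 0; i < state.buttons.length; i++) {
    buttonStep(state.buttons[i], event, output);
  }
  output.push("panel with " + state.buttons.length + " buttons");
}
```

Calling `panelStep(state, null, output)` corresponds to a pure redraw, while passing an event performs one step of the automaton: the state transition followed by a redraw from the new state.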
Since the GUI is always rendered from scratch based on the GUI's state variables, there are also similarities to reactive programming: the render function specifies how the GUI is constructed from its state variables, which is exactly what a reactive GUI program would specify.