Models and Theories in Human-Computer Interaction/Framework: Computer as Human, Human as Computer: Perception-Input, Thinking-Processing, Action-Output

Human-Computer Model - Useful, but Limited

In cognitive models, there is a strong mapping between human and computer processing. Computer input maps to human perception; processing and memory in computers align with human contemplation; and the machine's output corresponds to human actions and behaviors. It is a useful model for simplifying the complexities of humans by comparing our behaviors to those of machines.
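As a rough illustration of this mapping (my own sketch, not from the source; the function names are hypothetical), the perception-processing-action analogy can be written as a minimal input-process-output loop:

```python
def perceive(stimulus):
    # "input": capture a stimulus, analogous to human perception
    return {"raw": stimulus}

def process(percept, memory):
    # "processing": combine the percept with stored memory,
    # analogous to human contemplation
    memory.append(percept["raw"])
    return f"decided on: {percept['raw']}"

def act(decision):
    # "output": produce behavior, analogous to human action
    return f"action -> {decision}"

memory = []
result = act(process(perceive("button press"), memory))
```

The point of the sketch is only the shape of the pipeline; as the following sections argue, the middle stage is where the analogy breaks down.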

The input and output terminals in the human-computer model map well, but the processing chains, the steps between input and output, are vastly different.

It is hard to replicate human processing with hardware

Information processing in humans is vastly more complicated than in any computer system, as we have recently discovered. Researchers were able to simulate a single second of human brain activity using 82,944 processors running for 40 minutes and consuming one petabyte of system memory. It is also worth noting that this process only created an artificial neural network of 1.73 billion nerve cells, compared to our brain's network of 80-100 billion.[1]

Emotions affect our cognitive processing

Human emotions affect every aspect of our cognitive processing. For example, emotional recall, the basis of method acting, is used to summon a particular feeling from a person's physical surroundings[2], and memory for odors is associated with highly emotional experiences.[3] The heavy influence of emotions on human behavior is one reason there is immense value in qualitative research. The complete human-computer experience cannot be strictly defined by response times; the level of enjoyment and perceived effectiveness must also be measured in order to create better user experiences.

Research on progress bars has determined that backwards-moving animations seem faster to users than forward-moving ones.[4] As this research illustrates, humans do not perceive time the way a computer measures it. In the study, users perceived the loading time to be 11% longer when the progress bar's animation moved forward, despite the loading times being identical in each trial. Users experience events through perceptions analyzed in the mind; it is not a binary experience as in a computing machine.
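The gap between measured and perceived time can be sketched as a toy model (my illustration only; the 1.11 factor comes from the ~11% figure cited above, and the function shape is an assumption, not the study's model):

```python
def perceived_duration(actual_seconds, animation):
    # hedged sketch: the cited study reported ~11% longer perceived
    # duration for forward-moving progress-bar animations than for
    # backward-moving ones, at identical actual loading times
    factor = 1.11 if animation == "forward" else 1.0
    return actual_seconds * factor

perceived_duration(10, "forward")   # ~11.1 "subjective" seconds
perceived_duration(10, "backward")  # 10.0
```

The same wall-clock duration yields different subjective durations, which is exactly the non-binary experience the paragraph describes.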

Humans have bodies

Like emotions, the effect our physical self has on information processing is often overlooked. We use our physical surroundings as cues for memory and accelerated functioning. Unlike computers, we do not store every pixel of visual information about our surroundings. Instead, we rely on our other senses to construct visual representations of our surroundings.[5] Intuitively, we have a host of sensory organs and inputs that cannot be accounted for in a machine. Touch, smell, and sound, as well as visuals, all have a place in the memory-recollection process.

Role of Context in Perception

In his book, Carroll describes three stages of the visual system:

  • Stage 1: color and shape analysis
  • Stage 2: segmentation of regions and patterns
  • Stage 3: object detection

I think these stages describe bottom-up processing of information, in which we recognize objects by their appearance: colors, shapes, patterns. However, we know that there is also top-down processing of information, in which we recognize objects by the context in which they are presented. This context is conspicuously missing from Carroll's description of visual perception.

My favorite example of recognition by context is a harvest moon.

Here is what happens if we follow Carroll's stages while looking at the moon:

  • Stage 1: We sense yellowish/orange colors
  • Stage 2: We detect a round shape in the sky
  • Stage 3: Combining information from the previous stages, we recognize a yellow/orange, round-shaped object like... a fruit orange? The sun? A beach ball?

When we include context in the picture, we see that the yellow/orange object is in the sky, so it is not a fruit. The sky is dark, so it is not the sun. Finally, we perceive this object as the moon.
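The moon example can be sketched as a toy classifier (my own illustration, not Carroll's; the candidate names and context rules are invented for the example):

```python
def recognize(color, shape, context):
    # bottom-up features alone leave the object ambiguous
    candidates = []
    if color in ("yellow", "orange") and shape == "round":
        candidates = ["orange (fruit)", "sun", "beach ball", "moon"]
    # top-down context progressively narrows the candidates
    if context.get("location") == "sky":
        # it's in the sky, so not a fruit or a beach ball
        candidates = [c for c in candidates if c in ("sun", "moon")]
    if context.get("sky") == "dark":
        # the sky is dark, so not the sun
        candidates = [c for c in candidates if c != "sun"]
    return candidates

recognize("orange", "round", {"location": "sky", "sky": "dark"})  # ['moon']
```

Bottom-up features produce four candidates; each piece of context eliminates some of them until only the moon remains, which is exactly the walk-through above.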

I agree that we do go through Carroll's stages of the visual system. However, what Carroll describes sounds to me more like sensation, which uses the senses to capture the presented picture, rather than perception of an object, and especially not object recognition. Cognitive processing and the context of the object are essential factors in human perception. With that said, to make computers able to perceive (analyse and understand input information), we should provide them with context and a rich base of knowledge.

Perception and human biological factors

The book HCI Models, Theories, and Frameworks (Carroll, 2003) discusses many design guidelines based on perception, which is assumed to be relatively permanent and largely consistent across cultures. Perception is often influenced greatly by external design. Display-design problems such as low contrast, clutter, a disorganized layout, unnecessary detail, ineffective color combinations, absence of saliency, and lack of differentiation between foreground and background can all impact human perception. However, the book fails to include some biological factors that influence perception, such as reduced cognitive abilities in old age and gender variability. Old age is often considered a major impediment, as it diminishes sensory receptivity and cognitive capacity. Studies confirm that emotional perception differs between older and younger people.

Perception in context of age and gender

Research from the National Institutes of Health suggests that "older adults perceived pictures differently than younger adults. Older adult[s] rated positive pictures as more arousing than negative or neutral pictures, and more arousing than younger adults" (Neiss, Leigland, Carlson, & Janowsky, 2009). The same research also found a gender difference among women, as "they rated positive pictures more positively and negative pictures more negatively." Several studies also offer support for the impact of gender differences on human perception. A study by Dae-Young Kim, Xinran Y. Lehto, and Alastair M. Morrison suggested that males and females differ in their attitudes toward web travel information sources and in their information-search behavior across online and offline sources. It also explored the underlying cognitive dimensions of website information attitudes and preferences, and assessed gender differences within the context of these dimensions. The authors found significant differences between genders in perceptions of website functionality and in online information-search behavior. This research suggests that even though most websites may be designed gender-neutrally, women fare better using them than men, both in terms of functionality and scope of content; to succeed in the current competitive e-environment, it may be prudent to create gender-specific websites. Another study, by Diane F. Halpern and Mary L. LaMay, explains that males and females do not differ in overall intellect; however, their cognitive abilities differ. Males have an advantage in spatial perception, mental rotation, spatial visualization, mathematical reasoning, and the generation and maintenance of spatial images. Females, on the other hand, are better at retrieving information from long-term memory and using verbal information. This suggests there is no segregation of the genders by smartness; they simply differ in their areas of cognitive strength.
Therefore, the study of human perception should also include biological factors such as age and gender.

References:

1. Carroll, J. M. (2003). HCI Models, Theories, and Frameworks. San Francisco: Morgan Kaufmann Publishers.
2. Neiss, M. B., Leigland, L. A., Carlson, N. E., & Janowsky, J. S. (2009). Age differences in perception and awareness of emotion. Neurobiology of Aging, 30(8), 1305–1313. doi:10.1016/j.neurobiolaging.2007.11.007
3. Kim, D.-Y., Lehto, X. Y., & Morrison, A. M. (2007). Gender differences in online travel information search: Implications for marketing communications on the Internet. Tourism Management, 28, 423–433.
4. Halpern, D. F., & LaMay, M. L. (2000). The smarter sex: A critical review of sex differences in intelligence. Educational Psychology Review, 12(2).

The Perception gap – humans and computers

In his book HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science [6], Carroll alludes to the approach of information psychophysics. According to Carroll, this approach allows humans to perceive elementary information patterns such as images, graphs, and cluttered items (Carroll, 2003), and the next step in HCI is to engineer this approach on a computer display. The big question is: how far can computer algorithms go to unlock the information-psychophysics approach on a computer display screen? It is an undeniable fact that computers have a remarkable ability to work through complex computations, some beyond human capabilities, and they assist in some of our most complex decision-making processes. But computer circuitry is a long way from reaching the equivalency of a human brain with regard to cognitive processing. The ability to provide independent analysis based on emotions is still beyond the dominion of computers. For example, sometimes we look at stock-chart numbers and make decisions based on information psychophysics, and sometimes via emotional factors. Replicating or engineering this human approach in computer circuitry may be difficult.

The Road ahead

Nonetheless, computers have come a long way, and progress has been made to bridge the gap between computer circuits and the human brain. The factor to overcome is the ability for computers to master information psychophysics, regardless of the sensory input; the ability to be spontaneous is critical. As of today, computers and human brains are on two parallel paths, each complementing the other. In spite of the limitations, computers are inching toward mapping the inner workings of the human brain on a cognitive level. Researchers at the University of California (Lister, 2015)[7] have created a computer circuit that closely maps to the circuitry of the brain. They successfully created the computing equivalent of a network of 100 neurons, which compares to around 100 billion neurons in the human brain [8]. The results showed that the computer recognized pictures much as humans do.

Nevertheless, a day will come when computers can drive a car without human intervention or cognitive input. Well, it is already here!

Perception in Stages

When we are first born, we have not built up the long-term memory needed to recognize objects and shapes. Everything we see is literally a new experience for the growing mind. We have no context for what the color green is, or what shape a ball is. All we see are features that will eventually turn into patterns, and then into object recognition inside the brain, as stated in Carroll's theory of perception. We can experience a situation similar to what newborns experience by looking at Magic Eye puzzles. At first glance all we see are simple features like color and contrast. Upon closer inspection we start to see patterns emerge from the puzzle, and if we focus enough of our attention, we can finally see objects appear.

With this understanding of how the human mind uses inputs from the visual system to interpret and recognize the world around us, we can develop human interfaces to machines much more effectively. We can combine this knowledge with what we know about the human eye. The human eye has cones, which see color, and rods, which see black and white. When designing displays, the primary colors red, green, blue, and yellow should be used to stimulate the cones in our eyes effectively. If we use off colors that take more brain processing power to interpret, we may not find the display easy to read or use. Similarly, we can reason that we should not use confusing patterns, like the Magic Eye puzzles, in displays where they have the potential to confuse or block information. These rules are especially important for information displays monitoring a factory floor. If a machine breaks down and sends an alarm signal to a display, we as humans should be able to identify the fault and stop the assembly line without much critical thinking. A red, flashing light is common for alarms or emergencies because it can be seen easily in environments with varying luminosities. The feature of red color, and the simple pattern of flashing the light on and off, are both easily interpreted as a warning or alarm by the working memory of the brain.

Perception – Pattern Perception in Agriculture

Carroll describes three stages of the visual system: stage 1 concerns color and shape analysis; stage 2 is a middle level of perception and involves recognition of patterns; stage 3 is a more sophisticated object-detection stage. Stage 2, recognition of patterns and the actions that result from it, is significant to many segments of society. In particular, in the agricultural community there is a drive toward pattern perception via field images captured by a quad-copter. Today, computer decision-making can be based on the recognition of patterns captured by current quad-copter technology. The way the agricultural community maintains our croplands could change drastically as these visual systems continue to evolve.

The action today is a notification to a human that specific patterns exist; the human must then take more specific steps. But as this technology evolves, actions can be automated based on recognized patterns such as common regions and proximity. Once a problem, for example a lack of moisture or an insect/disease infestation, is detected through patterns captured by a quad-copter, notification can be sent to a human and action can be taken. The amount of acreage a quad-copter can cover, and record patterns of, is much greater than humans can visually inspect on a regular basis. The computer can then recognize patterns and communicate this information in time for a farmer to save a crop from stress or insects. This is a very practical example of the application of a visual perception system.
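A minimal sketch of the region-detection step described above (my illustration; the grid, moisture values, and threshold are invented, and real systems would work on imagery rather than a tiny grid):

```python
def stressed_regions(field, threshold):
    # flag grid cells whose moisture reading falls below the threshold,
    # then group adjacent flagged cells into regions, a crude stand-in
    # for the "common region" and "proximity" pattern cues named above
    rows, cols = len(field), len(field[0])
    flagged = {(r, c) for r in range(rows) for c in range(cols)
               if field[r][c] < threshold}
    regions, seen = [], set()
    for cell in flagged:
        if cell in seen:
            continue
        region, stack = [], [cell]
        while stack:  # flood fill over 4-connected neighbors
            r, c = stack.pop()
            if (r, c) in seen or (r, c) not in flagged:
                continue
            seen.add((r, c))
            region.append((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        regions.append(sorted(region))
    return regions

field = [[0.8, 0.7, 0.2],
         [0.9, 0.3, 0.2],
         [0.8, 0.7, 0.9]]
stressed_regions(field, 0.5)  # one contiguous dry patch, upper right
```

Once a contiguous stressed region is found, the system can notify the farmer with its location, which is the automated action the paragraph anticipates.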

Perception is in the eye of the beholder

The HCI Models, Theories, and Frameworks book introduces information psychophysics, which suggests patterns in computer displays are perceived using the same mechanisms that allow humans to recognize the world. Different levels of visual processing are used as a basis for design principles supporting theories of pattern perception (Carroll, 2003). I agree with the science of perception outlined in chapter two and believe HCI professionals should understand and apply visual guidelines to create designs that match users' expectations and perceptual capabilities.

A user's perception of how a system works is informed by color usage; Gestalt laws (e.g., proximity, continuity, similarity); preattentive processing theory, which distinguishes an element of a display from other components through color, shape, motion, or depth; and affordances that imply a possible action (Carroll, 2003). Practitioners possessing a basic understanding of these theories and principles of perception should consider these guidelines when designing a user interface.

In addition to applying perception-based standards, HCI professionals may build upon this foundation by establishing familiar and consistent displays through the use of design patterns. According to Smashing Magazine, “A design pattern refers to a reusable and applicable solution to general real-world problems.” (Gube, 2009) Design patterns often include the look, behavior, and desired usage related to different components within a user interface such as navigation, breadcrumbs, list builders, and form-related elements. See the Welie pattern library for examples. Over time users recognize interface patterns. Familiarity with patterns contributes to the idea that, “Past experiences turn into current expectations”. (Bedford, 2015)

Incorporating visual and design patterns in a user interface promotes consistency; I believe it is essential to providing a predictable user experience that reduces cognitive load and facilitates a user's interpretation of how a system works.

Works Cited

Bedford, A. (2015, May 10). Don't Prioritize Efficiency Over Expectations. Retrieved from Nielsen Norman Group:

Carroll, J. M. (2003). HCI Models, Theories, and Frameworks. San Francisco: Morgan Kaufmann Publishers.

Gube, J. (2009, June 15). 40+ Helpful Resources on User Interface Design Patterns. Retrieved from Smashing Magazine:

Human Brain vs Computer Brain

The human brain operates on a whole different level than a computer; a computer processor cannot mimic a human brain. It took over 80,000 processors and over a petabyte of system memory to replicate one second of human brain activity, and even then it took 40 minutes of processing to match it. But does that make the human brain better? If the human brain is capable of processing information at rates so much higher than a computer, why then is a computer capable of doing something like mathematical calculations faster and more accurately than the human brain? Is it that, while the human brain is processing more information, it is not using that processing power as efficiently? If so, does that make the human brain better?
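Putting the figures quoted above into numbers (the 86-billion-neuron estimate is a commonly cited value within the 80-100 billion range mentioned earlier in this page, not from the simulation itself):

```python
# one second of brain activity took 40 minutes of computation
simulated_seconds = 1
wall_clock_seconds = 40 * 60
slowdown = wall_clock_seconds / simulated_seconds  # 2400x slower than real time

# and the simulated network was only a small fraction of the brain
simulated_neurons = 1.73e9
brain_neurons = 86e9  # commonly cited estimate
coverage = simulated_neurons / brain_neurons  # roughly 0.02, i.e. about 2%
```

So the hardware ran 2400 times slower than the brain while modeling only about 2% of its neurons, which makes the scale of the mismatch concrete.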

The problem with this question is that it depends on what you're trying to get each of them to do. Yes, the computer can do calculations faster, but that is because math is a logical process. Try to get a computer to mimic the complexity of a conversation, however, and you will see that it is nearly impossible to program a computer to do so effectively. You can only program the computer to notice certain cues in conversation, but there are too many nuances: slang, illogical behavior, emotional responses, and so on. However, if we only wanted to measure how powerful each is, how would one go about measuring that?

The problem is that, when you set a computer on a certain task, it can direct all or most of its resources to that task. The human brain, however, will always be doing other tasks, like keeping the heart pumping and other life-support processes. So if you were going to pit the two against each other, would you only want to measure the power of the prefrontal cortex?

Vision provides the channel for most computer output?

In chapter 2 of his book, Carroll focuses primarily on the human visual system and neglects the other sensory systems, such as the auditory, kinesthetic, touch, taste, and smell systems. I agree with the author that the visual system provides the channel for most computer output, and to make that point clearer he focused only on vision. However, a number of studies have shown that human-computer multimedia systems with the capability to process speech (the auditory system) provide a richer and more robust environment. Playing a fast-action video game or watching a movie is a common example of using audio feedback to aid rapid interaction. Imagine such an application with no audio feedback: performance drops considerably, and the experience becomes duller as well. In particular, once you become familiar with the sounds in a video game, you associate each sound not only with its respective button but also with its function; you know what action you performed without having to look at the button. Hence the auditory system can be as important as the visual system, and sometimes more so.
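The sound-to-function association described above can be sketched as a tiny lookup (entirely hypothetical mapping and file names, for illustration only):

```python
# hypothetical mapping: in a well-designed game UI, each action has a
# distinctive sound, so experienced players identify actions by ear alone
SOUND_FOR_ACTION = {"jump": "boing.wav", "shoot": "pew.wav", "reload": "click.wav"}

def action_from_sound(sound):
    # invert the mapping: audio feedback alone identifies the action,
    # with no need to look at the button visually
    inverse = {s: a for a, s in SOUND_FOR_ACTION.items()}
    return inverse.get(sound)
```

Once the mapping is learned, hearing "pew.wav" tells the player a shot was fired, which is the kind of non-visual feedback channel the paragraph argues for.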

Carroll could also have covered the auditory system and its design implications. Building awareness of all the sensory systems and the capabilities and limitations of the human body can help us design better interactive systems.

Early Stage Processing / Perceptual Theory

Carroll's chapter on design as applied perception discusses perceptual theory. The perceptual theory says to 'always display detail with luminance contrast'. A diagram example in the book shows that black text on a white background is easier to perceive than black text on a gradient background due to the luminance contrast. I agree with this theory, and it is in line with other studies that find increased reading performance and accuracy when participants read dark text on a light background.
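The luminance-contrast claim can be checked numerically; this sketch is my addition, using the WCAG 2.x relative-luminance and contrast-ratio definitions rather than anything from Carroll's book:

```python
def relative_luminance(rgb):
    # WCAG 2.x relative luminance from 0-255 sRGB channel values
    def channel(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

contrast_ratio((0, 0, 0), (255, 255, 255))  # 21:1, the maximum possible
```

Black text on white reaches the maximum 21:1 ratio, while two mid-greys score far lower; detail displayed with high luminance contrast is, in this measurable sense, easier to perceive.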

Based on my online research, some users prefer a black background with white text, stating that it doesn't tire or hurt their eyes as much, especially after viewing a monitor for 12+ hours a day. Most browsers even have add-ons (such as Deluminate for Chrome) that will invert the luminance contrast. However, I find that when I read websites with light text on a black background, it has an effect similar to looking directly at a bright light, in that the images are burned into my eyes. I also tend to leave such websites as quickly as possible.

I haven't yet found any studies offering evidence that a dark display (white text on black) is easier to perceive, even over longer viewing periods. I'm interested in what the results would be in a study that compared dark text on an off-white or cream-colored background against the preferences of participants who favor white text on a black background.

It is important to design informational displays (applications, websites, etc.) in a way that is easy on the user's eyes, especially if they are used for numerous hours a day. Also, in the case of a website meant to draw users in and sell to them (e.g., Amazon), you don't want a display that makes them want to leave the site. So, in the meantime, I'll stick to designing text-driven displays with what studies have found most effective: dark text on a light background.

Works Cited: Bauer, D., & Cavonius, C. R. (1980). Improving the legibility of visual display units through contrast reversal. In E. Grandjean & E. Vigliani (Eds.), Ergonomic Aspects of Visual Display Terminals (pp. 137-142). London: Taylor & Francis.

Piepenbrock, C., Mayr, S., & Buchner, A. (2014). Human Factors, 56(5), 942-951.

Carroll, J. M. (2003). HCI Models, Theories, and Frameworks. San Francisco: Morgan Kaufmann Publishers.

Information Psychophysics vs. Affordance Theory

Carroll argues that affordance theory, as developed by J. J. Gibson, "has little direct relevance" to information psychophysics. This view rests on the fact that Gibson's theory states affordances are provided by physical properties of the environment that are directly perceived. This definition contrasts with information psychophysics, according to Carroll, because on-screen objects are not physical objects and cannot be directly experienced, only indirectly via an input device and some sort of display.

I do not totally agree with Carroll's strong argument that these areas of research are distinctly different. For one, as the chapter places a strong emphasis on visual processing, it seems odd that Carroll would deemphasize the visual aspect of affordances and emphasize the physical differences. In other words, the affordance of an object may largely be determined by visual processing alone. Second, if the objects on screen are very similar in shape, color, and texture to their real-life counterparts, then they should be easy to recognize and to develop a working cognitive model. I think it's easy to see how virtual objects fit well into the object stage of human visual processing. For example: an input text field on a web page provides the features and a pattern that will easily bring forth the memory of the object of a text field, as well as the affordance understood by the blinking cursor when clicking the box. This leads to my third issue with Carroll's position: indirectly interacting with a virtual environment is not that different from physically touching something. Granted, a virtual interaction provides less information than touch, but it does provide information, and it would be a mistake to ignore it as a source of information used to determine affordances.

Something's touching me!

Skin is the largest human organ (Imaeda, 2000) and, as such, the largest network interface of the human body. The ability to sense pressure, temperature, tactile stimuli, proprioception, and pain makes it a reasonable candidate for devising HCI modeling frameworks. Two advantages of incorporating the somatosensory system into the mainstream study of HCI are that (1) the haptic feedback loop uses less cognitive load than visual or audio processes (Shaffer, 2003), and (2) sensory cues can operate as positive interference to override visual and audio stimuli in stressful or distracting situations. As we live more and more in a world where we literally and figuratively take our eyes off the road or away from the screen, we can turn to haptic technologies to help compensate in both common and new user stories.

Working from the precept that sensory memory is the gateway to short-term memory and also is continuously overwritten (Dix, 2004), haptic stimuli can be woven into the user experience. Within the design process of software modeling, haptic input could serve as an alternate flow within a task to excite quick responses and then allow the user to immediately resume the primary flow without residual distraction. It can also be used to signal users to switch systems on an as-needed basis, such as in the case of self-driving cars where the passengers are totally passive when on smart-roads, but need to take control upon entering traditional roads.

In reviewing Dix's models for long-term memory (frames, scripts, and production rules; Dix, 2004), we have an opportunity to incorporate the somatosensory system into our design process by simply adding the concept as a checklist item when we develop these models: are there any facts or experiences relative to touch? If so, add them to the model. Then, when looking for opportunities to create solutions, we will have a cue to remind us to think beyond the usual audio-visual solutions. We can ask ourselves: is there a haptic experience that would benefit the user?

Also, as technology continues to develop tactile experiences on smooth surfaces (PBS, 2013), haptic operators will become an increasingly viable inclusion in the GOMS KLM approach.
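To sketch how a haptic operator might slot into a KLM estimate: the K, P, H, and M times below are commonly cited Keystroke-Level Model values, while the haptic operator `V` and its 0.3-second duration are purely hypothetical assumptions of mine.

```python
# approximate KLM operator times in seconds (commonly cited values)
OPERATORS = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "V": 0.3,   # hypothetical haptic/vibration operator (assumed value)
}

def klm_estimate(sequence):
    # total predicted task time for an operator sequence, e.g. "MPK"
    return sum(OPERATORS[op] for op in sequence)

klm_estimate("MPK")   # ~2.65 s without a haptic cue
klm_estimate("MVPK")  # hypothetical flow with a haptic prompt added
```

The point is not the specific numbers but that, once a haptic operator has an agreed duration, it composes into KLM predictions exactly like the established operators.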

Works Cited

Imaeda, S. (2000). Skin, Our Largest Organ, Presents and Protects Us. In Yale Health Care (Vol. 111, No. 2).

Shaffer, D., Doube, W., & Tuovinen, J. (2003). Applying cognitive load theory to computer science education. In Workshop of the Psychology of Programming Interest Group (pp. 333-346).

Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (chapter 3).

PBS Newshour. (2013).

Pattern-finding machines

Carroll argues that humans have sensory systems that operate as pattern-finding machines, and that these abilities are deeply ingrained in our native architecture, forming a large aspect of our intelligence (Carroll, 2003).

He refutes the suggestion that our systems are infinitely adaptable, holding instead that cognition is like any other system in which a set of capabilities can be broadly assumed in all humans. As romantic as the opposing notion is, I agree with Carroll, because I have routinely established these baselines in perception when conducting user tests. Those baselines have helped to establish design conventions.

The visual system has three distinct stages we can apply those conventions to.

Stage 1, which is pre-attentive and associated with color, motion, and form, is important in UI design not only for determining which features can be perceived rapidly, but also for understanding what can mislead the user, as this stage occurs prior to any conscious attention. Designing with data is one example where the user can be misled through rapid pre-attentive processing of the displayed information.

Colors and shapes are also pre-attentively processed independently, but if you combine them and ask the user to determine both color and shape, the task shifts the cognitive load into pattern perception, or Stage 2.
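The feature-vs-conjunction distinction can be sketched as a toy check (my illustration; representing display elements as (color, shape) tuples and the single-unique-feature pop-out rule are deliberate simplifications of visual search theory):

```python
def pops_out(target, distractors):
    # a target is found pre-attentively (it "pops out") if at least one
    # single feature value is unique to it across the whole display;
    # otherwise finding it requires a conjunction of features, which
    # shifts the work into slower, attentive pattern perception
    for i, feature in enumerate(target):
        if all(d[i] != feature for d in distractors):
            return True
    return False

red_circle = ("red", "circle")
pops_out(red_circle, [("blue", "circle"), ("blue", "square")])  # unique color
pops_out(red_circle, [("red", "square"), ("blue", "circle")])   # conjunction
```

In the first display the target is the only red item, so color alone finds it; in the second, red and circle each appear among the distractors, so only the conjunction identifies the target.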

Object recognition, or Stage 3 of the visual system, has been defined as the ability to rapidly recognize objects over a range of 'identity-preserving transformations', such as position, angle, size, and background (DiCarlo et al., 2013). These variations in appearance are solved in the brain: viewing the same object, albeit transformed, still establishes an equivalence among the various patterns without confusing any of them with other possible objects (DiCarlo et al., 2013). This is significant for UI design as we embark on an era of 3D, VR, and 4D.

One area where I diverge from Carroll’s thinking about perception as it pertains to HCI, is with regards to Gibson’s theory of affordances. Carroll states that because objects on a computer screen can only be interacted with indirectly, Gibson’s theory is problematic (Carroll, 2003). I acknowledge that at the time of writing, modern touch gestures were not possible, and the mouse and trackpad were the primary sources of interaction. However, I would argue that with the advancement of touch, there are physical affordances in an interface and hence Gibson’s theory is now more relevant, if not exactly how he intended it to be applied.


Carroll, J. M. (2003). HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science (Interactive Technologies) Elsevier Science. Kindle Edition.

DiCarlo, J., Zoccolan, D., & Rust, N. (n.d.). How does the brain solve visual object recognition? Retrieved June 6, 2015, from

Universal Design is Becoming a Civil Rights Issue

Now more than ever, day-to-day personal activities rely on interaction with different technology (phones, computers, cars). According to Dix et al., "Universal Design is designing systems so that they can be used by anyone in any circumstance" (2004). This is of vital importance as technology expands on a global level and the same technology or software solution becomes accessible to users from different cultures and of different ages, some of whom must overcome physical or cognitive challenges. "Technology impacts the shape of our lives – it influences the people we stay in contact with, the people we date (and marry), the type of information we consume, the way we consume it, and what we do with it" (Barton, 2013).

People bank from their phone or computers, use video and text to communicate with others, or tell their car to make a call. It is essential that these solutions be able to help anyone regardless of physical or cognitive challenges. In many cases, disabilities or impairments can be overcome or rendered unimportant with smart design and multi-modal solutions.

A few ways in which technology is adapting to help everyone, include:

  • Displays and applications accounting for colorblindness and allowing the visually impaired to see larger text or hear the text read aloud.
  • Homebound users finding new ways to communicate through video chat, online ordering, and textual chat.
  • Users with speech impediments, such as a stutter, finding seamless communication through textual chat.
  • Swipe keyboards and voice commands (Siri and Cortana) helping users with fibromyalgia or extreme arthritis to text or make calls.
  • Users with auditory challenges using Skype or Google Hangouts to sign to each other through video conference.

Universal design is not just necessary to sell products to a larger group of users. It is becoming a civil rights issue, especially given how integrated technology has become in a person's daily life. Just as all buildings must provide a ramp for those unable to use the stairs, technological solutions will eventually be expected to provide ways to ensure all users can comfortably access their banking information, health profiles, and communication tools regardless of any physical or cognitive challenges.

Works Cited

Barton, R. (2013, January 22). Technology's Explosion: The exponential growth rate. Retrieved June 6, 2015, from

Dix, A., Finlay, J., Abowd, G.D., Beale R. (2004). Human Computer Interaction. Retrieved June 6, 2015, from

Blossoming Conversationalists

In Carroll's book "HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science", I was struck by the poetic nature of Suchman's 1987 photocopier usability study. Likening a situation with bad usability to a conversation in which both parties continually misunderstand each other captures one of many apt metaphors for the reality of human-computer interaction: a simple chat. I agree wholeheartedly that this metaphor is a useful and profound one. Computers process information in ways very similar to us, but as conversationalists they can be much like Drax from Guardians of the Galaxy: unable to comprehend colloquialisms or the nuances and subtleties of language, those little things we assume are understood but that could easily be foreign to someone from another place.

A Tower of Babel

Many usability issues boil down to this disparity between the two: the computer "believes" (in the way its code has processed the situation) that it is giving the user the information needed to navigate or to correct errors, but something is still amiss for the user, whose vocabulary and background differ. Without some way to resolve this communication breakdown (help features come to mind), the user feels as though the computer is speaking French when they only know Spanish: an impasse with no real means to move forward.

The Conversation Doesn't Have to End at a Language Barrier

In a conversation between two parties who speak different languages, there are still cases where communication can continue. Gestures are one way to muddle through a communication barrier. A computer may "gesture" by signalling red error messages or providing call-to-action buttons, while a user may "gesture" for help by navigating to Contact Us, FAQs, and so on (if the interface is designed to "accept" the gestures the user is giving). That gesture is either "accepted," if the user is guided somewhere they would like to go and given relevant information and steps, or rejected, if the "conversation" stagnates. If "gestures" from both parties go largely unrecognized or misinterpreted, the exchange becomes like a telephone conversation in two different languages: nearly impossible to follow.

Input and Output: Speaking and Listening

While this was a small portion of Carroll's chapter to get hung up on, the metaphor certainly stuck with me as I continued to mull it over. Computer code is written in "programming languages," and the use of the word "language" can seem no small coincidence. Human-computer interaction involves both parties being able to "speak" via input and "listen" via output, and well-designed computers will strive to understand the myriad "slang," colloquialisms, and quirks of their varying conversation partners.

Works Cited

Carroll, J. M. (2003). HCI Models, Theories, and Frameworks. San Francisco: Morgan Kaufmann Publishers.

Gunn, James, dir. Guardians of the Galaxy. Marvel Studios, 2014. Film.

CMN-GOMS: It Just Makes Sense

As you may have guessed from the title, the CMN-GOMS model makes the most sense to me. This method, presented in Card, Moran, and Newell, "predicted operator sequences and execution times for text-editing tasks…" among others. The very detailed example presented in Figure 4.7 shows how easily a programmer (or trainer; more on that concept coming up) can spot difficulties with certain methods and propose new ideas to solve issues and accomplish tasks faster.

As an IT Trainer

As an IT trainer, I am always on the lookout for ways to make life easier for end users. Although I have never gone into as much detail as this technique describes, I would be very interested in trying it out. For example, I am often told that users have to "click too many times" to get to where they need to go. Although my reaction may be accompanied by an eye roll, this becomes a real problem when a user repeats a task several hundred times each day; an extra two clicks each time could equate to time better spent on more important work.

Documentation is the Key

I am a huge fan of documentation, so the fact that this model is in program form intrigues me. Following the steps in the model gives you the exact same result every time, and the path can be modified, depending on what the user is doing, to obtain a different result. Although my employer is most likely years away from obtaining an eye-tracking instrument (if ever), I would be very interested in seeing how much time a user spends scanning a page for where to click next, and whether a more efficient use of his or her time can be achieved. It is very difficult to argue with the results this model presents. It removes bias and returns concrete, measurable facts. And when you are dealing with end users who are only interested in how quickly they can get their work done, those facts are the best way to prove whether or not their jobs can be improved.
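The "program form" of a GOMS analysis can be sketched in a few lines of code. This is a minimal, illustrative sketch: the operator labels and timing values are loosely modeled on published Keystroke-Level Model averages, and the two competing methods are invented examples, not ones from Carroll's Figure 4.7.

```python
# Sketch of a CMN-GOMS-style analysis: a goal is achieved by a
# method, a method is a sequence of operators, and each operator
# has an estimated execution time in seconds.
# Times below are illustrative (roughly KLM-like averages).

OPERATOR_TIMES = {
    "K": 0.2,   # press a key or mouse button
    "P": 1.1,   # point with the mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def method_time(operators):
    """Total predicted execution time for a sequence of operators."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# GOAL: delete a word. METHOD A: point, drag-select, then Delete.
select_and_delete = ["M", "H", "P", "K", "K", "H", "K"]

# METHOD B: point, double-click the word, then press Delete.
double_click_delete = ["M", "H", "P", "K", "K", "K"]

print(round(method_time(select_and_delete), 2))   # 3.85
print(round(method_time(double_click_delete), 2)) # 3.45
```

Running both methods side by side is exactly the kind of comparison the chapter describes: the model predicts that the double-click method saves 0.4 seconds per deletion, a difference that compounds quickly for a user repeating the task hundreds of times a day.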


Carroll, J. M. (2003). HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science. p.75 and p. 77 (Figure 4.7).

Perception issues in different age groups

Because different age groups perceive information differently, perception gaps can arise for certain groups, such as older users and teenagers. Based on Carroll's theory, some of these problems stem from the different information they receive in the first two stages (features in early vision and pattern perception). Others are caused by the different ways they map that information in stage three.

Older people’s color vision declines with age, and they become worse at distinguishing between similar colors. In particular, shades of blue appear faded or desaturated. This means that in stages one and two of Carroll's model, they may receive entirely different information because of their vision. In addition, older people who have not used mobile devices and computers may never have seen the icons we take for granted. Someone who has never seen a floppy disk cannot be expected to link its image to the meaning of "Save."

Teenagers, by contrast, grow up with smartphones and do nearly everything on their phones. However, existing icon design on mobile phones does not really consider how they interpret the meanings. Consider, for example, the address book, voicemail, and TV icons.

Those icons are based on objects that are 15 to 30 years old, and most teenagers have no lived experience with them. The first time they see an address book may be on a mobile phone; they then have to learn the association and memorize it. Even so, they may still wonder why the TV icon has “rabbit ears” on top and two dots on the right.

Designers do not face much trouble for now, because young people are quick learners and better at memorizing than other age groups. But in essence, how teenagers perceive this information is being ignored.

The Endurance of Patterns

In chapter 2 of HCI Models, Theories, and Frameworks we learn that human intelligence is broadly characterized as the ability to identify patterns and that the visual system dominates our perceptual systems (Carroll, 2003). I agree that using patterns and design principles based on the science of human perception creates rules grounded in a well-established theory. These principles have served interface design well and allowed HCI practitioners to create usable, functional designs.

In the future, I believe that identifying patterns will be key to future interface developments that HCI is now exploring. For example, researchers at MIT’s Media Laboratory are working to understand how to interpret human emotional states (Design Mind, 2014). These researchers are gathering data on facial expressions, body posture and gestures, and speech patterns, such as tone and inflection. Uncovering patterns in these data will allow computer programs to identify human emotion so that designers can create interfaces that go far beyond functional and usable—they become pleasurable and seamless.

Patterns are essential to human success, so much so that artificial intelligence innovator Ray Kurzweil is teaching artificially intelligent machines to think, based on the incremental refinement of patterns (Basulto). Humans have been called pattern-recognition machines, so it shouldn’t be surprising that the fields of psychology, neuroscience, and HCI will continue to mine them to create more effective products and interfaces. We can see patterns being used to improve Google driverless cars, medical diagnosis machines, and wearable devices.

Works Cited

Basulto, D. (n.d.) Humans are the World’s Best Pattern-Recognition Machines, But for How Long? Big Think. Retrieved from

Carroll, J. M. (2003). HCI Models, Theories, and Frameworks. San Francisco: Morgan Kaufmann Publishers.

Design Mind (2014). Design and the (Ir)Rational Mind: The Rise of Affective Sensing. Retrieved from

What is Cybernetics?

Cybernetics derives from a Greek word meaning "to steer." A technological system can steer through different directions and thereby self-correct to reach a goal. Cybernetics is a powerful language for describing systems that have goals. It is also an exceptional way of describing how the world works, and it helps in developing effective organization, control, and communication in animals and machines. Such a system also helps us filter out what is irrelevant to a situation and consider only what is relevant. An example of cybernetics is opening an email account with Google. Here the goal is to successfully open an account, which may involve several self-correcting steps such as selecting a gender, using a correct area code for a mobile phone number, making sure to pick a username not already taken by someone else, selecting a password format acceptable to the system, and more. The system gives feedback to the user whenever a required or important field, such as the mobile number or country selection, is submitted incorrectly. Finally, it steers the user to successfully open a new email account.
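The sign-up example above is a feedback loop in miniature, and can be sketched in code. This is a hypothetical illustration: the field names, validation rules, and function names are invented, not Google's actual sign-up logic.

```python
# A cybernetic loop in miniature: the system compares submitted input
# against its goal state and feeds back errors; the user self-corrects
# on each attempt until the goal (a valid account) is reached.

TAKEN_USERNAMES = {"alice", "bob"}  # illustrative stand-in data

def validate(form):
    """Return a list of feedback messages; an empty list means success."""
    errors = []
    if len(form.get("password", "")) < 8:
        errors.append("Password must be at least 8 characters.")
    if form.get("username") in TAKEN_USERNAMES:
        errors.append("That username is already taken.")
    return errors

def sign_up(attempts):
    """Steer through successive corrected attempts until validation passes."""
    for form in attempts:
        feedback = validate(form)
        if not feedback:
            return "account created"
        # In a real interface this feedback is displayed to the user,
        # who corrects the flagged fields on the next attempt.
    return "gave up"

attempts = [
    {"username": "alice", "password": "secret"},       # two errors fed back
    {"username": "carol", "password": "secret"},       # password still too short
    {"username": "carol", "password": "longenough1"},  # goal reached
]
print(sign_up(attempts))  # account created
```

The loop is the essential cybernetic idea: the system never dictates the correct input outright, it only reports the gap between the current state and the goal, and the user's corrections steer each attempt closer to success.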

Cybernetics is comprehensive in nature and therefore takes physical, technological, biological, and social systems into account. Many individuals have tried to define cybernetics in different ways, so it can legitimately have multiple definitions without contradicting itself.

Works Cited

Cross, J. (2010, September 21). What is Cybernetics? Part 1 of 2 Interview with Paul Pangaro. Retrieved June 21, 2015, from