Computer Graphics/GUI

GUI (pronounced GOO-ee) stands for Graphical User Interface: a graphical communication interface made for software users. For most users it is the only way to communicate with a software system, the alternatives being invoking its functions in code (as with a software library) or typing commands at a command line.

A GUI consists of one or more windows, and each window contains widgets. A widget is any communication element that takes in user input, changes its state based on that input, displays something the user needs to know, or some combination of these. A widget may itself open a new window when an interaction needs to proceed hierarchically (for example, a button that opens a settings dialog). Each window is associated with a virtual surface, and when placing widgets the programmer must specify which surface/window each widget belongs to. Widgets are typically static, although it is possible to create widgets that can be dragged with a mouse, or with a finger or stylus on a touchscreen device.
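The window/widget relationship described above can be sketched in a few lines. This is a toolkit-agnostic model, not a real API: the class names and methods are invented for illustration (real toolkits such as Qt or Tk have their own, richer interfaces), but it shows the key point that every widget must be told which window's surface it belongs to.

```python
class Window:
    """A window owns a virtual surface and the widgets placed on it."""
    def __init__(self, title):
        self.title = title
        self.widgets = []          # widgets placed on this window's surface

    def add(self, widget):
        widget.window = self       # the programmer specifies the owning surface
        self.widgets.append(widget)
        return widget

class Widget:
    """A communication element: takes input, holds state, displays something."""
    def __init__(self, kind, x, y):
        self.kind = kind           # e.g. "button", "label", "slider"
        self.x, self.y = x, y      # position on the owning window's surface
        self.window = None         # set when the widget is added to a window

main = Window("Circuit Simulator")
run_button = main.add(Widget("button", x=10, y=10))
```

A real toolkit does the same bookkeeping internally; here it is spelled out so the window-owns-widget hierarchy is visible.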

A widget can be thought of as an interactive image. For example, a circuit simulator application might have several components whose appearance is specified by images among the resources the application ships with.

A GUI works by continuously listening for events, most of them mouse events. But depending on a single device such as the mouse is poor for accessibility: the mouse may fail, or the user may be unable to use one. So keyboard events, such as pressing the Tab and Enter keys, are usually listened for as well. Sophisticated applications also provide keyboard shortcuts, so keyboard events are used irrespective of accessibility concerns. Listening for events, capturing them when they occur, and handling them is how a GUI works. Handling an event usually involves changing the graphical output of the GUI. To continue the previous example, if the GUI detects a click-and-hold event on one component, it handles the event by creating a copy of the component at the mouse location and moving the component's image along with the mouse. When it detects the release event, it 'places' the component at that location, which simply means drawing the component's image there and no longer updating its position from the mouse. The GUI creates the illusion of actually holding the component, even though all that happens is the component's image being redrawn at a different location.
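The press/drag/release cycle above can be sketched without any real windowing system. This is a minimal model with simulated events: the event names ("press", "move", "release") and the Component class are invented for illustration, and the copy-creation step is omitted so only the drag itself is shown. While the component is held, each move event redraws it at the pointer's position; after release, further moves no longer affect it.

```python
class Component:
    """A draggable circuit component, identified only by its position."""
    def __init__(self, x, y):
        self.x, self.y = x, y

def handle_events(events, component):
    """Process a stream of (kind, x, y) pointer events.
    'Dragging' is nothing more than redrawing the component at the
    pointer's location while it is held."""
    held = False
    for kind, x, y in events:
        if kind == "press":
            held = True                       # click-and-hold begins
        elif kind == "move" and held:
            component.x, component.y = x, y   # follow the pointer
        elif kind == "release":
            held = False                      # component is 'placed' here
    return component

c = Component(0, 0)
handle_events(
    [("press", 0, 0), ("move", 5, 7), ("move", 20, 30),
     ("release", 20, 30), ("move", 99, 99)],   # last move is ignored: not held
    c,
)
```

A real toolkit delivers these events through callbacks registered on widgets (for example, Tk's event bindings) rather than a list, but the handler logic is the same.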