What is Computing

In this course, we focus on computing principles (big ideas) rather than computer technologies, which are tools and applications of the principles. Computing is defined by a set of principles or ideas that underlie a myriad of technologies. Technologies can be complex and constantly evolving, but the principles stay the same. In the second half of the course, we will study various technologies to demonstrate the power of computing and how the principles are applied.

In addition to principles of computing and technologies, there are practices of computing - what professionals do to advance computing.

The chart below illustrates the difference between principles of computing and practices of computing. Principles underlie technologies and practices. A consumer exploits the power of computing through applications built for various tasks. We believe everyone needs to know the principles of computing because they are widely applicable. As professionals in the field of computing, we need to know the two ends and everything in the middle - the practices (activities and skills that make computing useful and effective).

[Chart: principles of computing underlying technologies and practices]

We will use the terms computing and computation interchangeably throughout the book.

Principles of Computing

Computing is fundamentally about information processes. One of the big ideas of computing is that information processes can be carried out purely mechanically via symbol manipulation. The agent doing the computing, whether a thinking human being or a machine (computer), does not matter. Toward the end of the book we will see that this is true for all modern computers - digital computers manipulate two symbols (zero and one) blindly according to instructions.

An Analogy

The following analogy from the book Thinking as Computation [1] illustrates the idea. Imagine that we have the following table of symbols.

     a  b  c  d  e  f  g  h  i  j
  a  aa ab ac ad ae af ag ah ai aj
  b  ab ac ad ae af ag ah ai aj ba
  c  ac ad ae af ag ah ai aj ba bb
  d  ad ae af ag ah ai aj ba bb bc
  e  ae af ag ah ai aj ba bb bc bd
  f  af ag ah ai aj ba bb bc bd be
  g  ag ah ai aj ba bb bc bd be bf
  h  ah ai aj ba bb bc bd be bf bg
  i  ai aj ba bb bc bd be bf bg bh
  j  aj ba bb bc bd be bf bg bh bi

The symbols can be any set of symbols; we pick letters from the English alphabet for simplicity. We can define a procedure P that takes two symbols ('a' through 'j') as input and produces two symbols from the same set as output. Internally, the procedure uses the first input symbol to find the row that starts with that symbol, uses the second input symbol to find the column with that symbol at the top, and returns the pair of symbols at the intersection. It is not hard to imagine that such a table-lookup procedure could be carried out purely mechanically (blindly) by a simple agent (e.g. a device or a machine). Of course a human being can do it too, but this type of symbol manipulation requires no human intelligence. (A minimal code sketch of such a procedure appears after the list below.) Two conclusions can be drawn from this thought experiment:

  • Symbol manipulation can be done mechanically.
  • The machine that performs the manipulation does not need to know the meaning of the symbols nor the purpose of the manipulation.
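
To make this concrete, here is a minimal sketch in Python (the names SYMBOLS, TABLE, and P are our own illustration, not part of the original analogy). The table is generated arithmetically for brevity, but once built, P does nothing except a blind lookup:

    SYMBOLS = "abcdefghij"

    # Build the lookup table once; entry (x, y) holds the two output symbols.
    # (We generate the entries arithmetically for brevity; P itself never
    # computes - it only looks up.)
    TABLE = {
        (SYMBOLS[i], SYMBOLS[j]): SYMBOLS[(i + j) // 10] + SYMBOLS[(i + j) % 10]
        for i in range(10)
        for j in range(10)
    }

    def P(x, y):
        # Pure symbol manipulation: row x, column y, return the intersection.
        return TABLE[(x, y)]

    print(P("d", "f"))  # -> "ai", which reads as 08 once we interpret a=0..j=9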

This procedure becomes meaningful if we know how to interpret the symbols. For example, if the symbols 'a' through 'j' represent the quantities 0 through 9 respectively, this procedure performs single-digit decimal addition. For instance, P(d, f) = P(3, 5) = ai = 08, which is the correct result of 3 + 5. The following table is essentially the same as the previous one, except that it uses symbols that are meaningful to humans.

     0  1  2  3  4  5  6  7  8  9
  0  00 01 02 03 04 05 06 07 08 09
  1  01 02 03 04 05 06 07 08 09 10
  2  02 03 04 05 06 07 08 09 10 11
  3  03 04 05 06 07 08 09 10 11 12
  4  04 05 06 07 08 09 10 11 12 13
  5  05 06 07 08 09 10 11 12 13 14
  6  06 07 08 09 10 11 12 13 14 15
  7  07 08 09 10 11 12 13 14 15 16
  8  08 09 10 11 12 13 14 15 16 17
  9  09 10 11 12 13 14 15 16 17 18

Now that we have a simple procedure P that can instruct a simple agent to add two single-digit decimal numbers, we can devise a new procedure P1 that adds three single-digit decimal numbers, as shown in the following chart.

[Chart: procedure P1, built from three instances of procedure P]

The new procedure P1 employs three instances of procedure P to add three decimal digits and returns two digits as the result. We can view the procedures as machines with inputs and outputs, and the lines as pipes that allow symbols to flow from one place to another. It is not hard to imagine that an agent that can carry out P can also carry out P1, as P1 is entirely made up of P. Note that the dotted rectangle represents the new procedure P1 made up of instances of P, and that the answer given by P1 for the sample inputs is correct. Again, the symbols used in the process can be any set of symbols, because internally only simple table lookups are performed.
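
Continuing the sketch, here is one plausible wiring of P1 in Python, using the P defined earlier (the exact arrangement of the three instances is our assumption; the chart may wire them differently):

    def P1(x, y, z):
        c1, s1 = P(x, y)      # first instance of P: add the first two digits
        c2, s2 = P(s1, z)     # second instance: add the third digit to the partial sum
        _, carry = P(c1, c2)  # third instance: combine the carries (their sum is at most 2)
        return carry + s2

    print(P1("j", "j", "j"))  # -> "ch", i.e. 27, the sum 9 + 9 + 9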

Now imagine that we can use P1 to construct even more complex procedures, for example procedure P2 in the following chart.

[Chart: procedure P2, built from instances of procedure P1]

P2 uses P1 to add two double-digit numbers. In fact, we can chain more P1s onto the design to handle numbers with any number of digits.
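
In the same sketch style (again, the wiring is our assumption), P2 can be built from P and P1:

    def P2(x10, x1, y10, y1):
        # Inputs are the (tens, units) symbols of two numbers.
        c0, s0 = P(x1, y1)         # units column: one instance of P
        c1, s1 = P1(x10, y10, c0)  # tens column: P1 absorbs the carry from the units
        return c1 + s1 + s0        # three output symbols

    print(P2("d", "e", "f", "i"))  # -> "ajc", i.e. 092 = 34 + 58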

By now we can make the following observations:

  • Any machine that can perform P can also perform P1, P2, and so on.
  • We have built procedures that perform seemingly intelligent activities by composing simpler ones, while keeping them executable by simple machines.

If we follow the same line of reasoning, it is not hard to imagine that we can create increasingly complex procedures to instruct the simple machine to do progressively more intelligent things, such as:

  • integer subtraction
  • integer comparison (subtract and check the sign of the result)
  • integer multiplication (repeated addition; see the sketch after this list)
  • fraction arithmetic (representing each fraction as a pair of integers)
  • representing systems of equations as matrices of integers and solving them using matrix operations
  • modeling complex physical systems with systems of equations and performing numerical simulations of those systems
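
As a small illustration of the first few items (ours, not the book's), multiplication reduces to repeated addition, so a machine that can only add can also multiply:

    def multiply(a, b):
        # Repeated addition, for a non-negative integer b; addition itself
        # reduces to blind table lookups as shown above.
        total = 0
        for _ in range(b):
            total += a
        return total

    print(multiply(6, 7))  # -> 42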

In summary, this example shows that simple symbolic operations can be assembled into larger procedures that perform amazing activities through computational processes. Such activities are not limited to numerical calculations. If we can represent abstract ideas as symbols (just as we represent abstract quantities as concrete numbers) and devise procedures to manipulate the symbols according to the relations among the ideas, we can model reasoning as a computational process. This is what computer science is fundamentally about - information processes with two essential components: representations and a sequence of rules for manipulating those representations. Note that it has nothing to do with electronics or physics. The machine that carries out such a process does not need to know the meaning of the symbols or why the process yields correct results. The machine only needs to follow the procedures (sets of rules) blindly.

As an example, you can read about a mechanical computer (difference engine) designed by Charles Babbage that can tabulate polynomial functions:

[Image: Babbage's difference engine]

Another Analogy

Richard Feynman used a similar analogy (a file clerk) to explain how computers work from the inside out in his Computer Heuristics Lecture (1 hour 15 min): http://www.youtube.com/watch?v=EKWGGDXe5MA

History

We have now learned that computing is, in essence, a certain kind of symbol manipulation. A computer's ability to perform amazing tasks depends on its ability to manipulate symbols according to well-defined rules. In fact, digital computers manipulate only two symbols - zeros and ones. The intelligence of computing lies in the design and implementation of the rules/programs.
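
To connect this back to the earlier tables, here is a sketch (our illustration) of the same lookup idea using only two symbols: the rule table of a one-bit full adder, the kind of building block from which binary addition in digital computers is constructed:

    # (a, b, carry_in) -> (carry_out, sum): pure lookup over the symbols "0"/"1".
    FULL_ADDER = {
        ("0", "0", "0"): ("0", "0"),
        ("0", "0", "1"): ("0", "1"),
        ("0", "1", "0"): ("0", "1"),
        ("0", "1", "1"): ("1", "0"),
        ("1", "0", "0"): ("0", "1"),
        ("1", "0", "1"): ("1", "0"),
        ("1", "1", "0"): ("1", "0"),
        ("1", "1", "1"): ("1", "1"),
    }

    print(FULL_ADDER[("1", "1", "0")])  # -> ("1", "0"): one plus one is binary 10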

Later, when we talk about computer machinery, we will see that computers are constructed on exactly these principles.

You may wonder where these ideas came from. Many people in history made significant contributions to the ideas of computing and computers. Gottfried Leibniz (1646–1716), a German philosopher, is considered the first person to dream of reducing reasoning to calculation and of building a machine capable of carrying out such calculations. He observed that in arithmetic we represent abstract quantities using symbols and manipulate the symbols according to rules to get useful results. He dreamed that we could likewise represent abstract ideas using symbols and reason about those ideas through similar concrete symbol manipulation. Such manipulations give us correct results not because whoever performs them is intelligent but because the rules of manipulation mirror the relationships between quantities and the logical relations between ideas.

Because of Leibniz's dream, we now have computer science and universal machines called computers. A computer is fundamentally a physical device that can manipulate symbols following very simple logic rules. Almost all computers are electronic because they happen to be cheaper and easier to build that way. Computer science is fundamentally about the information processes (dealing with abstract ideas) that take place through symbol manipulation following a recipe (a set of rules). Such recipes are also known as algorithms. No wonder so many computer programming books are called cookbooks :) In computer science we study how to represent information and how to design and apply algorithms to get meaningful results. There are usually many ways to perform the same task; comparing algorithms for evaluation purposes is called algorithm (complexity) analysis. Communicating an algorithm (recipe) to a computer is called programming/software development. The languages we use for such communication are called computer programming languages. The artifacts of programming are computer programs, or software. The engineering discipline we apply in the software development process to produce quality software is called software engineering. So computer science is more about problem solving than about computers; computing science is probably a more appropriate name for this discipline.
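
As a small sketch of "many ways to perform the same task" (our example), here are two algorithms that sum the integers from 1 to n; comparing their costs - n additions versus a constant number of operations - is exactly what algorithm analysis does:

    def sum_by_loop(n):
        # Linear time: performs n additions.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # Constant time: Gauss's closed form n(n + 1) / 2.
        return n * (n + 1) // 2

    print(sum_by_loop(100), sum_by_formula(100))  # -> 5050 5050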

Practices of Computing

Principles are fundamental ideas that permeate all aspects of computing. Practices are not principles, but they are worth identifying because they capture what computing professionals actually do. Practices, sometimes called "know-how", define someone's skill set and level of competency: beginner, competent, and expert. Four core practices of computing are identified in the Great Principles of Computing project:[2]

  • Programming (including multilingual programming practice)
  • Systems and systems thinking
  • Modeling, validating, testing, and measuring
  • Innovating

Programming is an integral part of computer science because it allows us to explore the field's abstract ideas in concrete ways. It is also an exciting creative process, which brings a great deal of satisfaction when we make computers do useful things. In this course we will program in a very high-level graphical programming environment to explore ideas in computer science.

Donald Knuth regards programming as akin to composition: well-written programs are a pleasure for others, or for yourself, to read. He believes that programming is triply rewarding:

  • writing beautiful code (aesthetic)
  • doing useful work (humanitarian)
  • getting paid (economic)

Programming a computer is essentially teaching the computer how to do things. As we mentioned previously, computers are simple machines that strictly follow orders. For a computer to do the right task, the instructions in our program must be correct and logical. Programs that are executable on a computer are software, serving as the brain of the computer. Software with errors is called buggy software. Testing software on an actual computer can help catch most bugs. Testing provides almost immediate feedback on the quality of our programs so that we can fix bugs and improve them. Because of this, we believe programming makes us better thinkers and learners. Later we will see why it is hard to prove the correctness of programs.
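
As a tiny sketch of that feedback loop (a hypothetical example of ours), a single test can expose a bug the moment the program runs:

    def average(a, b):
        # Hypothetical buggy version: operator precedence divides only b.
        return a + b / 2

    # This assertion fails immediately (average(4, 8) returns 8.0, not 6),
    # exposing the bug; the fix is to write (a + b) / 2.
    assert average(4, 8) == 6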


References

  1. Levesque, Hector J. Thinking as Computation. MIT Press. ISBN 9780262016995.
  2. Denning, Peter. The Great Principles of Computing. http://denninginstitute.com/pjd/GP/GP-site/welcome.html