A computing system is a dynamic entity, used to solve problems and interact with its environment. Note that the term is used distinctly from the word computer: a computer is a device, whereas a computing system is composed of hardware, software, and the data that they manage. Computer hardware is the collection of physical elements that make up a machine and its related pieces: the casing, circuit boards, electronic chips, wires, disks, keyboards, monitors, and so on. Computer software is the collection of programs that provide the instructions that a computer carries out. At the very heart of a computing system is the data that it manages.

Planning and System Installation

Installing a New System

A screenshot of FileZilla installing, a typical scenario associated with the word "installation."

You might typically associate the word "installation" with that of a program or application, such as the FileZilla installer pictured above, that you have downloaded from the internet. You typically see some sort of loading screen indicating the progress of the installation, and you are given a variety of options that you can set in order to get the most out of the application.

Installing a new system can be thought of as similar to installing a program on your computer, in that something new is placed into an existing environment. However, the two must not be confused. Installing a computer program is not the same as installing a new system: the latter is about the development and deployment of an entire system, the focus of this section, rather than just unpacking files for execution.

The System Environment

1.1.1 Identify the context for which a new system is planned.

In its most general sense, an organisation is a group of people working towards a collective goal, linked to its environment, or "context". The environment, therefore, defines the limits of an organisation. When systems are built, they are defined by the needs of their creators, and as such, systems are not independent entities but exist within an environment. This environment affects both the functioning and the performance of the system. Sometimes the environment may be thought of as a system in its own right, but more generally it consists of a number of other systems that interact with each other.

Example: Spreadsheet Tools

When a new system is planned, it is generally the result of a change in the environment. An outdated or archaic system might exist and, as technology progresses, a more efficient and dynamic entity may replace it. An example of such a case arose with the development of spreadsheet tools in the late 1980s and early 1990s. For most of the twentieth century, lists, data, and various other records were maintained on physical charts and tables. Today, similar records can be created simply through the use of desktop software, and maintained and edited far more flexibly within these new computer systems.

Example: IPv6

For years, a version of the Internet Protocol (IP) known as IPv4 (IP version 4) was used to provide the network layer of the Internet. The Internet rapidly outgrew the 32-bit addressing system defined by IPv4, so in order to solve this problem, as well as to establish other useful features such as multicasting, a new version called IPv6 was developed. IPv6 uses 128-bit addresses and has been accepted as a viable successor, so conversion from IPv4 to IPv6 is now taking place. It is expected that 32-bit addresses will be phased out by 2026.
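
The difference in scale is easy to demonstrate. Below is a minimal sketch using Python's standard ipaddress module; it prints the size of each address space and parses one example address of each kind (both addresses come from ranges reserved for documentation, not real hosts).

```python
# Comparing the IPv4 and IPv6 address spaces with the standard library.
import ipaddress

ipv4_space = 2 ** 32     # 4,294,967,296 possible addresses
ipv6_space = 2 ** 128    # roughly 3.4 * 10**38 possible addresses

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:,}")

# The same kind of host identifier in both notations:
print(ipaddress.ip_address("192.0.2.1"))    # 32-bit dotted-quad IPv4
print(ipaddress.ip_address("2001:db8::1"))  # 128-bit colon-hex IPv6
```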

It is important to keep in mind the extent and limitations of new systems as they are introduced. Consider the Internet's rollover to IPv6. While the benefits include more IP addresses, integration of additional security features (IPsec), and enough addresses for each unique device connected to the internet, including servers, laptops, phones, and smart meters, the switchover has some inherent limitations. First, switching over to an IPv6 stack may be quite expensive and time-consuming for larger companies. Additionally, the process requires training IT staff and replacing a lot of archaic equipment. As of late 2014, the switchover is still a work in progress.

Changes in Environment

1.1.2 Describe the need for change management.

Moore's law, the observation that the number of transistors on a chip doubles roughly every two years, describes the natural progression of technology, and it implies a continual development of new systems. These new systems will be developed in response to a problem or to replace an older system. In the latter case, a fluid transition from one system to another is needed, and that transition requires management. Change management requires the technical ability to oversee the transition and a say in how it is best implemented.

Problems with Change

1.1.3 Outline compatibility issues resulting from situations including legacy systems or business mergers.

A whole host of errors or incompatibilities may stem from the transition from one software system to another, and the insight of an external party or internal expert may reduce the friction involved in implementing the newer system.

Legacy Systems

Legacy systems refer to outdated computer systems that have been superseded by newer technologies. Such systems have many disadvantages. First, costs for maintaining older systems may be higher as the systems may fail more frequently with age. Second, technical support for these systems may no longer be available. Additionally, legacy systems tend to be more vulnerable to security threats due to a lack of security updates.

Business Mergers

Business mergers refer to the combining of two or more business entities, usually to save costs or to expand. During a business merger, one common system needs to be adopted across all entities. Four approaches to integration are:

  1. Keep both information systems and develop them to have the same functionality (high maintenance cost)
  2. Replace both information systems with a new common solution (high initial cost of installation and of training employees)
  3. Keep the best information systems from each organisation (employees may be unfamiliar with the other entity's information systems)
  4. Keep only one of the systems (may be restricted by organisation policy)

Compatibility Issues

When a system changes, there are repercussions. These repercussions can be felt locally by the users who work with the system on a day-to-day basis, or much more widely by everyone who relies on it. During the design process of a new system, developers must take the effects of this change into account. Following the earlier example of the spreadsheet revolution, people who began to use the software faced a learning curve. This is the most basic and fundamental sense of the term "compatibility issue": if they could not use the software, they could not work, or they had to continue using their old system, only to see it become more and more obsolete.

Compatibility issues these days can have greater repercussions, especially when systems are heavily relied upon. If a system is developed without taking into account the fact that other users, perhaps even third parties, rely on it, then it might experience a whole host of problems. Going back to the spreadsheets: if an organisation begins using spreadsheet software to manage the orders for a particular good, and it co-operates with another organisation that still uses pen and paper, the two are going to have great difficulty working together. The new system did not take into account users who still manage their orders in this manner, and it does not allow these people to transfer the details easily. Because the software is not bespoke and is developed for a general audience, some end-users may find it difficult to utilise the new system. While one solution may be to provide documentation so comprehensive and easy to read that it eliminates any obscurities, in some cases data simply can no longer be transferred between the two parties.

This is a particularly extreme example of a compatibility issue, and it illustrates the fact that systems need to be designed with other systems, older systems, and all users in mind. Other typical examples include web-application APIs, desktop software updates, and physical hardware upgrades (such as the move from cassettes to CDs).

Software incompatibility is a situation where different software entities or systems cannot operate satisfactorily, either cooperatively or independently, on the same computer, or on different computers linked by a local or wide area network.



1.1.4 Compare the implementation of systems using a client's hardware with hosting systems remotely.

Netflix is a classic example of SaaS.

Solutions to Change

Software-as-a-Service (SaaS)

Software as a service (SaaS; pronounced /sæs/[1]) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted.[2][3] It is sometimes referred to as "on-demand software".[4] SaaS is typically accessed by users via a thin client, through a web browser. SaaS has become a common delivery model for many business applications, including office and messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization,[4] accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), talent acquisition, content management (CM), antivirus software, and service desk management.[5] SaaS has been incorporated into the strategy of nearly all leading enterprise software companies.

Unlike locally hosted software, which has to be updated manually, SaaS programs are updated centrally by the vendor. Additionally, SaaS incurs no maintenance costs for the client and lessens the need for server technicians. SaaS can also often accommodate older client systems, since little more than a web browser is required.

On the other hand, SaaS downtime is out of the customers' control. Perhaps more importantly, SaaS presents serious security problems: as the data is stored remotely, the vendor has full access to all customer data. In some situations (such as hospitals or human rights NGOs) this compromise is unacceptable, or even unlawful (see [1]).

Network-as-a-Service (NaaS)

NaaS describes services for network transport connectivity.[1] NaaS involves the optimisation of resource allocations by considering network and computing resources as a unified whole.[2] This concept can be appealing to new business owners because it saves them from spending money on network hardware and on the staff needed to manage a network in-house. In essence, the network becomes a utility, paid for just like electricity, water, or heat. Because the network is virtual, all its complexities are hidden from view. NaaS is not a new concept, but its deployment has been hindered by some of the same concerns that have affected other cloud computing services, especially questions about the provider's ability to guarantee high availability (HA). Other concerns include dealing with service level agreements (SLAs), compliance issues related to data sovereignty, and the possibility of vendor lock-in.

Alternative Installation Processes

1.1.5 Evaluate alternative installation processes.

Parallel running

In parallel running, the new system is implemented alongside the existing system. This can only occur for a changeover, not for an initial implementation.

With this method, both the original (legacy) system and the new system run over a period of time. All new data is input into both the new and the old system, and the results and outputs of the new system are compared with those of the old one in order to make sure that the new system is working properly. As soon as it is certain that the new system is functioning the way it is supposed to, the old one is shut down. This method is the most secure and least vulnerable, as even in the worst case of the new system failing completely, no data will be lost. However, it requires a lot of work from both the person putting the system in place and the staff, who have to input all data twice.
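
As a minimal sketch of the comparison step, the parallel run below checks the new system's output against the legacy system's output for every record; both process_order functions are hypothetical stand-ins for the real systems.

```python
# Hypothetical stand-ins for the legacy and new order-processing systems.
def process_order_legacy(order):
    return {"total": round(order["qty"] * order["price"], 2)}

def process_order_new(order):
    return {"total": round(order["qty"] * order["price"], 2)}

def parallel_run(orders):
    """Feed every order to both systems and collect any disagreements."""
    mismatches = []
    for order in orders:
        old_result = process_order_legacy(order)
        new_result = process_order_new(order)
        if old_result != new_result:
            mismatches.append((order, old_result, new_result))
    return mismatches

orders = [{"qty": 3, "price": 9.99}, {"qty": 1, "price": 4.50}]
print(parallel_run(orders))  # an empty list means the two systems agree
```

Only once such checks come back clean over a representative period would the legacy system be shut down.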

Big Bang/Direct Changeover

In this method of implementation, the existing system is retired or decommissioned and replaced with the new system immediately, with minimal changeover time. This method is quite risky, because if the new system fails, data can be lost, as there is no old system to fall back on. Another requirement of this method is that all staff need to be trained before the installation of the new system. Direct changeover, or Big Bang implementation, requires the smallest amount of work; however, it can cause serious problems, to the extent of the whole system becoming useless, if the new system does not function properly.

Pilot Running

This method is a way to try out the new system on a small scale before implementing it completely. Usually one department of the organisation is chosen to be the pilot project. This department then changes over to the new system. If the implementation in the first department is successful, the rest of the organisation follows. This is a good approach for large-scale companies, and it is fairly secure, because a fault in the system won't have such massive effects. It is also fairly cheap to carry out.

Phased Conversion

Phased conversion, or phased implementation, means that the system is implemented in several phases/stages over a certain period of time. This makes it less vulnerable to total failure than a direct implementation. However, a phased conversion takes a long time to put in place and requires a significantly higher amount of work.

1.1.6 Discuss problems that may arise as part of data migration.

Cassette players are an excellent example of a technology that proved tricky to migrate data from.

Data Migration

One issue that users may encounter is the language barrier. Being lost in translation certainly has its drawbacks when utilising software. Not only can it be expensive to translate between two or more languages, depending on the size of the audience, but there is also a more fundamental computer-related concept: character encoding. The issue of international characters has been addressed since the early 1990s,[1] and the relevant standards have expanded beyond characters to cover international data conventions such as time zones and currencies. Being able to recognise, address, and utilise such standards is important in maintaining compatibility on this front. Expanding a system's audience, or simply making it a little more user-friendly, should always be a priority.
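
A minimal sketch of what goes wrong when encodings are mismatched: the text below is stored as UTF-8 by one system, then wrongly decoded as a legacy Windows encoding (cp1252) by another, producing the familiar garbled output known as mojibake.

```python
# Text as the old system stored it, encoded as UTF-8 bytes.
original = "Zürich, 100 €"
raw_bytes = original.encode("utf-8")

# A migrating system that assumes a legacy encoding garbles the text.
garbled = raw_bytes.decode("cp1252")
print(garbled)                                 # ZÃ¼rich, 100 â‚¬

# Declaring the correct encoding recovers the data intact.
print(raw_bytes.decode("utf-8") == original)   # True
```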

File formats are another worry concerning data migration and compatibility. Having the ability to read and write data in the same manner as before, following an improvement or change in a system, is one of the most fundamental aspects to be taken into consideration.

It can be argued that adopting a new file format is sometimes a necessary part of improving on an older system, and in such cases it may well be appropriate. However, the older data structures need not be forgotten: for example, when you update your favourite music player, you still expect to play your saved playlists. It is this idea of expectation and convenience that drives new software forward, and addressing incompatibilities in file formats should always be a priority. Improvements should be made, but they should not neglect the past.

Additionally, you should be aware of discrepancies in data validation rules. For example, a program designed for American users (who use the MM/DD/YY date format) but installed in Germany (where DD/MM/YY is used) may reject "21/09/14" as an input.
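
This is easy to reproduce. The minimal sketch below parses the same string under both rules using Python's standard datetime module:

```python
from datetime import datetime

value = "21/09/14"

# Under the American MM/DD/YY rule the value is rejected:
# there is no month 21.
try:
    datetime.strptime(value, "%m/%d/%y")
except ValueError as err:
    print("Rejected:", err)

# Under the German DD/MM/YY rule the same value is valid.
parsed = datetime.strptime(value, "%d/%m/%y")
print("Accepted:", parsed.date())   # 2014-09-21
```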

Finally, one must be aware that a complete data transfer is required. Should some disks fail during the transfer process, or should some data be corrupted during the transition, the client could suffer serious losses.
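
One common safeguard, sketched below with hypothetical file paths, is to compare checksums of the source and destination copies before the old system is decommissioned, so that incomplete or corrupted transfers are caught early.

```python
import hashlib

def sha256_of(path, chunk_size=64 * 1024):
    """Hash a file in chunks so even very large files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source_path, destination_path):
    """Return True only if both copies hash to the same value."""
    return sha256_of(source_path) == sha256_of(destination_path)

# Hypothetical usage with paths on the old and new systems:
# assert verify_transfer("old_system/records.db", "new_system/records.db")
```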

1.1.7 Suggest types of testing.

Testing

Testing is the investigation of a computer system in order to find flaws, discrepancies, or errors. Various types of testing can be applied to the various stages of the development process.

Alpha Testing

Alpha testing involves offering an early development version of the program to other developers in-house and receiving feedback from them on improving the product.

Beta Testing

Beta version of the open source OS, Ubuntu 11.04

After alpha testing, the company may choose to provide a version of the product to a select outside group (closed beta) or to the public (open beta), with the expectation that the users will provide feedback and report bugs to the developers.

While beta testing allows individuals to experiment with software before the final version is released, it is not a systematic method of testing. Reports by the public may be of low quality and many duplicate bugs may be reported.

Dry-Run Testing

Dry-run testing is a check to ensure there are no errors in the algorithm or the logic of the system. It is conducted by an engineer on pen and paper, tracing through the algorithm by hand.

Unit Testing

Unit tests are small, individual tests for sub-modules of the program, and they can often be automated. These may include regression tests, which ensure that bug fixes don't accidentally break other parts of the program.
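
As a minimal sketch using Python's built-in unittest module (the discount function and the earlier bug it guards against are hypothetical examples):

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Regression test: guards against a (hypothetical) earlier bug
        # that silently accepted discounts over 100%.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Such tests can run automatically on every change, so a fix in one sub-module that breaks another is caught immediately.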

Integration Testing

In integration testing, all components are tested together to ensure that the system works as a whole.

User Acceptance Testing

In this type of testing, the product is shown to a group of clients as a final check before release to the market. This provides the developers with critical information to better understand the target audience, and it allows them to collect realistic feedback easily.

Debugging

Debugging is a systematic process of finding and correcting the bugs (errors) in a computer program. It is worth mentioning that there are computer programs that can automatically test other programs, which makes the testing process faster and cheaper.

User Focus

1.1.8 Describe the importance of user documentation.

User documentation supports the users of a computer system, covering both hardware and software. Good user documentation can ensure that users are quickly able to adapt to a new system. Documentation is an important part of software engineering. Types of documentation include:

  • Requirements - Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for all that is implemented.
  • Architecture/Design - Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
  • Technical - Documentation of code, algorithms, interfaces, and APIs.
  • End User - Manuals for the end-user, system administrators, and support staff.
  • Marketing - How to market the product, and analysis of the market demand.


1.1.9 Evaluate different methods of providing user documentation.

User documentation can include online help, an FAQ section on the website, and video tutorials.

Comparison of different methods of user documentation

Help Files

Help files are easy for the user to access and cheap to create. Also, users will not lose a help file, whereas a printed manual can be misplaced. In comparison to online documentation, help files do not require an internet connection in order to function properly.

Online documentation

Online documentation requires an internet connection, which can restrict access for the user. However, it is usually easier to use and to search through, and there is the option to update the documentation after release.

Printed manuals

Printed manuals used to be the main method of user documentation, but now they are being replaced by digital manuals to reduce production costs and be more environmentally friendly. The advantage of printed manuals is that they can be accessed at any time, even if the system has not yet been installed, so they can help if there is a problem with the installation.

1.1.10 Evaluate different methods of delivering user training.

Users can be taught through formal classes or online training, or they can learn by themselves.

Comparison of different methods of user training

Self instruction

Self instruction means that users learn how to use the system by themselves. This method is especially suited to widely used systems whose many users cannot all be personally trained. It is the easiest method; however, it is not particularly effective, and it will only work if the program is easy to use and appropriate user documentation exists.

Onsite training

Onsite training requires the trainer to come to the premises where the system is being used and demonstrate it to the users personally. This is probably the most effective way of training, as the users can ask questions directly and the trainer can make sure the system suits the local conditions. However, it is the most expensive type of training because of travel costs. Also, providing training to every employee in a large company can take a long time, and some employees might not be present while the training occurs. Additionally, if the training takes place outside the employees' normal working time, they are likely to be disinterested, disengaged, and reluctant to ask questions.

Remote training

Remote training is easier to organise than onsite training, which also makes it a lot cheaper. However, it might not be as effective as onsite training. An advantage of remote training is that it is very easy to include new employees of the company.

1.1.11 Identify a range of causes of data loss.

  1. Accidental Deletion
  2. Administrative Errors
  3. Poor Data Storage Organisation System
  4. Building Fires/Natural Disasters
  5. Closing program without saving file
  6. Continued use after signs of failure
  7. Data Corruption
  8. Firmware Corruption
  9. External Deletion/Stealing of Data
  10. Physical Damage to storage device
  11. Power Failure

1.1.12 Outline the consequences of data loss in a specified situation.

Data loss is not desirable under any circumstances and needs to be prevented. It can have serious repercussions, such as the deletion of a patient's medical records from a hospital's database.

1.1.13 Describe a range of methods that can be used to prevent data loss.

System Backup

Data loss can have many causes, such as natural disasters or other external factors, theft of data, destruction by malicious software, or corruption due to a system failure. However, the main risk to data is the users themselves, who might delete the data or save a new file under the same name, overwriting the old data.

Data loss is always a huge problem for any company or organisation; however, in certain businesses it can be more damaging than in others. Data loss has the most serious effects in the medical field, as the medical records of patients can sometimes mean the difference between life and death. This is why, even today, a hard copy of every patient's data is kept in case of a computer system failure. Other businesses that are highly data-dependent are internet-based companies that conduct all their business through computer systems, such as web hosting providers. If these providers lose data, they will likely lose many customers as well.

The most important method of preventing data loss is making regular backups of all important data and storing them at a different geographical location. Online storage can help to prevent data loss, as the servers used by large commercial online-storage companies are less likely to have faults; in addition, such companies will usually make backups for you. The problem with this is that an active internet connection is needed in order to access the data. Another problem with online storage is a possible breach of data protection legislation if the information is stored in another country. The privacy of your information might also be at risk if you were to send it over unencrypted internet protocols.
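
As a minimal sketch of the backup idea, using only the standard library and hypothetical paths, the routine below copies a data directory into a new timestamped folder, so no earlier backup is ever overwritten; in practice the backup root would live on a separate disk or at a different geographical location.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source_dir: str, backup_root: str) -> Path:
    """Copy source_dir into a fresh, timestamped folder under backup_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source_dir, target)   # fails rather than overwrite
    return target

# Hypothetical usage: back up ./records onto a separately mounted drive.
# print(backup("records", "/mnt/offsite_drive"))
```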

1.1.14 Describe strategies for managing releases and updates.

Software Deployment

Bad update mechanisms, such as this Trojan-infected fake Java update, cause undue risks to end-users.

No software is perfect. Complex software, such as the Linux kernel or the Windows network stack, often comprises millions of lines of code. As such, bugs and security vulnerabilities are found in critical parts of the OS after release. An automated update mechanism (such as Linux software repositories or Apple's update servers) is therefore crucial to ensuring the continued performance and security of a computer system.
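
At its core, such a mechanism simply compares the installed version against the latest one the vendor publishes. Below is a minimal sketch, in which the version numbers and fetch_latest_version are hypothetical stand-ins for a real update-server query.

```python
INSTALLED_VERSION = (1, 4, 2)   # hypothetical installed release

def fetch_latest_version():
    # A real client would query the vendor's update server over HTTPS;
    # a hard-coded value stands in for that response here.
    return (1, 5, 0)

def update_available():
    # Tuple comparison orders versions component by component,
    # so (1, 5, 0) > (1, 4, 2) holds as expected.
    return fetch_latest_version() > INSTALLED_VERSION

if update_available():
    print("A newer release is available; downloading update...")
else:
    print("The system is up to date.")
```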

References