Lecture 1
The lecture introduces fundamental programming concepts.
We begin by explaining how a computer works and by defining some important notions, such as bit and byte. Issues related to computer systems in general, however, are described only briefly, to the extent relevant to programming.
First of all, we want to understand what programming is. We will learn a little about programming languages, but the main point is to realize that programming does not amount merely to mastering the syntactic rules of a programming language; first and foremost, it is the art of translating problems into algorithms that solve them. Only after such a translation can one code the algorithm in a programming language.
The topics covered here will surely be familiar to readers with some computer-science experience. The course in the Java programming language itself begins in lecture 4.
1.1. How does a computer operate?
As is well known, a typical PC consists of a CPU (Central Processing Unit), various types of memory (placed on the mainboard), and peripheral devices, some of which may be integrated into the mainboard (for instance the graphics card, hard disks, sound card, network adapter, modem, monitor, etc.). These components are called hardware. Is this enough to use a computer for such a common task as writing and editing texts?
The answer is NO. The hardware (even the most modern) isn't enough.
We need software to do that.
A program is an encoded sequence of instructions for the CPU or other hardware devices.
Writing and editing texts can be carried out only by means of appropriate software: an editor or a word processor. Such programs are complicated and often very large, but in essence they amount to sequences of simple instructions for the hardware (mainly the CPU).
It is commonly said that instructions are executed by the application. To be precise, however, one should remember that the instructions are passed on to the CPU, which executes them.
While you sit in front of a blank page of a document wondering how to start writing, some part of the running editor application monitors the keyboard (waiting for input data to appear); after you press a key, it executes instructions that result in the appropriate character being displayed on the display device (the monitor).
Thus the execution of a program consists in passing its instructions on to the CPU. A program does not execute all the time. What happens when it is not executing? What has to occur for an application to be executed?
Programs (as stored sequences of instructions) reside on a hard disk (or another mass storage device). Starting a program consists in loading it into memory and transferring control to its first instruction. This is accomplished by dedicated software. Without such software a computer could not work: it would neither communicate with the user nor execute any program on the user's behalf.
An operating system is a collection of programs that manages the computer's resources: the file system, CPU time sharing, interprocess communication, program execution, and interaction with the user.
Thus we can state that the operation of a computer consists in the execution of programs (system programs or user programs).
All programs communicate with the CPU in a dedicated language. The words of this language are simply numbers. Some of them have a special meaning: they are codes of instructions (very simple ones; complex programs are made up of a large number of these simple instructions). Other numbers denote additional information that is needed for the execution of the instructions; this additional information constitutes the program's data.
The digital representation of instructions that the CPU understands is called machine code or machine language.
A sample sequence of machine instructions for a CPU may look like this:
10001010 10001000 00000001 11001010.....
What are these weird numbers made up of nothing but zeros and ones?
This is nothing other than a record of information (a program's code in this case) in machine language, written as a sequence of digits in the binary system. Information understood by the computer must be encoded this way, because commonly used digital devices have definite sets of elementary states, each of which may be characterized as "on" (1) or "off" (0).
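As a small illustration, here is a minimal sketch in Java (the language used later in this course; the class name and values are chosen only for the example), showing that the first group of the sample above is simply the number 138 written in base 2:

public class BinaryDemo {
    public static void main(String[] args) {
        // "10001010" from the sample above, read as a base-2 number
        int value = Integer.parseInt("10001010", 2);
        System.out.println(value);                          // prints 138
        System.out.println(Integer.toBinaryString(value));  // prints 10001010
    }
}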
This implies that
the smallest amount of information that can be processed by a computer is a single digit, "0" or "1", of the binary representation of a number.
This quantity is called a bit.
The byte plays an important role in computer science.
Byte = 8 bits
(although this was not always the case).
A machine word is the smallest unit of random-access memory that the CPU addresses as a whole; its size also corresponds to the size of the CPU's registers (a very fast internal memory in which the CPU stores temporary data for the currently executed instructions).
A CPU operates on units called machine words, consisting of an integer number of bytes.
A byte is the smallest piece of a machine word accessible to the CPU. Instruction codes for the CPU consist of one or more bytes. Characters are also encoded as numbers occupying one byte (ASCII, EBCDIC) or more bytes (DBCS, Unicode).
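A minimal Java sketch of this (the class name is only illustrative): in Java the character 'A' is stored as the number 65, the same code it has in ASCII, and a char occupies two bytes because Java uses Unicode.

public class CharCodes {
    public static void main(String[] args) {
        char letter = 'A';
        System.out.println((int) letter);     // prints 65 - the numeric code of 'A'
        System.out.println(Character.BYTES);  // prints 2 - a Java char occupies two bytes
    }
}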
Because half a byte can be written as a single hexadecimal digit, the hexadecimal system is a more convenient form (than binary or decimal) of presenting the machine code of a program.
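This correspondence can be seen in a short Java sketch (again, the name and value are chosen only for illustration): the eight binary digits of one byte collapse into just two hexadecimal digits.

public class HexDemo {
    public static void main(String[] args) {
        int b = 0b11001010;                             // one byte written in binary
        System.out.println(Integer.toBinaryString(b));  // prints 11001010 - eight binary digits
        System.out.println(Integer.toHexString(b));     // prints ca - the same byte as two hex digits
    }
}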
A piece of machine code is shown in the picture. On the left one can see a column of relative addresses within the program's address space, followed by the sequences of instructions (their codes and data).
Of course, one can write a program directly in machine language, but it is extremely laborious. Nevertheless, in the early days of computing, programs were written this way.
The introduction of assemblers, languages providing a symbolic representation of machine code, came to programmers' aid.
Since then, a program that previously had to be written like this:
5830 D252 5A30 D256 5030 D260
could be written in a much simpler and more understandable manner:
Assembler instruction | Description
L 3, X | load the number stored in memory at the address denoted by X into register 3
A 3, Y | add the number stored in memory at the address denoted by Y to the value held in register 3
ST 3, Z | store the value held in register 3 in the memory location denoted by Z
Such notation (more understandable to a human) is, however, incomprehensible to a CPU. Programs written in symbolic assembly language have to be translated into machine language (a sequence of binary digits). This work is done by dedicated programs, translators (also called assemblers).
Even so, programming in assembly language is burdensome. Worse still, it forces the programmer to remember various technical details (registers, their numbers, memory addressing) and requires writing a great many very simple instructions, which makes it hard to concentrate on the logic of the problem the program solves.
In fact, the above program adds two numbers (denoted by X and Y) and stores the result in Z.
Why couldn't one simply write:
Z = X + Y?
This is where higher-level languages come in: their instructions, syntax, and semantics consolidate many simple assembler instructions and hide their technical details. These languages are (unlike assemblers) independent of the underlying CPU. Using such a language one may simply write Z = X + Y, without bothering with registers, relative memory addressing, or the CPU's instruction set.
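In Java (the language taught from lecture 4 on) the same computation is a single statement. A minimal sketch with arbitrarily chosen values, where the comments recall the assembler steps it replaces:

public class Sum {
    public static void main(String[] args) {
        int x = 2, y = 3;  // values of X and Y (chosen arbitrarily for the example)
        // This single statement replaces the three assembler instructions above:
        // load X into a register, add Y to it, store the result in Z.
        int z = x + y;
        System.out.println(z);  // prints 5
    }
}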
Using higher-level languages, however, requires advanced tools that translate a program's source code into instructions comprehensible to the CPU: compilers and/or interpreters.