
E.I. Kurashvili, I.I. Kondratieva, V.S. Shtrunova — page 12



With computer simulation you can take a series of problems, figure out all the ramifications, all the permutations and combinations, do it very quickly, and know the different combinations that are at stake. So you can use it as a means of controlled experiment. You can get a computer model of a city and play out all the different effects, so that if you decide, for example, to relocate traffic in one way, you can trace out very quickly, on the model, the effects on industry locations, residential densities, and the like. And more important, when you have alternative plans of this kind you can then choose, and that is the fundamental aspect of all such notions of planning. It allows you a sense of wider choice, to see the consequences of each option and say, I prefer this scheme rather than another.

 

TEXT 4

• Read the passage carefully. Choose the key word or sentence that best sums up the main idea of each paragraph.


There are two types of computers, the analogue and the digital. Basically, today's analogue computer is a device for measuring such physical quantities as lengths and voltages and, through a mechanical linkage, exhibiting the measurement as a numerical value. However, the analogue computer is limited to special classes of problems and when most people say 'computer' today, they mean the digital computer, which is a marvel of precision and accuracy, for it works with specific units rather than approximations.

The modern electronic digital computer counts with incredible speed using only two numbers — the one and zero of what mathematicians call the binary system. The counting ability of the computer is used to feed it information. But first the information is translated into a code.
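The binary counting described above can be sketched in a few lines of Python. The helper `to_binary` is purely illustrative — it is not part of any real machine's instruction set — but it shows how any decimal number reduces to the ones and zeros the text mentions:

```python
def to_binary(n):
    """Repeatedly divide by 2, collecting remainders: these are the binary digits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

# The decimal number 13 becomes 1101 in binary: 8 + 4 + 0 + 1.
print(to_binary(13))   # 1101
print(to_binary(255))  # 11111111
```

Each of those digits corresponds to one "bit" stored in a magnet, as the next paragraph describes.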

The information is then stored in a memory bank made of magnets. The direction in which electrical signals run through the magnets means one or zero, yes or no, off or on. Each magnet contains one piece of information called a bit. A large computer system can store hundreds of millions of such information bits.

But information by itself is useless. The computer must be told what to do with it — to add, subtract, multiply, or divide the coded pulses stored in its memory. Parts of that memory contain instructions, prepared by a human brain, that provide the computer with the road to follow in order to solve a problem. These instructions are called the program.

What makes the computer different from an adding machine is that the computer can modify its instructions.

If a problem cannot be solved by following one route, the computer can search its memory for another set of instructions until a solution is found. And it does all this at superhuman speeds. The on-off switching of the computer's logic circuits has been clocked at a billionth of a second. That is to one second what one second is to thirty years.
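The "one second is to thirty years" comparison can be checked with a short calculation; the figures below are a straightforward arithmetic sketch of the claim, not from the original text:

```python
NANOSECOND = 1e-9                       # the quoted switching time, in seconds
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds

# How long a span has the same ratio to one second
# as one second has to a nanosecond?
ratio = 1 / NANOSECOND                  # 1e9 seconds
years = ratio / SECONDS_PER_YEAR
print(round(years, 1))                  # 31.7 — "about thirty years" holds up
```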

But the computer cannot actually think. It performs all of its functions by rote. Once an answer is achieved, another program within the memory tells the computer how to display the solution: to type it out on paper, display it as pictures or words on a television screen, or perhaps even to speak the answer in words a man can hear.

In every field of human endeavour the body of knowledge is being swollen to the bursting point by a flood of new facts, which by their existence help to generate still more facts until the mass of information threatens to engulf us. But the problem can be controlled and reduced to usable proportions by the computer. All that is required is a human mind at one end of the system with enough sense to say "Halt! I've learned just about all I want to know about wickets." This will become the touchstone to the computerized library of the 21st century, in which requests for information will be answered instantly and as fully as the user wants.


• Translate the passage at sight.

retrieve v — to search out, find

justify v — to confirm, substantiate

schedule v — to draw up a timetable or table

setting n — surroundings, environment

rule of thumb — a practical, empirical rule

 

ARTIFICIAL INTELLIGENCE

Expert systems are a class of computer programs that can advise, analyse, design, diagnose, explain, explore, forecast, form concepts, identify, interpret, justify, learn, manage, monitor, plan, present, retrieve, schedule, test and tutor. They address problems normally thought to require human specialists for their solution. Some of these programs have achieved expert levels of performance on the problems for which they were designed.



Expert systems are usually developed with the help of human experts who solve specific problems and reveal their thought processes as they proceed. If this process of protocol analysis is successful, the computer program based on this analysis will be able to solve the narrowly defined problems as well as an expert.

Experts typically solve problems that are unstructured and ill-defined, usually in a setting that involves diagnosis or planning. They cope with the lack of structure by employing heuristics, which are the rules of thumb that people use to solve problems when a lack of time or understanding prevents an analysis of all the parameters involved. Likewise, expert systems employ programmed heuristics to solve problems.
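A programmed heuristic of the kind just described can be sketched as a list of if-then rules. Everything below — the rules, the fact names, the advice strings — is an invented toy example, not taken from any real expert system:

```python
# A toy rule-based "expert system": each heuristic is an
# (if-condition, then-conclusion) pair, tried in order.
RULES = [
    (lambda f: f.get("power_light") == "off", "check the power supply"),
    (lambda f: f.get("disk_noise") == "clicking", "suspect a failing disk"),
    (lambda f: f.get("temperature") == "high", "inspect the cooling fan"),
]

def diagnose(facts):
    """Return the conclusion of every rule whose condition matches the facts."""
    return [advice for condition, advice in RULES if condition(facts)]

print(diagnose({"power_light": "off", "temperature": "high"}))
# ['check the power supply', 'inspect the cooling fan']
```

Like the heuristics in the text, such rules trade completeness for speed: no rule analyses all the parameters involved, yet together they often reach a usable diagnosis.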

Experts engage in several different problem-solving activities: identify the problem, process data, generate questions, collect information, establish hypothesis space, group and differentiate, pursue and test hypotheses, explore and refine, ask general questions, and make decisions.

As researchers in the domain point out, a robust expert system that can explain, justify, acquire new knowledge, adapt, break rules, determine relevance and behave as human experts do will have to use a multitude of knowledge representations. These lie in a space whose dimensions include deep/surface, qualitative/quantitative, approximate/exact, specific/general and descriptive/prescriptive representations.

Expert systems, like human experts, can have both deep and surface representations of knowledge. Deep representations are causal models, categories, abstractions and analogies. In such cases, we try to represent an understanding of structure and function. Surface representations are often empirical associations. With surface representations, all the system knows is that an empirical association exists; it is unable to explain why, beyond repeating the association. Systems that use knowledge represented in different forms have been termed multilevel systems.

Work is just beginning in building such multilevel systems, and they will be a major research topic for this decade. Work needs to be done in studying and representing in a general way the different problem-solving activities an expert engages in. When you build expert systems, you realize that the power behind them is that they provide a regimen for experts to crystallize and codify their knowledge, and in the knowledge lies the power.

 

 

TEXT 6

 

• Read the passage and answer the question:

How many and what steps were there in the computer technology development?

 

GREAT STRIDES IN COMPUTER TECHNOLOGY

Still faster means of getting at computer-stored information must be developed. The problems of communicating with the computer are becoming increasingly apparent. Punch cards, typewriter terminals, and paper tapes all demand special codes and computer languages. Such a situation can no longer be accepted, for computers already calculate at a blinding pace, and their speeds are steadily increasing.

The great leap forward in computer technology was attained in 1947 with the development of the transistor. Transistors can perform all of the functions of vacuum tubes but are flea-sized by comparison and require only a fraction as much power to operate. The transistor is made of a semiconductor, a crystal that conducts electricity better than glass, though not as well as metal. The manufacture of a transistor starts with a single pure crystal of semiconductor, such as germanium. The addition of very small amounts of a chemical impurity such as arsenic introduces excess electrons into the crystal lattice. These electrons can move easily to carry electricity. Other atomic impurities such as boron soak up electrons from the lattice and thus create deficiencies, or holes, where there are no electrons. The hole, in effect, is a positive charge, the opposite of the negatively charged electron. Both holes and electrons skip through the material with ease.

Arsenic- and boron-doped crystals are sliced into wafers and then sandwiched together so that alternating layers containing either free electrons or holes face each other. Holes and electrons, carrying opposite electrical charges, are attracted to each other and a few drift across the junction, creating an electrical field.





By adding electrical contact points to each of the layers in the sandwich, a transistor is created.

Current flowing between two of the contact points can be controlled by sending an electrical signal to a third point. The signal can thus be amplified from fifty to forty thousand times. Moreover, the current keeps step with the incoming signal, so that when it is pumped back out again, the signal is a precisely amplified image of the original signal.

By 1955, the transistor was replacing the vacuum tube in computers, shrinking their size and increasing their speed. The transition from vacuum tubes to transistors was but the first step, however. Integrated circuits that combine both amplifiers and other electrical components on slivers of material far smaller than even transistors are shrinking the size of the computer still further. The integrated circuits (IC) conserve space, and they also save time and the effort of linking up individual components. This means that a quarter-inch chip containing five or six complete circuits can move information across its route faster than a transistorized circuit because every element within it is closer than are the elements of transistors. On the horizon is yet another shrinkage, which will be made possible by a process, still undeveloped, called large-scale integration, or LSI. An LSI chip will be only a tenth of an inch square and will carry as many as one hundred circuits. The difference between an LSI chip and an IC chip may seem like hairsplitting, but on such negligible differences are built great strides in computer technology.

The limiting speed on computers is the speed of light. Computer engineers used this fact to create a standard measure — the light-foot — by which to clock computer speeds. It is defined as the distance, about twelve inches, that light travels in a billionth of a second. Miniaturization will narrow the gap between circuits and so reduce the number of light-feet that must be traversed through the logic circuits. But there are still other limitations that must be overcome before computer processing will be rapid enough to satisfy the demands of perfectionists.
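The light-foot defined above is easy to verify; the calculation below is a simple check of the "about twelve inches" figure, using the standard value for the speed of light:

```python
C = 299_792_458        # speed of light in a vacuum, metres per second
NANOSECOND = 1e-9      # one billionth of a second

distance_m = C * NANOSECOND           # distance light covers in 1 ns
distance_inches = distance_m / 0.0254 # 1 inch = 0.0254 m
print(round(distance_inches, 1))      # 11.8 — "about twelve inches"
```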

 

TEXT 7

 

• Read the article carefully and answer the following questions:

1. What is the principle of the action of an optical switch?

2. How far are we from an optical computer?

 

EUROPE LEADS RESEARCH IN OPTICAL COMPUTING

Until now, the switches inside computers have been electronic. European scientists are going to demonstrate the world's first optical computer. This demonstration will come 22 years after the theory behind optical computers was first predicted by researchers from the computer company IBM. However, there is still a large gap between what theoretical physicists believe can be done, and what electronic engineers know is possible.

In theory, optical switches leave their electronic counterparts standing. It is like comparing the speed of light with the speed of electricity. Optical switches are so fast and yet so small that an optical device of one square centimetre can resolve 10^7 separate spots of light, and each can be switched on and off at a speed of 30 nanoseconds. This means that an optical device one square centimetre in area could, in theory at least, handle 3 × 10^14 bits per second. This rate is equivalent to everybody in the world having a telephone conversation at the same time.
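The 3 × 10^14 bits-per-second figure follows directly from the two numbers quoted; this short check just reproduces the text's own arithmetic:

```python
SPOTS = 10**7           # resolvable spots of light per square centimetre
SWITCH_TIME = 30e-9     # seconds needed for one on/off switch

# Each spot delivers one bit every SWITCH_TIME seconds.
bits_per_second = SPOTS / SWITCH_TIME
print(f"{bits_per_second:.1e}")   # 3.3e+14 — the figure quoted in the passage
```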

The optical switch works on the principle of optical "bistability". Usually, when a beam of light is passed through a transparent material, the relationship between the intensities of light entering and the light leaving is linear. However, under certain circumstances a non-linear relationship occurs. A small increase in the intensity of light entering the material leads to a much greater increase in the intensity of light leaving the material.

In optical switches, the material is placed inside a resonant cavity. In practice, this means that the edges of the material are highly polished and parallel to each other. With such materials some of the light entering becomes "trapped" inside as it bounces back and forth against each polished surface. In other words, it resonates. This changes the refractive index of the material, with the result that for a given intensity of light entering the switch there are two possible intensities of light leaving it.

In other words, there is the equivalent of an "off" position and an "on" position, because there are two stable states and the material shows optical bistability. Up to now a switching speed of 10^-13 seconds has been achieved, although the power needed to generate this is in the kilowatt range. A speed of one nanosecond (10^-9 seconds) is possible in the milliwatt power range.

 

 

TEXT 8

 

• Read the passage carefully and answer the question:

How many and what means of increasing computer speeds are mentioned in the text?

 

THE FUTURE OF COMPUTERS

During the past decade development work for extremely powerful and cost-effective computers has concentrated on new architectures. In place of "scalar" processors, the emphasis has moved towards "vector" and "parallel" processors, commonly referred to as "supercomputers". These machines are now in fairly widespread use in many branches of science.


Vectorization of quark field calculations in particle physics has improved performance by factors of ten or twenty compared with the traditional scalar algorithms.

Computers must still be programmed for every action they take, which is a great limitation. How quickly the programmer can tell the machine what to do becomes a major drag on computer speeds. The time lag can be shortened by linking up different computers and designing more efficient devices to jam information in and pull it out of the machine, but the basic limitation of the step-by-step program remains.

A means around this roadblock is called parallel processing. Instead of solving a problem by following the step-by-step instructions of the program, the arithmetic and memory units will break the main problem down into a number of smaller problems that will be solved simultaneously. Parallel processing was introduced into the fourth generation computer called ILLIAC IV, named for the University of Illinois, where it was designed.
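The break-the-problem-down idea can be sketched in modern Python. This is a deliberately simple illustration of the principle, not a model of how ILLIAC IV actually worked: a big sum is split into chunks, the chunks are summed concurrently, and the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(numbers, workers=4):
    """Split the list into chunks and sum each chunk concurrently."""
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)  # the smaller problems, solved at once
    return sum(partials)                  # combine the partial results

print(parallel_sum(list(range(1, 101))))  # 5050
```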

The incredibly rapid speeds we are approaching will be of little value without a corresponding increase in the speed with which we can get at the computer-generated information. One new approach, called graphics, uses the cathode-ray tube — the picture tube of your TV set — to display the information pictorially. A light pen — actually an electronic pointer — can be touched to the screen, and conversation between man and machine can be accomplished. For example, the computer can flash a series of options on its screen. The scientist selects the one he wants by touching it with a light pen. The great advantage of these so-called graphic computers is in solving design problems and in coping with any trial-and-error situation.

The graphic computer offers the most flexible means of communication between man and machine yet developed. For example, the designer can draw a car roof on the screen with his light pen. The computer will do the mathematics required to straighten out the lines and, in effect, present a draftsman's version of the designer's idea. The computer will then offer a variety of options to the designer — "front view", "rear view", "cross section", and so on. All the designer needs to do is to touch his light pen to the appropriate choice, and the computer does the rest. Similarly, the designer can circle any part of the drawing on the screen with his pen and request a blow-up — a large-scale drawing of just that part he has circled.

The end product of this man-machine design team is not a series of drawings on paper but a set of equations that precisely define every point of design. Eventually, these symbols will be fed to the production line machinery, which will translate the symbols into steel and glass forms of automobiles.


• What is your idea of computers as translators? Is the problem feasible today or not? Read the following passage and say whether the author is optimistic or sceptical about it. Find the facts to prove your idea.

 

COMPUTERS-TRANSLATORS

Foreign-language translation may prove to be just a bit more than the computer can handle. From the Tower of Babel on there have been countless examples of man's inability to understand man. What hope is there then for a machine to understand man, or even another machine? Machine translators would be an enormous boon, especially to science and technology. A machine translator would obviously be a great aid.

In the 1980s a machine was developed that can optically scan the written characters and print out the translation. It has a program that translates Chinese into English and English into Chinese. At a press demonstration the programmer asked for a phrase to translate and a reporter said: "Out of sight, out of mind". The phrase was dutifully fed into the computer, which replied by printing out a string of Chinese characters. "There," said the programmer, "that means 'out of sight, out of mind'."

The reporter was sceptical. " I don't know Chinese and I don't know that that means 'out of sight, out of mind'."

"Well," replied the engineer, "it's really quite simple. We'll ask the other program to translate the Chinese into English."

And so once again a string of characters, this time Chinese, was fed into the computer. The translation was typed out almost immediately and it read: "invisible idiot".

In order to make communication between man and machine as painless and easy as possible, the computer is being taught not only to speak but also to listen. The Autonetics Corporation has built a system complete with audio analysers and all of the complex electronics needed to give a computer "ears" that will actually hear the words spoken into its microphone. The vocabulary is still limited.

During a demonstration, the engineer spoke slowly and distinctly a handful of the computer's words, and the latter dutifully typed them back. But on one word it failed. While counting "one, two, three," the computer typed back, "one, two, four." Whereupon the demonstrator snapped "idiot," and the computer, in a veritable machine version of British aplomb, calmly replied, "Not in vocabulary."

• Read the passage carefully. Describe the design and operation of a synchrotron (see the figure) on p. 167.

 

THE NEXT GENERATION OF PARTICLE ACCELERATORS

The limit to the maximum practical energy of a linear accelerator is the cost of the thousands of accelerator cavities and their associated radio-frequency power supplies. The way to avoid that cost is to employ only a few cavities but to make each particle pass through them many times. Under the influence of a magnetic field an electrically charged particle follows a curved trajectory. By arranging many magnets in a ring the particle can be made to follow a circular orbit, or any other closed curve. A bunch or cluster of particles can circle the ring several million times, passing through the radio-frequency cavities and gaining energy on each revolution. An accelerator built in this way is called a synchrotron.

All the large new accelerators that are now planned or under construction are synchrotrons. It is therefore worthwhile to consider their operation in somewhat greater detail. The magnets that make up the ring are of two kinds. The dipole magnets, which have two poles (one north and one south), generate a uniform magnetic field; they accomplish the bending of the particle trajectories. Quadrupole magnets, which give rise to a field with two north poles and two south, focus the particles into a narrower beam, acting much like lenses. Interspersed among the magnets are the radio-frequency cavities, where the actual acceleration takes place. Specialized magnets and electrodes must also be provided for injecting the particles into the ring and for extracting them from it.

The synchrotron operates in cycles. When a bunch of particles is first injected, the fields of the bending magnets are adjusted so that the particles precisely follow the curvature of the vacuum tube. As the energy of the particles increases on each revolution the field strength in the bending magnets must also be smoothly increased. When the maximum energy is reached, the beam is extracted; then the magnetic field is allowed to fall to its original level in preparation for the next bunch of particles. The accelerator is called a synchrotron because the particles automatically synchronize their motion with the rising magnetic field and the rising frequency of the accelerating voltage.
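The need to raise the bending field in step with the particle energy follows from the orbit condition p = eBr: at fixed radius, the field must grow in proportion to the momentum. The sketch below illustrates that relation with an assumed 1-kilometre ring radius (not a figure from the text), using the standard conversion B [T] ≈ p [GeV/c] / (0.2998 · r [m]):

```python
# For a singly charged particle on a circular orbit of fixed radius,
# the bending field must rise in step with the momentum: B = p / (0.2998 * r).
RADIUS_M = 1000.0  # assumed ring radius for illustration

def bending_field(momentum_gev):
    """Dipole field in tesla needed to hold momentum_gev (GeV/c) on the orbit."""
    return momentum_gev / (0.2998 * RADIUS_M)

for p in (10, 100, 500):  # momentum rising through the acceleration cycle
    print(p, "GeV/c ->", round(bending_field(p), 3), "T")
```

The linear growth is exactly why the magnet current must be ramped smoothly during each cycle and then returned to its starting level before the next bunch is injected.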

The highest energies are not attempted with a single machine; instead, several machines are lined up in series. Each one augments the particle energy by a factor of 10 or even 100, then passes the beam on to the next accelerator in the sequence. In several instances older accelerators serve as injectors or preliminary stages for newer and more powerful machines.


In a proton accelerator the first stage is most often a device of the kind built in 1928 by John D. Cockcroft and Ernest T.S. Walton at the Cavendish Laboratory of the University of Cambridge. It is a large transformer and rectifier that generates a potential of about a million volts between an inner electrode and an outer shell. Protons, obtained by ionizing hydrogen atoms, are released at the inner electrode; when they emerge (through a hole in the shell), they have an energy of about 1 MeV.

The next stage is often a linac, which typically raises the energy per proton to 50 or even 200 MeV. From the linac the protons can be injected into a synchrotron, which may be the final link in the chain or may serve merely to boost the energy of the protons for injection into a larger synchrotron.

 

[Figure: accelerator complex — labels recovered from the diagram: PEP storage ring, SPEAR, neutrino and meson beam lines, storage ring]
• Read the passage carefully and find arguments to prove that construction of a Tevatron is a very difficult task even with the present day technology.

 

TEVATRON

Building the Tevatron has turned out to be a challenging task even today. The most obvious problem is that of cooling almost 1,000 magnets, strung out around the ring, to 4.5 degrees Kelvin, the temperature at which the special conductors of the magnet windings lose all resistance to the flow of electricity. In order to maintain that temperature a river of liquid helium is pumped through the ring. Twenty-four small refrigeration plants are spaced around the tunnel, and the central helium liquefier is the largest in the world, with a capacity of 4,000 liters per hour.

The windings in the magnets are formed of a niobium-titanium alloy embedded in a copper matrix. Almost 19,000 miles of this cable will be required to complete the ring. At maximum field strength the superconductors will carry a current of 4,600 amperes, and when a magnet is "quenched", or loses the superconducting property, the energy stored in the field (about half a million joules per magnet) must be dissipated without destroying the windings.

A particularly taxing problem has been the need to maintain the uniformity of the magnetic field to an accuracy of better than one part in 1,000. Because the windings are immersed in their own field they are subjected to a reactive force of about a ton per linear inch. The coils cannot be allowed to move even a thousandth of an inch, however, because the movement would distort the field and might also give rise to too much frictional heat. The windings are immobilized by laminated collars of stainless steel. The alignment of the magnets is also complicated by thermal contraction when the ring is cooled to its operating temperature; a magnet six meters long contracts by about two centimeters.

It is worthwhile pausing to consider just how much energy 1 TeV per proton is. In units more commonly applied to macroscopic objects, a 1-TeV particle has an energy of 1.6 ergs, which is roughly the kinetic energy of a flying mosquito. At full intensity the Tevatron will accelerate 5 × 10^13 protons at a time, which will give the total beam an energy of eight million joules. That is comparable to the energy of a 100-pound artillery shell. If the beam should ever go out of control, it could melt the walls of the vacuum chamber and destroy the surrounding magnets; obviously such an accident must be avoided.
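Both numbers in the paragraph above — 1.6 ergs per proton and eight million joules for the full beam — can be recomputed from the electron-volt conversion factor; this is just a check of the text's arithmetic:

```python
EV_TO_JOULE = 1.602e-19  # one electron volt in joules

tev_in_joules = 1e12 * EV_TO_JOULE   # one proton at 1 TeV = 10^12 eV
tev_in_ergs = tev_in_joules * 1e7    # 1 joule = 10^7 ergs
print(round(tev_in_ergs, 2))         # 1.6 — the "flying mosquito" figure

beam_joules = 5e13 * tev_in_joules   # 5 x 10^13 protons at full intensity
print(f"{beam_joules:.0e}")          # 8e+06 — eight million joules
```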

When the particles in a synchrotron have reached their full energy, they are nudged out of their orbit by a special magnet and deflected into an external beam line. Eventually, they strike a target. Interactions of the protons with the target can be studied directly, and it is also possible to create beams of secondary particles knocked out of the target. There are separate areas for experiments with protons (the primary particles), with mesons (particles of intermediate mass, such as the pion), with neutrinos and muons, and with photons.

 

 

TEXT 12

 

• Gamma-Ray-Line Astronomy is a young science. Skim the passage rapidly (3 min.) and find facts concerning its origin, age and field of research. Reproduce the passage using the diagram on p. 170.

 

GAMMA-RAY-LINE ASTRONOMY

The nature of the universe has been deduced almost entirely from the photons, or quanta of electromagnetic energy, that arrive in the vicinity of the earth. Until half a century ago astronomers could detect only photons with energies of between 1.5 and 3.5 electron volts: the photons of visible light. Then they began to extend the photon energy range downward into parts of the infrared and radio regions of the electromagnetic spectrum and upward into the near ultraviolet. With the advent of rockets, high-altitude balloons and artificial satellites they were able to extend it much farther upward to the energy range of photons that cannot penetrate the earth's atmosphere: the photons of the far ultraviolet, X-rays and gamma rays.

Gamma rays are the most energetic form of electromagnetic radiation; the energy of their photons is measured in millions of electron volts (MeV), and in principle it has no upper limit. Gamma-ray photons from space were first detected some two decades ago. The early detectors simply recorded the arrival of the photons without being able to analyse their energies, as the photons of light are analysed into spectral lines by a spectrograph. Now, however, instruments have been developed that can detect gamma-ray spectral lines. They are beginning to yield information on the high-energy processes and objects that command the attention of modern astronomers, such as supernovas, neutron stars and phenomena at the center of galaxies.

Whereas the lines in the optical spectrum arise from transitions between the energy levels of electrons in atoms, lines in the gamma-ray spectrum arise from transitions between the energy levels of atomic nuclei.

In the eight years that have passed since those first observations were made, several research groups have been flying gamma-ray telescopes, mostly on balloons but sometimes in satellites, in attempts to raise the instruments above nearly all of the earth's atmosphere and detect gamma-ray lines of astrophysical origin. The field is still in its infancy.



TEXT 13

 

• Read the passage and give a title to each paragraph.

 

