Swarm intelligence

The inspiration for this approach stems from the collective behaviour of bee/ant colonies, bird flocks, animal herds, and other social insects/animals. What we are attempting to exploit using such techniques is a system in which the whole is greater than the sum of the parts, in the sense that whereas individual members are relatively unintelligent, the collective behaviour of the colony/flock/herd exhibits “intelligent” behaviour.

Swarms differ from GAs in that there is no direct influence from one generation to the next; the focus is rather on how present-generation members affect the behaviour of others; "peer pressure," in a sense. Such influence is indirect, and often takes the form of general, "broadcast" messages, rather than "peer-to-peer" communication, as it were. A good example of this is the depositing of chemical (pheromone) trails which mark the path from a hive to a food source, say.

Apart from such reliance on indirect rather than direct communication between population members, a couple of other constraints apply to swarms, these being:

 

• Intra-generational learning only (i.e., no inter-generational learning), and

• Individual swarm members are assumed to have identical form, function, and status, and hence are interchangeable.

 

Actually there are several variants of swarms in common usage, including Swarm Intelligence (SI) (Bonabeau, Dorigo, & Theraulaz, 1999), Particle Swarm Optimization (PSO) (Kennedy & Eberhardt, 1995), Ant Colony Optimization (ACO) (Dorigo, Maniezzo, & Colorni, 1996), and Autonomous Nanotechnology Swarms (ANTS), the latter having been proposed by NASA for future space exploration projects (Hinchey, Sterritt, & Rouff, 2007).

Hendtlass (2004) applied both Ant Colony Optimization and Particle Swarm Optimization (PSO) to the Travelling Salesman Problem (TSP). His ACO algorithm can be paraphrased as follows (a rough code sketch appears after the list):

 

• Initialise pheromone levels on each path segment and randomly distribute N ants among C cities;
• Repeat
  - Repeat
    - Each ant decides which city to move to next (provided it does not revisit a city)
  - Until it returns to its starting city
  - Each ant calculates the length of this most recent tour and updates information about the shortest tour found to date;
  - The pheromone levels on each path segment are updated (refreshed);
  - All ants having completed a predefined maximum number of tours "die off" and are replaced by new ants at randomly selected cities;
• Until termination condition met (e.g., shortest path < predefined threshold, or maximum number of tours made).
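To make the loop above concrete, the following is a minimal Python sketch of one tour-construction and pheromone-update cycle for the TSP. The city coordinates, parameter values (evaporation rate rho, alpha, beta, deposit constant q), and the roulette-wheel city-selection rule are illustrative assumptions rather than details taken from Hendtlass (2004).

# Minimal ACO-for-TSP sketch; parameters and selection rule are illustrative assumptions.
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(coords, n_ants=10, n_iterations=50, alpha=1.0, beta=2.0, rho=0.5, q=100.0):
    n = len(coords)
    dist = [[math.dist(a, b) or 1e-12 for b in coords] for a in coords]
    # Initialise pheromone levels on each path segment.
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iterations):
        tours = []
        for _ in range(n_ants):
            # Each ant starts a fresh tour at a randomly selected city ...
            start = random.randrange(n)
            tour, visited = [start], {start}
            # ... and repeatedly chooses the next city (never revisiting one)
            # until every city has been toured and it can return to its start.
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in visited]
                weights = [(pher[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                           for j in choices]
                j = random.choices(choices, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            tours.append(tour)
            # Update information about the shortest tour found to date.
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
        # Refresh pheromone levels: evaporation, then deposits from each tour.
        for i in range(n):
            for j in range(n):
                pher[i][j] *= (1.0 - rho)
        for tour in tours:
            length = tour_length(tour, dist)
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                pher[i][j] += q / length
                pher[j][i] += q / length
    return best_tour, best_len

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(12)]
    print(aco_tsp(cities))

In this sketch the "die off and be replaced" step is implicit: on every outer iteration each ant simply begins a new tour from a freshly chosen random city.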

 

Khosla, Kumar and Aggrawal (2006) combined PSO and the Taguchi Method to derive optimal Fuzzy Models of a Ni-Cd battery charger. Sharkey and Sharkey (2006) combined swarms and software agents in their work with collective/swarm robotics.
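For readers unfamiliar with PSO itself, the sketch below shows the standard global-best velocity and position update that such applications build on. The sphere objective, swarm size, bounds, and coefficient values are illustrative assumptions; the Taguchi step and the fuzzy-model fitness function used by Khosla et al. are not shown.

# Minimal global-best PSO sketch; objective and coefficients are illustrative assumptions.
import random

def pso(objective, dim, n_particles=20, n_iterations=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global ("broadcast") best

    for _ in range(n_iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia plus pulls towards personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

if __name__ == "__main__":
    random.seed(0)
    sphere = lambda x: sum(v * v for v in x)
    print(pso(sphere, dim=3))

The shared global best plays the role of the indirect, "broadcast" communication described earlier: no particle talks to another directly, yet all are influenced by the best solution found by the swarm so far.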

 

DNA, Immune-Based, and Membrane-Based Computing

 

The three approaches considered so far, while being inspired by nature, are realized in practice by way of software simulations on (silicon-based) digital computers. With the emerging field of DNA computing, we turn our attention to carbon-based computing, or so-called "wetware": a "computer-in-a-test-tube," as it were. Classical algorithms are employed, rather than the data-driven approaches characteristic of ANNs, GAs, and swarms.

The potential we are attempting to exploit with such an approach is the massive parallelism that results from the simultaneous reactions of large numbers of DNA molecules within a single test tube, despite the computation times of individual reactions being quite slow (a phenomenon similar to that previously encountered in relation to biological neural networks, where fast overall behaviour results from the relatively slow computations that take place within individual neurons).

Not surprisingly, one of the biggest challenges with DNA computing (just as with Quantum Computing, as it happens) is Input/Output. More specifically, how does one first encode the problem of interest into DNA-strand form? Next, having done so, how does one decode the result of the chemical reaction(s) into an intelligible form (and moreover, one that relates back to the problem at hand)?
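As a purely software-simulated illustration of the encoding half of that I/O problem, the toy sketch below assigns made-up oligonucleotide sequences to the vertices and edges of a small graph, loosely in the style of Adleman's well-known Hamiltonian-path experiment. The strand length, random sequences, and graph are assumptions; in real wetware, carefully designed strands and their Watson-Crick complements would do the hybridisation and detection work.

# Toy, simulated DNA encoding of a small graph; sequences and strand length are made up.
import random

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def random_strand(length=8):
    return "".join(random.choice("ACGT") for _ in range(length))

def encode_graph(cities, edges, length=8):
    # Each city gets a unique strand; each directed edge gets a "linker" strand
    # built from the second half of the source strand and the first half of the
    # destination strand, so that edge strands can chain city strands together.
    city_strands = {c: random_strand(length) for c in cities}
    edge_strands = {
        (a, b): city_strands[a][length // 2:] + city_strands[b][:length // 2]
        for (a, b) in edges
    }
    # Decoding (reading out which cities appear in a product) would rely on
    # reverse-complement "probe" strands of the city encodings.
    probes = {c: s.translate(COMPLEMENT)[::-1] for c, s in city_strands.items()}
    return city_strands, edge_strands, probes

if __name__ == "__main__":
    random.seed(0)
    print(encode_graph(["A", "B", "C"], [("A", "B"), ("B", "C")]))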

Watada (2008) demonstrated how DNA computing can be applied to scheduling problems, in particular synchronizing the movements of multiple elevators in a multistorey (high-rise) building.

Immune-based computing (IBC) utilizes "antibodies" to discriminate between "self" (good cells) and "nonself" (bad/cancerous cells) and to effect self-repair. Ishida (2008) applied IBC to the so-called "stable marriage problem," and further showed how IBC could be extended to a general problem solver (the latter, by way of interest, was one of the traditional goals of AI).
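The self/nonself discrimination idea can be illustrated with the standard negative-selection scheme from artificial immune systems, sketched below. This is a generic illustration only, not Ishida's (2008) stable-marriage formulation; the bit-string representation, matching rule, and thresholds are assumptions.

# Minimal negative-selection sketch; representation and thresholds are assumptions.
import random

def matches(detector, sample, threshold=8):
    # An r-contiguous-bits style rule: a detector "binds" a sample if the two
    # strings agree on at least `threshold` contiguous positions.
    run = best = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        best = max(best, run)
    return best >= threshold

def generate_detectors(self_set, n_detectors=30, length=16, max_attempts=200000):
    # Negative selection: keep only randomly generated "antibodies" that do NOT
    # bind any member of the self set.
    detectors, attempts = [], 0
    while len(detectors) < n_detectors and attempts < max_attempts:
        attempts += 1
        candidate = tuple(random.randint(0, 1) for _ in range(length))
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_nonself(sample, detectors):
    # A sample is flagged as nonself if any surviving detector binds it.
    return any(matches(d, sample) for d in detectors)

if __name__ == "__main__":
    random.seed(0)
    length = 16
    self_set = [tuple(random.randint(0, 1) for _ in range(length)) for _ in range(10)]
    detectors = generate_detectors(self_set)
    probe = tuple(random.randint(0, 1) for _ in range(length))
    print("nonself?", is_nonself(probe, detectors))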

The allied field of membrane-based computing is inspired by the so-called "reaction rules" between objects located within the compartments defined by a membrane structure. Not only are objects able to react with each other, they can also on occasion pass through the membrane; moreover, the membrane itself can change shape, divide, dissolve, or alter its permeability. Membranes are thus in a constant state of (nondeterministic) transition, and sequences of such transitions are the mechanism whereby we are able to realize parallel computations (Paun, 2002).
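A toy software sketch of such transitions is given below: compartments hold multisets of objects, reaction rules consume and produce objects, and some products are sent through the membrane to the enclosing compartment. The object names, rules, and nesting are made-up assumptions, and membrane division or dissolution is not modelled.

# Toy membrane-style sketch: nondeterministic multiset rewriting in nested compartments.
import random
from collections import Counter

# Each rule: (membrane, objects consumed, objects produced here, objects sent to parent)
RULES = [
    ("inner", Counter({"a": 2}), Counter({"b": 1}), Counter()),
    ("inner", Counter({"b": 1}), Counter(), Counter({"c": 1})),
    ("outer", Counter({"c": 2}), Counter({"d": 1}), Counter()),
]
PARENT = {"inner": "outer", "outer": None}

def applicable(rules, contents):
    # A rule is applicable if its membrane currently holds everything it consumes.
    return [r for r in rules if all(contents[r[0]][o] >= n for o, n in r[1].items())]

def step(contents):
    # Nondeterministic transition: pick one applicable rule at random and fire it.
    candidates = applicable(RULES, contents)
    if not candidates:
        return False
    membrane, consumed, produced, sent = random.choice(candidates)
    contents[membrane] -= consumed
    contents[membrane] += produced
    parent = PARENT[membrane]
    if parent is not None:
        contents[parent] += sent      # objects passing through the membrane
    return True

if __name__ == "__main__":
    random.seed(0)
    contents = {"inner": Counter({"a": 8}), "outer": Counter()}
    while step(contents):
        pass
    print({m: dict(c) for m, c in contents.items()})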

 

