Instruction stream

The memory organization of DSPs also differs from that of ordinary processors because (1) memory is typically static RAM and virtual memory support is totally absent, and (2) several machines separate data and instruction streams (Harvard architecture), at the cost of extra pins. Additionally, modular-arithmetic address modes have been added to most processors. This mode finds particular utility in filter coefficient pointers, ring buffer pointers and, with bit-reversed addressing, FFTs. One further difference is the use of loop buffers for filtering. Although often called instruction caches by the chip manufacturers, they are typically very small (for example, the AT&T DSP-16 holds 16 instructions) and, furthermore, the buffer is not directly interposed between memory and the processor. [Pg.126]
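
The modular addressing described above can be emulated in software; a minimal C sketch of a ring-buffer FIR step follows (the names TAPS and fir_step are hypothetical, and a real DSP would perform the index wrap in its address generator at no extra cost):

#include <stddef.h>

#define TAPS 8  /* hypothetical filter length */

/* One FIR step using a ring buffer: the newest sample overwrites the
 * oldest, and the read index wraps with modular arithmetic, mimicking
 * the circular addressing mode a DSP provides in hardware. */
static double fir_step(double state[TAPS], size_t *head,
                       const double coeff[TAPS], double sample)
{
    state[*head] = sample;              /* overwrite oldest sample */
    double acc = 0.0;
    size_t idx = *head;
    for (size_t k = 0; k < TAPS; ++k) {
        acc += coeff[k] * state[idx];
        idx = (idx + TAPS - 1) % TAPS;  /* modular pointer update */
    }
    *head = (*head + 1) % TAPS;         /* advance ring-buffer head */
    return acc;
}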

The Instruction Processing Unit's primary function is to access, interpret and route instructions to the pipelines. The Load Look-Ahead feature provides the IPU with advance address information to facilitate uninterrupted instruction processing. Interdependent data structures are recognized by a hardware scan of the instruction stream so that independent operations can be distributed to separate AUs. [Pg.71]

MIMD Multiple instruction, multiple data: the use of multiple program instruction streams that operate on different data sets. [Pg.286]

During the early part of this decade, there was a vivid debate between proponents of MIMD (multiple instruction stream, multiple data stream) and SIMD (single instruction stream, multiple data stream) parallel computers. This taxonomy, which stems from Flynn's classification [25,26] of possible computer architectures, can now be said to be mostly of historical and academic interest. [Pg.237]

The single-instruction, single-data (SISD) architecture has a single instruction stream and a single data stream. A single processor in a personal computer serves as an example of this type of architecture. The instruction stream and... [Pg.17]

The multiple-instruction, multiple-data (MIMD) architecture permits multiple instruction streams to interact simultaneously with their own data streams. While MIMD machines composed of completely independent pairs of instruction and data streams may be of use for trivially parallel applications, it is generally necessary to use a network to connect the processors together in a way that allows a given processor's data stream to be supplemented by... [Pg.18]

SCHUETTE, M.; SHEN, J. Processor control flow monitoring using signature instruction streams. IEEE Transactions on Computers, 1987, v. 36, n. 3, pp. 264-276. [Pg.105]

The introduction of an intermediate abstraction level formalizing data-driven execution of instruction streams... [Pg.24]

THE SPECIFICATION MODEL DATA-DRIVEN EXECUTION OF INSTRUCTION STREAMS... [Pg.30]

Modern processor architectures exploit the parallelism inside the instruction stream by executing independent instructions concurrently using multiple functional units. This independence relation can be computed from the program: in the situation of pure floating-point arithmetic instructions considered in this paper, it can be inferred from the program text, and there is a trade-off between compiler and hardware measures to exploit it. As soon as data dependencies across load-store instructions are considered, data dependencies can only be computed at run-time. In a later paper, we show how to extend the simple model presented here to also cope with such dynamic dependencies, as well as speculative execution of instructions resulting from branch prediction. [Pg.30]
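
As a small illustration of this independence relation (a hypothetical fragment, not taken from the paper): the two products below have no data dependence on each other and could be issued to separate functional units, while the final addition depends on both and must wait.

/* Illustrative only: a, b, c, d are assumed inputs. */
double dot2(double a, double b, double c, double d)
{
    double p0 = a * b;   /* independent: can issue on unit 0 */
    double p1 = c * d;   /* independent: can issue on unit 1 */
    return p0 + p1;      /* depends on p0 and p1: must follow both */
}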

We counted one-bit operations, since single bit tests (BT) are also part of the instruction stream. We additionally calculated the average percentage of operands in the Load/Store streams (for completeness) and the ALU. The results are shown in 6. For the speedup in... [Pg.188]

Our work is substantially different from the previous work. First, we support different execution schemes such as lockstepped execution and implicit multithreading, switching the context on each latency event; such latency events can be memory accesses or conditional branches. We assume two redundant instruction streams, ideally executing on a Simultaneous Multithreaded (SMT) [17] processor. With simple methods, faults can be detected [18]. Note that we do not deal with the detection of faults. Thus, our work is completely different from the work of Austin [15][16]. Austin uses a master-checker model to detect permanent and design faults; our work solely handles the recovery from transient faults. Furthermore, Austin dedicates a whole additional checker processor to do the work, whereas we use different parts within an existing SMT processor plus additional registers. [Pg.1901]

In the fault-free case, the values of the pipeline registers are written into backup registers; checksum registers can be used as backup registers. Figure 1 shows the micro-rollback in cycle-interleaved mode. As a consequence, a fault can be mapped onto a stage i, since the context is changed in each cycle. The rollback of queues, e.g. instruction stream buffers, can be done easily if the first element within the queue is used for the rollback. [Pg.1901]

When executing redundant instruction streams, thread 2 contains a previous state of thread 1. If a fault is signaled, either one or both threads are rolled back. How many threads are rolled back is decided by their trust γ, a scalar value which exists per thread. The trust γ is simply computed from correct and faulty executions. If the execution between two checkpoints was faulty, γ is decremented by one. On subsequent faults in the same checkpoint interval, γ is exponentially decreased, until the minimum is reached. Otherwise, γ is increased by one until the maximum is reached. For details, please see [19]. [Pg.1902]
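
A minimal C sketch of this trust update is given below; the constants GAMMA_MIN and GAMMA_MAX and the exact form of the exponential decrease are assumptions, and [19] should be consulted for the precise scheme.

#define GAMMA_MIN (-8)   /* assumed lower bound on trust */
#define GAMMA_MAX   8    /* assumed upper bound on trust */

/* Update the per-thread trust gamma after a checkpoint interval.
 * faulty         - the interval ended with a detected fault
 * repeated_fault - another fault in the same checkpoint interval */
static int update_trust(int gamma, int faulty, int repeated_fault)
{
    if (faulty) {
        if (repeated_fault)
            /* assumed exponential decrease: halve the distance to the minimum */
            gamma = GAMMA_MIN + (gamma - GAMMA_MIN) / 2;
        else
            gamma -= 1;                 /* first fault: decrement by one */
        if (gamma < GAMMA_MIN)
            gamma = GAMMA_MIN;
    } else {
        if (gamma < GAMMA_MAX)
            gamma += 1;                 /* correct interval: increment by one */
    }
    return gamma;
}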

If the trust in a thread is sufficient (γ > ρ), the roll-forward/rollback is activated. A roll-forward is the continued execution of an instruction stream after a fault. The execution of thread 1 will not be influenced in any case. Thread 2 repeats the execution after the last checkpoint. If it reaches x, a maximum of four checksums exist: a temporary one from the rollback of the trailing thread (RB), one each from the leading and the trailing thread (F - forward, N - next), and possibly a temporary one from thread 1. The condition F ≤ N holds and the following cases have to be considered. [Pg.1904]

Matrix multiplication is generally stated to be an O(n³) process, n being the dimension of the matrix. However, using the techniques of algebraic complexity theory, matrix multiplication can in fact be shown to require asymptotically fewer operations on a serial computer. It can be shown that on a parallel processor (single instruction stream, multiple data stream), the maximum number of... [Pg.495]
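
For reference, the conventional O(n³) algorithm is the familiar triple loop shown below (a plain serial C sketch); on an SIMD machine the inner products for different (i, j) pairs are independent and can proceed in parallel.

#include <stddef.h>

/* Conventional matrix multiplication: C = A * B for n x n matrices
 * stored row-major. Three nested loops give n * n * n multiply-adds,
 * i.e. the O(n^3) operation count quoted in the text. */
static void matmul(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            double sum = 0.0;
            for (size_t k = 0; k < n; ++k)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}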

There are many types of possible computer architecture, including parallel architectures [1, 2]. However, by far the most common one for simulators is the so-called MIMD architecture (multiple instruction stream - multiple data stream). On MIMD machines, each processing element is able to act independently (unlike some other specialist parallel machines). This provides maximum flexibility to the programmer, and unsurprisingly MIMD machines make up most of today's parallel systems. [Pg.336]

MIMD Stands for multiple instruction, multiple data: a computer with multiple processors, each of which executes an instruction stream using a local data set. [Pg.1408]

Processor Unit of a computer that performs arithmetic functions and exercises control of the instruction stream. [Pg.1408]

Faults in the CPU and FPU registers, data area of the application, executed instruction stream, and static code image are considered. For each fault location approximately 1000 disturbed executions are investigated (single fault injected in each application execution). The summary of results (according to categories described in Sect. 3) is presented in Fig. 5 (explicit DMC [9]). Figure 6 presents results of the numerical DMC implementation for comparison (discussed later on). [Pg.118]
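
A single transient fault of this kind can be emulated by flipping one bit of a chosen word, as in the generic C sketch below (illustrative only, not the instrumentation used in the cited experiments):

#include <stdint.h>
#include <stdlib.h>

/* Flip one randomly chosen bit of a 32-bit word to emulate a single
 * transient fault in a register or data-area location. */
static uint32_t inject_bit_flip(uint32_t word)
{
    unsigned bit = (unsigned)(rand() % 32);   /* random fault location */
    return word ^ (UINT32_C(1) << bit);       /* single bit upset */
}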

Hardware parallelism can be incorporated within the processors through pipelining and the use of multiple functional units, as shown in Fig. 19.2. These methods allow instruction execution to be overlapped in time. Several instructions from the same instruction stream or thread could be executing in each pipeline stage for a pipelined system or in separate functional units for a system with multiple functional units. [Pg.2006]

MIMD systems allow several processes to share a common set of processors and resources, as shown in Fig. 19.6(b). Multiple processors are joined together in a cooperating environment to execute programs. Typically, one process executes on each processor at a time. The difficulties with traditional MIMD architectures lie in fully utilizing the resources when instruction streams stall (due to data dependencies, control dependencies, synchronization problems, memory accesses, or I/O accesses) or in assigning new processes quickly once the current process has finished execution. An important problem with this structure is that... [Pg.2010]

Multithreaded Several instruction streams or threads execute simultaneously. [Pg.2018]

SIMD. This single instruction stream, multiple data stream or SIMD family employs many fine- to medium-grain arithmetic/logic units (more than tens of thousands), each associated with a given memory block (e.g., MasPar-2, TMC CM-5). Under the management of a single system-wide controller, all units perform the same operation on their independent data each cycle. [Pg.3]

MPP. This multiple instruction stream, multiple data stream or MIMD class of parallel computer integrates many (from a few to several thousand) CPUs (central processing units) with independent instruction streams and flow control, coordinating through a high-bandwidth, low-latency internal communication network. Memory blocks associated with each CPU may be independent of the others... [Pg.3]

Each processor applies the instructions in its instruction stream to the data in its data stream. This leads to four possible types of parallel computer. [Pg.87]

Each processor has a distinct instruction stream controlling its execution, or each processor has the same instruction stream. If there is only one instruction... [Pg.87]

Single instruction stream, single data stream (SISD). This corresponds to a sequential computer. [Pg.87]

Single instruction stream, multiple data stream (SIMD). Each processor operates in lock step, with an identical instruction stream but different data streams. [Pg.87]

Multiple instruction stream, single data stream (MISD). Each processor applies a different instruction stream (i.e., a different algorithm) to the same data stream. [Pg.87]

Multiple instruction stream, multiple data stream (MIMD). This is the most general case in which each processor has its own instruction stream applied to its own data stream. [Pg.88]
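
As a conceptual C illustration of the SIMD and MIMD styles above (not tied to any particular machine; real systems would express this with vector hardware or separate processors):

#include <stddef.h>

/* SIMD style: every "processor" (here, a loop iteration) applies the
 * same instruction stream to a different element of the data stream. */
static void simd_style(double *x, size_t n, double scale)
{
    for (size_t i = 0; i < n; ++i)
        x[i] *= scale;            /* identical operation, different data */
}

/* MIMD style: each "processor" runs its own instruction stream on its
 * own data; here modeled as distinct functions over distinct arrays. */
static void task_a(double *x, size_t n) { for (size_t i = 0; i < n; ++i) x[i] += 1.0; }
static void task_b(double *x, size_t n) { for (size_t i = 0; i < n; ++i) x[i] *= x[i]; }

static void mimd_style(double *xa, size_t na, double *xb, size_t nb)
{
    /* On a real MIMD machine these would execute concurrently on
     * separate processors; here they are simply called in turn. */
    task_a(xa, na);
    task_b(xb, nb);
}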

