
Data corruption

Data corruption refers to unwanted changes in clinical information which were not knowingly initiated by a user. At the simplest level, information stored in the system database could be truncated or transformed into an unreadable format, essentially resulting in a loss of information. Note that unlike presentation issues, corruption refers to fundamental changes in the underlying data itself, independently of how that information is subsequently rendered in the user interface. In this sense corruption can be more serious, as it may well be irreversible or at least require significant effort to restore. [Pg.95]

Corruption of information can also occur whenever data is transferred from one logical or physical location to another - in HIT systems data transfer often takes the form of messaging in either a proprietary or standard format such as Health Level 7 (HL7). Data which is in transit between systems is often vulnerable, and robust design measures are usually required to ensure that any corruption is detected and brought to the attention of an appropriate service manager. [Pg.95]
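A minimal sketch of this principle (not HL7-specific; the framing and the placeholder message content are assumptions for illustration): the sender appends a cryptographic digest to each message, and the receiver recomputes it so that in-transit corruption can be detected and escalated.

```python
import hashlib

def wrap_message(payload: bytes) -> bytes:
    """Append a SHA-256 digest so the receiver can detect in-transit corruption."""
    return payload + hashlib.sha256(payload).digest()

def unwrap_message(frame: bytes) -> bytes:
    """Verify the digest; raise so the fault can be surfaced to a service manager."""
    payload, digest = frame[:-32], frame[-32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("message corrupted in transit - alert service manager")
    return payload

frame = wrap_message(b"MSH|^~\\&|...")        # placeholder HL7-style segment
garbled = frame[:10] + b"X" + frame[11:]      # simulate a single corrupted byte
unwrap_message(frame)                          # passes
# unwrap_message(garbled)                      # would raise ValueError
```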

In practice, corruption of clinical data is often more subtle than a simple scrambling of text. One potential failure mode which can give rise to corruption is the loss of referential integrity within the database. Most proprietary databases hold information in tables, collections of data with a common set of fields and semantic meaning in the real world. Information in one table is often linked to that in another and it is this linkage which is vulnerable. [Pg.95]

Corruption of data is not something which generally happens spontaneously in most state-of-the-art databases. Occasionally it is caused by tasks undertaken on the database itself. Databases require a certain amount of maintenance to keep them in good order. Database engines and their supporting infrastructure are often updated, as are the applications which make use of the data. These factors sometimes require database administrators to move data or transform it in some way. These activities have the potential to impact large numbers of records and even small risks can become significant when the likelihood is multiplied across thousands or millions of records. [Pg.95]

Take an example: imagine a new system administrator notices that, within the drug administration frequency reference dataset, someone has accidentally created two data items, '3 times a day' and 'three times a day'. He also notices that there is no entry for 'five times a day'. He can kill two birds with one stone: he accesses the redundant '3 times a day' option and replaces the text with 'five times a day'. Being a diligent individual he navigates to the prescribing screen and, sure enough, all options are appropriately present and both issues have been fixed. [Pg.96]
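The hazard here is easy to reproduce. The sketch below is a hypothetical illustration using an in-memory SQLite database (the table and column names are invented, not taken from any real HIT system): editing a reference data item in place silently changes the meaning of every historical prescription that links to it, which is exactly the referential-integrity corruption described above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE frequency (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE prescription (id INTEGER PRIMARY KEY, drug TEXT,
                               frequency_id INTEGER REFERENCES frequency(id));
    INSERT INTO frequency VALUES (1, 'three times a day'), (2, '3 times a day');
    -- an existing order that points at the 'redundant' entry
    INSERT INTO prescription VALUES (100, 'amoxicillin', 2);
""")

# The administrator 'reuses' the redundant entry instead of adding a new one.
con.execute("UPDATE frequency SET label = 'five times a day' WHERE id = 2")

# Every historical prescription linked to id 2 now reads back with the new meaning.
row = con.execute("""SELECT p.drug, f.label
                     FROM prescription p JOIN frequency f ON f.id = p.frequency_id
                     WHERE p.id = 100""").fetchone()
print(row)   # ('amoxicillin', 'five times a day') - silent clinical data corruption
```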


Fig. 10. Evolution of learning for Example 1 data corrupted with noise.
Fig. 8. Reconstruction of Young's modulus map in a simulated object. A 3D breast phantom was first designed in silico from MR anatomical images. Then a given 3D Young's modulus distribution was supposed with a 1 cm diameter stiff inclusion of 200 kPa (A). The forward problem was the computing of the 3D-displacement field using the partial differential equation [Eq. (5)]. The efficiency of the 3D reconstruction (inverse problem) of the mechanical properties from the 3D strain data corrupted with 15% added noise can be assessed in (B). The stiff inclusion is detected by the reconstruction algorithm, but its calculated Young's modulus is about 130 kPa instead of 200 kPa. From Ref. 44, reprinted by permission of Wiley-Liss, Inc., a subsidiary of John Wiley & Sons, Inc.
The hardware and software used to implement LIMS systems must be validated. Computers and networks need to be examined for the potential impact of component failure on LIMS data. Security concerns regarding control of access to LIMS information must be addressed. Software, operating systems, and database management systems used in the implementation of LIMS systems must be validated to protect against data corruption and loss. Mechanisms for fault-tolerant operation and LIMS data backup and restoration should be documented and tested. One approach to validation of LIMS hardware and software is to choose vendors whose products are precertified; however, the ultimate responsibility for validation remains with the user. Validating the LIMS system's operation involves a substantial amount of work, and an adequate validation infrastructure is a prerequisite for the construction of a dependable and flexible LIMS system. [Pg.518]

In this first case, system security is associated with preventing the accidental or intentional alteration and corruption of the data to be displayed on the screen, or used to make a decision to control the operation. To avoid accidental or intentional loss of data, the data collected must be defined, along with the procedures used to collect it, and the means to verify its integrity, accuracy, reliability, and consistency. A failure modes-and-effects analysis (FMEA) is one of many methods used to uncover and address these issues. For example, to avoid data corruption, an ongoing verification program (Chapter 18) should be implemented. [Pg.191]

Laboratory management must provide a method of assuring the integrity of all data. Communication, transfer, manipulation, and the storage/recall process all offer potential for data corruption. The demonstration of control necessitates the collection of evidence to prove that the system provides reasonable protection against data corruption. [Pg.279]

The supplier must not only be able to show that there is a QMS but also that it is actively used. SAP manages quality management documents and quality item lists electronically and thus ensures that employees have constant access to the most up-to-date information. Because it is managed electronically, it is possible to construct automatic workflows to ensure that the process steps prescribed are followed. It is important to note that suppliers do not have to conform with U.S. 21 CFR 11 governing the use of electronic records and signatures. Customers should nevertheless expect suppliers to ensure that electronic records are secure and that their integrity cannot be compromised by data corruption or unauthorized manipulation. [Pg.394]

The second idea is restricted to PAROS. When some variable is subject to repeated additions, it need not be replicated before each addition (12) and mapped to a single variable value thereafter (16). Instead, replication before the first and selection after the last addition is sufficient. Besides saving cost and/or runtime, this approach has a further substantial benefit contributing to fault detection as well: the values are stored in their redundant representation. Moreover, on their transfer to and from the ALU they are redundant as well. Hence, any data corruption limited to a number of bits at any point in time during the complete redundant interval can be detected by the very tests already implemented for checking the correctness of the adder (15). [Pg.185]
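The step numbers (12), (15) and (16) refer to the original paper. The following is not the PAROS mechanism itself, but a minimal, hypothetical illustration of the general idea of replicating before the first addition and selecting after the last, so that corruption of either stored replica during the redundant interval is caught at the selection point.

```python
def replicate(value: int) -> tuple[int, int]:
    """Replicate before the first addition (rough analogue of the excerpt's step (12))."""
    return value, value

def add(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Repeated additions stay in the redundant representation; no early selection."""
    return a[0] + b[0], a[1] + b[1]

def select(x: tuple[int, int]) -> int:
    """Selection after the last addition (rough analogue of step (16)); replicas must agree."""
    if x[0] != x[1]:
        raise RuntimeError("replicas disagree - data corruption detected")
    return x[0]

acc = replicate(0)
for v in (3, 5, 7):
    acc = add(acc, replicate(v))
print(select(acc))   # 15; a bit flip in either replica between additions is caught here
```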

ICTs continue to have a major impact on business in general, and supply chains in particular. Technology allows the reduction or elimination of paperwork (with its attendant delays in transmission/reception and possible data corruption if information is re-entered). Both technologies and applications continue to evolve, with the Internet now providing an efficient, effective communication link for supply chain partners. The power of the Internet comes from its open standards and widespread availability, permitting easy, universal, secure access to a wide audience at very low cost. [Pg.38]

For example, it is possible to detect data corruption due to tampering or disk errors. [Pg.317]

NOTE 3 Modifiable parameters should be verified for protection against: invalid or undefined initial values; erroneous values; unauthorized changes; data corruption. [Pg.85]
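As an illustration of what such verification might look like in software (the parameter name, key and permitted range below are hypothetical, not drawn from the standard), a modifiable parameter can be stored with a keyed digest and checked against all four conditions before use:

```python
import hashlib, hmac

SECRET = b"hypothetical-device-key"   # used to reject unauthorized changes

def seal(name: str, value: float) -> dict:
    """Store a modifiable parameter together with a keyed digest over name and value."""
    blob = f"{name}={value!r}".encode()
    return {"name": name, "value": value,
            "mac": hmac.new(SECRET, blob, hashlib.sha256).hexdigest()}

def load(param: dict, lo: float, hi: float) -> float:
    """Verify the parameter before use: undefined, corrupted/tampered, or out of range."""
    if param.get("value") is None:
        raise ValueError("undefined initial value")
    blob = f"{param['name']}={param['value']!r}".encode()
    if not hmac.compare_digest(param["mac"],
                               hmac.new(SECRET, blob, hashlib.sha256).hexdigest()):
        raise ValueError("data corruption or unauthorized change detected")
    if not (lo <= param["value"] <= hi):
        raise ValueError("erroneous value outside the permitted range")
    return param["value"]

p = seal("max_pressure_bar", 7.5)
print(load(p, lo=0.0, hi=10.0))   # 7.5
```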

Data corruption may be a result of illegitimate data modification on behalf of an attacker. Furthermore, we assume that attackers are not omnipresent and take over only a small fraction of peers. The target of data corruption attacks may be any RTU or router in the SCADA system, which we assume to be IP based. We do not consider attacks directed against sensors, actuators, or high-level stations. The consequence of a data integrity attack is the provision of incorrect data to the SCADA system, which results in an inconsistent system state. The introduced fault and attack classes endanger both safety-critical and operational-critical control loops. [Pg.164]

Protecting SCADA from Data Integrity Attacks. PeSCADA is able to discover data corruption attacks if the location of corruption is between source and destination. We consider corruptions that occur after initial message replication in the overlay, i.e., the corruption occurs on a compromised router. PeSCADA operates as follows: whenever a SCADA message arrives at an MTU through the conventional SCADA communication channel, the MTU requests the same message via the P2P overlay from q different replica locations and... [Pg.169]
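The excerpt breaks off before giving PeSCADA's exact rule, but the cross-check idea can be sketched as follows (a hypothetical majority comparison; the message format and the quorum rule are assumptions, not the published algorithm):

```python
from collections import Counter

def cross_check(direct_msg: bytes, overlay_copies: list[bytes]) -> bytes:
    """Accept the direct SCADA message only if it matches a majority of the q overlay replicas."""
    votes = Counter(overlay_copies)
    majority_msg, count = votes.most_common(1)[0]
    if count <= len(overlay_copies) // 2:
        raise RuntimeError("no majority among replicas - possible widespread corruption")
    if direct_msg != majority_msg:
        raise RuntimeError("direct channel disagrees with replicas - corruption suspected")
    return direct_msg

copies = [b"valve=OPEN", b"valve=OPEN", b"valve=OPEN"]   # q = 3 replica locations
print(cross_check(b"valve=OPEN", copies))                 # accepted
# cross_check(b"valve=SHUT", copies)  # would raise: message corrupted on a compromised router
```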

Data | Default values in table | Data corruption | Critical [Pg.196]

Node: Many systems use separate processor modules to achieve system flexibility and performance. The node processors are responsible for all system functions pertinent to the node. The I/O or chassis processors are responsible for I/O scanning, floating-point conversion, and results voting. This is a part of resource management to relieve the load on the node processor, as well as to minimize data corruption and maximize the system response required for critical controls. In this connection, processor connections pertinent to safe PLC, discussed in Chapter IX, may be referred to ... [Pg.824]

High-integrity validation: Communications are validated with the help of cyclic redundancy check (CRC) routines for the main processors and the redundant I/O networks. System information is validated for remote I/O to ensure hardware availability and error-free performance. Error checking on data transfers diagnoses data corruption. Field wiring is supervised to ensure error-free output data, I/O card diagnostics, even calibration checks on the... [Pg.824]
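The excerpt does not state which CRC polynomial or frame layout the system uses; the sketch below simply illustrates CRC-based validation of a data transfer using the standard CRC-32 available in Python.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiving processor can diagnose corrupted transfers."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bytes:
    """Recompute the CRC on receipt and reject the frame on mismatch."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise IOError("CRC mismatch - data corruption on the I/O network")
    return payload

good = frame_with_crc(b"\x01\x10\x00\x2a")      # hypothetical output data frame
bad = bytes([good[0] ^ 0x04]) + good[1:]         # single flipped bit in transit
check_frame(good)                                 # passes
# check_frame(bad)                                # would raise IOError
```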

These checks do not always guarantee complete reliability, but, repeated many times, they recognise and warn of behavioural anomalies, whether these are caused by a hardware fault, a data corruption or a program anomaly. [Pg.124]

In the first step, the data set that clearly describes the state of the system needs to be determined. In this example, these are the position, the speed, the deceleration and the braking curve, which describes the braking ability of the train. Further, the actual time is important, or alternatively the cycle number. Obviously, these data should be written into memory together with a checksum. Then it is possible to detect data corruption caused by malfunctioning. If, however, it is important not only to detect but also to restore the correct data, i.e. [Pg.1794]
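Where the excerpt breaks off, it is pointing at the need for redundancy if restoration (rather than mere detection) is required. A minimal sketch of that idea, with invented field names and CRC-32 as the checksum: each state record is written with its own checksum, and keeping a second independent copy allows the correct data to be recalled when one copy is corrupted.

```python
import json, zlib

def store(state: dict) -> list[bytes]:
    """Write two independent copies of the state record, each preceded by its checksum."""
    blob = json.dumps(state, sort_keys=True).encode()
    record = zlib.crc32(blob).to_bytes(4, "big") + blob
    return [record, bytes(record)]        # two copies, ideally in separate memory areas

def recall(copies: list[bytes]) -> dict:
    """Detect corruption per copy; restore from whichever copy is still consistent."""
    for record in copies:
        crc, blob = int.from_bytes(record[:4], "big"), record[4:]
        if zlib.crc32(blob) == crc:
            return json.loads(blob)
    raise RuntimeError("both copies corrupted - only detection, no restoration possible")

copies = store({"position_m": 1250.0, "speed_mps": 22.4,
                "deceleration_mps2": 0.8, "cycle": 4711})
copies[0] = b"\x00" + copies[0][1:]       # corrupt the first copy's checksum byte
print(recall(copies)["speed_mps"])        # 22.4, restored from the intact second copy
```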

Hari, S.K.S., Adve, S.V., Naeimi, H.: Low-cost program-level detectors for reducing silent data corruptions. In: 42nd IEEE/IFIP Int. Conf. on Dep. Sys. Netw., DSN 2012. IEEE [Pg.31]

It is also important to inflate the airbag only if the car is in a real collision. We therefore want to detect data corruption at the same time: we do not just send a one-bit zero-or-one message, but use over-engineering to send a byte with two different bit patterns to distinguish the normal and the collision situation. [Pg.87]
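The concrete bit patterns are not given in the excerpt; 0x55 and 0xAA below are illustrative choices with maximum Hamming distance, so a corrupted byte that matches neither pattern is rejected rather than misread as a collision.

```python
NORMAL    = 0b01010101   # 0x55 - hypothetical pattern for 'no collision'
COLLISION = 0b10101010   # 0xAA - hypothetical pattern; Hamming distance 8 from NORMAL

def decode(received: int) -> str:
    """A byte that matches neither pattern is treated as corrupted, not as a command."""
    if received == COLLISION:
        return "collision: deploy airbag"
    if received == NORMAL:
        return "normal driving"
    return "corrupted message: do not deploy, flag fault"

print(decode(0xAA))   # collision: deploy airbag
print(decode(0x55))   # normal driving
print(decode(0x54))   # corrupted (a single bit flip of NORMAL cannot look like COLLISION)
```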

Typical systems might support ratios of 4, 8 and 16. The greater the ratio supported, the larger the memory required to accommodate the readout so that each row can subsequently be recombined. The timing of these systems can become very complex to ensure that the reset and sample pointers of each exposure do not overlap and cause data corruption. [Pg.193]

Automatic operation is when the system is in an operational mode or state, in which the system is automatically executing its preprogrammed task or set of tasks. For example, most aircraft have an automatic flight control system that automatically flies the aircraft according to a set of preprogrammed instructions and control laws. Automatic modes of operation are of system safety concern due to the possibility that they could fail to operate or operate incorrectly, particularly without detection or warning. Any data or information input into a control system is also of safety concern as data corruption could cause the system to operate in an unsafe manner. [Pg.34]

Example 2 refers to one of two external hard disks attached to the file server, for which three failure modes are shown (Table 4). The second entry is described as follows: if one hard disk fails due to hardware error (failure cause), the affected drive cannot store data (local consequences), but since the disk is mirrored by an identical disk to which it is paired, information from the backup disk is used automatically. Consequently, there is no data corruption... [Pg.91]

Silent Data Corruption (SDC). The program terminates normally, but the output is erroneous and there is no indication of failure. [Pg.269]

In the first part of our results, we saw that the main difference in the impact of single bit errors and double bit errors was in the proportions of the outcomes No Impact and Hardware Exception. The double bit errors had a much higher percentage of errors detected by hardware exceptions, and a lower percentage of errors having no impact or causing silent data corruption, compared to the single bit errors. [Pg.273]
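A toy illustration of why this pattern can arise (not the actual fault-injection setup of the study; the workload and error model are invented): flipping more bits of an input 'register' is more likely to push it outside its valid range, where the error is caught as an exception instead of surfacing as a silent data corruption.

```python
import random

TABLE = list(range(64))          # toy lookup table standing in for program state

def target(index: int) -> int:
    """Toy workload standing in for the benchmarked executable code."""
    return TABLE[index] * 3 + 7

def inject(x: int, n_flips: int, width: int = 32) -> str:
    """Flip n distinct random bits of the input 'register' and classify the outcome."""
    golden = target(x)
    corrupted = x
    for bit in random.sample(range(width), n_flips):
        corrupted ^= 1 << bit
    try:
        result = target(corrupted)
    except IndexError:
        return "hardware exception"          # error detected and signalled
    return "no impact" if result == golden else "silent data corruption"

random.seed(1)
for flips in (1, 2):
    runs = [inject(42, flips) for _ in range(10_000)]
    print(flips, "bit flip(s):", {o: runs.count(o) for o in sorted(set(runs))})
# Double flips land outside the valid range more often: more exceptions, fewer SDCs.
```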

Such benchmarking experiments aim to measure the likelihood that the executable code of a software component exhibits silent data corruptions (SDCs) for hardware errors that propagate to instruction set architecture (ISA) registers and main memory locations. The purpose of such measurements is to identify weaknesses in the executable code, and thereby find ways of hardening the code against hardware errors by means of software-implemented hardware fault tolerance techniques. [Pg.275]

Based on the discussion in Section 6, we believe the use of double and multiple bit injections would mainly lead to fewer observations of silent data corruptions. This suggests that it is unlikely that experiments with double-bit errors would expose weaknesses that are not revealed by single bit flips. To further assess whether single bit flips can be trusted to generate the most pessimistic results (the highest number of SDCs), our future work will include experiments where bit flips will be injected in different target locations at the same time. [Pg.275]

