Big Chemical Encyclopedia


Analysis of the Historical Data

We carry out a Bayesian analysis of the historical data in Table 2.2. Under the random effects model, the posterior distribution of (γ0, θ0, λ0) based on the [Pg.24]

(2.16) and (2.17), we assume independent priors for all parameters, and an initial N(0, 10) prior is specified for each of γ0, θ0, and λ0k for k = 1, ..., K0 − 1. In addition, an initial prior for τ2 is taken to be π0(τ2) ∝ [1/(τ2 + 1)] exp(−0.1/τ2). We generated 20,000 iterations from each of the posterior distributions in (2.16) and (2.17) using the Markov chain Monte Carlo (MCMC) sampling algorithm discussed in the next section. For the historical data [Pg.24]
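The excerpt above draws 20,000 MCMC iterations from the posterior. As a minimal illustration of the idea (not the book's actual model or sampler), a random-walk Metropolis sampler for a single mean parameter under the N(0, 10) prior mentioned in the text might look like this; the data values and likelihood are hypothetical:

```python
import math
import random

def log_posterior(mu, data, prior_var=10.0):
    # N(0, 10) prior on mu plus a unit-variance normal likelihood
    lp = -mu * mu / (2.0 * prior_var)
    for y in data:
        lp += -(y - mu) ** 2 / 2.0
    return lp

def metropolis(data, n_iter=20000, step=0.5, seed=42):
    """Random-walk Metropolis: propose mu' ~ N(mu, step^2), accept
    with probability min(1, posterior ratio)."""
    random.seed(seed)
    mu = 0.0
    lp = log_posterior(mu, data)
    samples = []
    for _ in range(n_iter):
        prop = mu + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples

# Posterior mean shrinks the sample mean slightly toward the prior mean (0)
draws = metropolis([1.8, 2.2, 2.0, 1.9, 2.1])
post_mean = sum(draws[5000:]) / len(draws[5000:])
```

With five observations averaging 2.0 and prior precision 0.1, the analytic posterior mean is 5 × 2.0 / 5.1 ≈ 1.96, which the chain recovers after discarding a burn-in.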

Prior Estimates of Parameters Based on the Meta-Historical Data in Table 1.2 [Pg.25]


Consideration of the thermohaline structure of the Black Sea provides new results from the statistical and physical analysis of historical ship-borne observations of the vertical profiles of water temperature and salinity. The general features of the vertical thermohaline structure of the Black Sea waters, and the seasonal and interannual variability of the horizontal structure of temperature and salinity in all the main water layers, are described. The relations of the large-scale features of the hydrology of the Black Sea waters to external forcing (heat and moisture fluxes across the water surface, river mouths and straits, and fluxes of wind momentum and relative vorticity) are shown. Generalization of the results of the studies of the T,S-structure of the Black Sea waters and of its seasonal and interannual variability allows the following conclusions to be made. [Pg.442]

Consider a retailer (ASSORT) who sells women's dresses. Analysis of the historical data has indicated that, at the end of the season, some dresses follow a demand distribution that is uniform between 1 and 5 units. Other (in-fashion) dresses follow a demand distribution that is uniform between 4 and 8 units. About eight months in advance, when the order is placed, the best estimate is that demand for a particular dress will follow either of these two distributions, each with 50% probability. [Pg.2027]
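The two equally likely scenarios above define a mixture distribution for demand. A short sketch, assuming demand takes integer values (discrete uniform on 1..5 or 4..8), computes the mixture probability mass function and the expected demand a buyer would plan against:

```python
from fractions import Fraction

# Two demand scenarios from the ASSORT example: regular dresses are
# discrete-uniform on 1..5, in-fashion dresses on 4..8.
regular = {d: Fraction(1, 5) for d in range(1, 6)}
fashion = {d: Fraction(1, 5) for d in range(4, 9)}

def mixture_pmf(p_regular=Fraction(1, 2)):
    """Combine the two scenarios, each weighted by its probability."""
    pmf = {}
    for d, p in regular.items():
        pmf[d] = pmf.get(d, Fraction(0)) + p_regular * p
    for d, p in fashion.items():
        pmf[d] = pmf.get(d, Fraction(0)) + (1 - p_regular) * p
    return pmf

pmf = mixture_pmf()
expected_demand = sum(d * p for d, p in pmf.items())  # 0.5*3 + 0.5*6 = 4.5
```

The scenario means are 3 and 6 units, so the eight-months-ahead expected demand is 4.5 units, though the spread (1 to 8 units) is what makes the ordering decision hard.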

Very often the market demand for, and the price of, the finished products that the supply chain provides are uncertain. Even for a strategic-alliance supply chain with a dominant core business, what it can coordinate and control is limited to the supply, demand, price, and other related information exchanged between node enterprises inside the supply chain. In the face of a rapidly changing external market, it is difficult to determine parameters such as demand quantity and price exactly. But decision makers can derive a probability distribution function of the variation in market demand and product price through analysis of the historical data on demand and price. In other words, such uncertain parameters as demand quantity and price can be described by random variables. [Pg.57]
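One simple way to turn historical data into a usable distribution, as the passage suggests, is the empirical distribution of past observations. A minimal sketch with hypothetical demand figures (the data and names are illustrative, not from the study):

```python
import random
from bisect import bisect_right

# Hypothetical historical demand observations (units per period)
historical_demand = [120, 135, 128, 140, 118, 132, 125, 138]

def ecdf(x, data):
    """Empirical CDF: fraction of observations <= x."""
    s = sorted(data)
    return bisect_right(s, x) / len(s)

def sample_demand(data, rng=random.Random(0)):
    """Bootstrap draw: treat the history itself as the demand distribution."""
    return rng.choice(data)

coverage = ecdf(130, historical_demand)  # estimated P(demand <= 130)
```

Here four of the eight historical periods had demand at or below 130 units, so the empirical estimate of P(demand ≤ 130) is 0.5; a parametric fit (e.g. a normal distribution from the sample mean and variance) would be the usual alternative when data are scarce.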

This study assumes that demand and price are two independent random parameters whose distributions can be obtained through analysis of the historical data; the price elasticity of demand is not considered. [Pg.64]

Analysis of the collected data identified that the ineffective judgement element of the mixing sub-process was caused by a combination of the lack of information from the mixing process, the lack of historical information about coagulating substances during mixing, time constraints, and limits to operator knowledge. [Pg.86]

For interpretation of the results of the micronucleus assay, a comparison of the data from the treatment group against concurrent negative control data and historical control data, as well as a statistical analysis of the experimental data using trend analysis or pair-wise comparison (treatment group versus control), need to be considered. It is also recommended to check the variance between animals and between sexes. For the final assessment, however, the biological relevance of the results should be considered. [Pg.835]
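As a generic sketch of the pair-wise comparison mentioned above (not a regulatory-mandated procedure), micronucleus frequencies in a treatment group and a concurrent control can be compared with a two-proportion z-test; the cell counts here are hypothetical:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two proportions, e.g. micronucleated
    cells per cells scored in treatment vs concurrent control."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal tail
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: micronucleated cells out of 2000 scored per group
z, p = two_proportion_z(40, 2000, 12, 2000)
```

A statistically significant difference from the concurrent control would then still be weighed against the historical control range and biological relevance, as the text emphasises.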

There are two basic approaches to validation — one based on evidence obtained through testing (prospective and concurrent validation), and one based on the analysis of accumulated (historical) data (retrospective validation). Whenever possible, prospective validation is preferred. Retrospective validation is no longer encouraged and is, in any case, not applicable to the manufacturing of sterile products. [Pg.112]

Reducing accidents in the workplace requires that the performance management approach be applied to all unsafe behaviours. Clearly this is a mammoth task, and it should be approached systematically. It can be achieved either by analysis of the accident data (i.e. using historical data) or by Job Hazard Analysis of the tasks undertaken (i.e. using predictive data). In either case the analysis is approached from a behavioural perspective. [Pg.399]

These tests generate several gigabytes of data that are fed into a historical database. Although most of the analysis is performed automatically, human interaction is still needed to compare current and past data. Data are stored on optical CDs, from which the historical data bank is retrieved during field inspections by a mobile unit. Each of these units is equipped with a CD jukebox linked to an analysis station. The jukebox can handle 100 CDs, enough to store all previously recorded data. Dedicated software pre-fetches the historical data and compares it on-line with the newly acquired NDT data. The comparison is based on fuzzy algorithms applied to signal features. [Pg.1022]
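The excerpt does not describe its fuzzy algorithm in detail, but the general idea of fuzzy matching on signal features can be sketched as follows, using a triangular membership function; the feature values and the width parameter are purely illustrative:

```python
def triangular_membership(x, center, width):
    """Degree in [0, 1] to which x matches a historical feature value:
    1 at the center, falling linearly to 0 at +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def match_score(current, historical, width=0.2):
    """Mean fuzzy similarity over corresponding signal features.
    A hypothetical sketch, not the inspection unit's actual algorithm."""
    scores = [triangular_membership(c, h, width)
              for c, h in zip(current, historical)]
    return sum(scores) / len(scores)

# Compare newly acquired NDT features against the stored historical ones
score = match_score([1.02, 0.48, 3.10], [1.00, 0.50, 3.00])
```

A score near 1 would indicate the new indication matches its historical record, while a low score would flag the location for the human analyst.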

Multivariate statistical analysis of this historical data set, using principal component analysis (PCA), revealed three main contamination profiles. A first contamination profile was identified as mostly loaded with PAHs. A group of samples which includes sampling sites R1 (Ebro river in Miranda de Ebro, La Rioja), T3 (Zadorra river in Villodas, Alava) and T9 (Arga river in Puente la Reina, Navarra), all located in the upper Ebro river basin and close to the cities of Pamplona and Vitoria,... [Pg.146]
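The mechanics of PCA on monitoring data like this can be illustrated with a toy two-variable case, where the first principal component of the 2×2 covariance matrix has a closed form; the concentration values below are hypothetical, not the Ebro-basin measurements:

```python
import math

def first_pc_2d(xs, ys):
    """First principal component of centred 2-D data: largest
    eigenvalue and unit-norm loadings of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector (the loadings), normalised to length 1
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm, lam

# Hypothetical PAH concentrations of two compounds across four samples
vx, vy, lam = first_pc_2d([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Because the two hypothetical variables co-vary strongly, both loadings come out positive and comparable in size; in the study, samples loading heavily on such a PAH-dominated component form the first contamination profile.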

Historically, the so-called safety factor approach was introduced in the United States in the mid-1950s in response to the legislative needs in the area of the safety of chemical food additives (Lehman and Fitzhugh 1954). This approach proposed that a safe level of chemical food additives could be derived from a chronic NOAEL from animal studies divided by a 100-fold safety factor. The 100-fold safety factor as proposed by Lehman and Fitzhugh was based on a limited analysis of subchronic/chronic data on fluorine and arsenic in rats, dogs, and humans, and also on the assumption that the human population as a whole is heterogeneous. Initially, Lehman and Fitzhugh reasoned that the safety factor of 100 accounted for several areas of uncertainty ... [Pg.214]
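The arithmetic of the safety factor approach is straightforward; the 10 × 10 decomposition below reflects the later convention of splitting the 100-fold factor into interspecies and interindividual components (not Lehman and Fitzhugh's original reasoning), and the NOAEL value is hypothetical:

```python
# The later conventional decomposition of the 100-fold safety factor
INTERSPECIES = 10.0   # animal-to-human extrapolation
INTRASPECIES = 10.0   # variability within the heterogeneous human population

def safe_level(noael_mg_per_kg_day):
    """Safe level = chronic animal NOAEL divided by the 100-fold factor."""
    return noael_mg_per_kg_day / (INTERSPECIES * INTRASPECIES)

# Hypothetical example: a chronic NOAEL of 50 mg/kg bw/day
adi = safe_level(50.0)  # 0.5 mg/kg bw/day
```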

