
Deconvolution process

Since a proline residue in peptides facilitates cyclization, three sublibraries, each containing 324 compounds, were prepared with proline fixed in each of the randomized positions. Resolutions of 1.05 and 2.06 were observed for the CE separation of racemic DNP-glutamic acid using the peptide mixtures with proline located in the first and second random positions, respectively, whereas the peptide mixture with proline preceding the β-alanine residue did not exhibit any enantioselectivity. Since the c(Arg-Lys-O-Pro-O-β-Ala) library afforded the best separation, the next deconvolution step was aimed at defining the best amino acid at position 3. A rigorous deconvolution process would have required the preparation of 18 libraries, one for each amino acid residue at this position. [Pg.64]

However, the use of an HPLC separation step enabled a remarkable acceleration of the deconvolution process. Instead of preparing all of the sublibraries, the c(Arg-Lys-O-Pro-O-β-Ala) library was fractionated on a semipreparative HPLC column, and the three fractions shown in Fig. 3-2 were collected and subjected to amino acid analysis. According to this analysis, the least hydrophobic fraction, which eluted first, did not contain peptides with valine, methionine, isoleucine, leucine, tyrosine, or phenylalanine residues, and it also did not exhibit any separation ability for the tested racemic amino acid derivatives (Table 3-1). [Pg.64]

In addition to yielding a powerful chiral additive, this study demonstrated that the often tedious deconvolution process can be accelerated by an HPLC separation. As a result, only 15 libraries had to be synthesized instead of the 64 libraries that would be required for a full-scale deconvolution. A somewhat similar approach, also involving HPLC fractionation, has recently been demonstrated by Griffey for the deconvolution of libraries screened for biological activity [76]. Although demonstrated only for CE, the cyclic hexapeptides might also be useful selectors for the preparation of chiral stationary phases for HPLC. However, this would require the development of non-trivial additional chemistry to link the peptide to a porous solid support appropriately. [Pg.66]
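To make the bookkeeping behind this acceleration concrete, the short sketch below counts sublibraries for one deconvolution round. The pool size (18 residues) and the library size (324 compounds) are taken from the text; the number of candidates eliminated by the fraction analysis is an illustrative assumption, and the sketch does not attempt to reproduce the 15-versus-64 totals quoted above.

```python
# Minimal bookkeeping sketch; numbers taken from the text where available:
# 18 candidate amino acids per randomized position and a library with two
# randomized positions left after fixing proline, c(Arg-Lys-O-Pro-O-beta-Ala).

N_RESIDUES = 18                 # candidate amino acids per randomized position
N_RANDOM = 2                    # randomized positions remaining after fixing proline

library_size = N_RESIDUES ** N_RANDOM        # 18**2 = 324 compounds, as quoted above

# Rigorous deconvolution of position 3 alone: one sublibrary per candidate residue.
rigorous_position3 = N_RESIDUES              # 18 sublibraries

# HPLC-accelerated route (hypothetical count): if amino acid analysis of the
# collected fractions allows n_eliminated candidates to be discarded without
# synthesis, only the remainder needs dedicated sublibraries.
n_eliminated = 6                             # illustrative value, not taken from the text
accelerated_position3 = N_RESIDUES - n_eliminated

print(library_size, rigorous_position3, accelerated_position3)   # 324 18 12
```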

Our group also demonstrated another combinatorial approach in which a CSP carrying a library of enantiomerically pure potential selectors was used directly to screen for enantioselectivity in the HPLC separation of target analytes [93, 94]. The best selector of the bound mixture for the desired separation was then identified in a few deconvolution steps. As a result of the "parallelism advantage", the number of columns that had to be screened in this deconvolution process to identify the single most selective selector CSP was much smaller than the number of actual selectors in the library. [Pg.85]

As expected from the design of the experiment, the HPLC column packed with CSP 14, containing all 36 members of the library with π-basic substituents, separated π-acid-substituted amino acid amides. Although encouraging, since it suggested the presence of at least one useful selector, this result did not reveal which of the numerous selectors on CSP 14 was the most powerful one. Therefore, a deconvolution process involving the preparation of a series of beads carrying smaller numbers of attached selectors was used. The approach is outlined schematically in Fig. 3-17. [Pg.87]

Fig. 3-17. Schematic of the deconvolution process used in the library-on-bead approach.
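The deconvolution sketched in Fig. 3-17 amounts to repeatedly splitting the pool of bound selectors and re-screening only a sub-pool, so the number of columns grows roughly with the logarithm of the library size rather than with the library size itself. The block below is a minimal simulation of that split-and-rescreen logic; the scoring function, the halving strategy, and the column counting are illustrative assumptions, not the published protocol.

```python
import random

def screen(selector_pool, true_best):
    """Stand-in for packing a column with the pooled selectors and running the HPLC
    screen: the pooled column 'separates' the analyte if the best selector is present."""
    return true_best in selector_pool

def deconvolve(selectors, true_best):
    """Halve the active pool, screen one half, keep whichever half must contain the
    best selector, and repeat until a single selector remains."""
    pool = list(selectors)
    columns_screened = 1                      # the full-library column (e.g. CSP 14)
    assert screen(pool, true_best)            # the full library shows enantioselectivity
    while len(pool) > 1:
        half = pool[: len(pool) // 2]
        columns_screened += 1                 # one newly prepared column per split
        pool = half if screen(half, true_best) else pool[len(pool) // 2:]
    return pool[0], columns_screened

selectors = [f"selector_{i:02d}" for i in range(36)]   # 36 library members, as above
best = random.choice(selectors)
found, n_columns = deconvolve(selectors, best)
print(found == best, n_columns)               # ~7 columns instead of 36 individual CSPs
```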
Registration of a metastable ion in the spectrum is rather useful, as it confirms that a particular fragmentation reaction occurs. Fragmentation schemes are considered reliable if the corresponding metastable peaks are detected. On the other hand, metastable peaks degrade spectral resolution. Depending on the amount of energy released, the shapes of the metastable peaks may differ considerably. These peaks are eliminated from the spectra as part of the computer deconvolution process. [Pg.136]

An example of the deconvolution process applied to reactions 13.23 and 13.24 is shown in figure 13.8. The method consists of finding the best set of parameters, including k2, that fit S_exp(t). Note that the nonradiative yields for... [Pg.205]

Figure 13.8 Deconvolution process applied to reactions 13.23 and 13.24: a is the experimental waveform, S_exp(t); b is the calculated waveform for t-BuOOBu-t photolysis; and c is the calculated waveform for the hydrogen abstraction from PhOH. Note that c is phase shifted to a longer time because it refers to a slow process.
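The fit behind figure 13.8 can be pictured as a convolve-and-compare loop: a trial heat-release function is convolved with the transducer response, and its parameters are adjusted until the calculated waveform matches S_exp(t). The block below is a minimal sketch of that idea for a sequential two-step model; the damped-sine response function, the rate constant, the fractions, and the noise level are illustrative assumptions, not values for reactions 13.23 and 13.24.

```python
import numpy as np
from scipy.optimize import least_squares

# Time axis (s) and an assumed oscillatory transducer response T(t).
t = np.linspace(0.0, 5e-6, 2000)
dt = t[1] - t[0]
T = np.exp(-t / 1.5e-6) * np.sin(2 * np.pi * 1.0e6 * t)   # illustrative instrument response

def calculated_waveform(phi1, phi2, k2):
    """Convolve a two-step heat-release function with the instrument response:
    prompt heat phi1 (delta at t = 0) plus a slower first-order step phi2*k2*exp(-k2*t)."""
    h = phi2 * k2 * np.exp(-k2 * t)
    h[0] += phi1 / dt                       # prompt heat modeled as a delta function
    return np.convolve(T, h)[: t.size] * dt

# Synthetic "experimental" waveform standing in for S_exp(t).
rng = np.random.default_rng(0)
S_exp = calculated_waveform(0.6, 0.4, 8e5) + rng.normal(0.0, 0.005, t.size)

def residuals(p):
    """Difference between the calculated and the experimental waveform."""
    return calculated_waveform(*p) - S_exp

# Least-squares search for the parameter set that best fits S_exp(t).
fit = least_squares(residuals, x0=[0.5, 0.5, 5e5], x_scale=[1.0, 1.0, 1e6])
print("fitted phi1, phi2, k2:", fit.x)
```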
The corrected free induction decay Sc(t) will transform to a spectrum Sc(ω) in which not only the acetone signal but also all the ethanol signals have had the instrumental contributions to their lineshapes removed. Provided that the reference region ωl to ωr gives a complete and accurate representation of the experimental acetone lineshape, our deconvolution process should allow us to obtain a clean corrected spectrum even when the shimming is far from ideal. There are of course limitations on this process. If the experimental lineshape is very broad, it will clearly not be possible to obtain a corrected spectrum in which the lines are very narrow without some sort of penalty. Here the limiting factor is signal-to-noise ratio: since S(ω) is much sharper than Se(ω), the ratio of their inverse Fourier... [Pg.306]
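The correction described here can be sketched numerically: isolate the reference line, inverse-transform it to obtain the experimental reference decay, multiply the full FID by the ratio of an ideal (synthetic) reference decay to the experimental one, and transform back. The block below is a minimal sketch of that procedure on synthetic data; the line positions, decay constants, "shimming" distortion, and the choice of a Lorentzian ideal lineshape are all illustrative assumptions rather than details from the text.

```python
import numpy as np

n, dw = 8192, 1.0e-4                      # points and dwell time (s)
t = np.arange(n) * dw
freqs = {"acetone": 200.0, "ethanol_1": -350.0, "ethanol_2": -150.0}  # Hz, illustrative

# Experimental FID: every line shares the same instrumental distortion (poor shimming),
# modeled here as an extra slowly modulated decay common to all signals.
distortion = np.exp(-(t / 0.15) ** 2) * np.exp(2j * np.pi * 3.0 * t)
fid_exp = sum(np.exp(2j * np.pi * f * t) for f in freqs.values()) * np.exp(-t / 0.3) * distortion

spec_exp = np.fft.fftshift(np.fft.fft(fid_exp))
axis = np.fft.fftshift(np.fft.fftfreq(n, dw))

# 1) Isolate the reference (acetone) region of the experimental spectrum.
ref_region = np.where(np.abs(axis - freqs["acetone"]) < 50.0, spec_exp, 0.0)
ref_fid_exp = np.fft.ifft(np.fft.ifftshift(ref_region))

# 2) Ideal reference decay: the acetone line as it should look (pure Lorentzian here).
ref_fid_ideal = np.exp(2j * np.pi * freqs["acetone"] * t) * np.exp(-t / 0.3)

# 3) Correct the full FID and transform; the instrumental lineshape divides out of every line.
eps = 1e-3 * np.abs(ref_fid_exp).max()          # guard against division by ~0 at long t
fid_corr = fid_exp * ref_fid_ideal / (ref_fid_exp + eps)
spec_corr = np.fft.fftshift(np.fft.fft(fid_corr))

# Peak heights rise as the lines narrow toward the ideal lineshape.
print(np.abs(spec_exp).max(), "->", np.abs(spec_corr).max())
```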

Occasionally, useful information may be gleaned from the observed spectrum of an isolated line without deconvolution, even though the instrument response function is wider than the line itself. We see this in the application of the method of equivalent widths to the determination of line strengths (Chapter 2, Sections II.F and II.G). When more complete knowledge is sought, we can often achieve the desired end by employing fewer degrees of freedom than a true deconvolution process utilizes. [Pg.30]

What meaning do these two-point resolution criteria have in describing the deconvolution process, that is, resolution before and after deconvolution? Although width criteria may be applied to derive suitable before-after ratios, the Rayleigh criterion raises an interesting question. Because the diffraction pattern is an inherent property of the observing instrument, would it not be best to reserve this criterion to describe optical performance? The effective spread function after deconvolution is not sinc squared anyway. [Pg.63]
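One concrete width criterion is the ratio of full widths at half maximum (FWHM) before and after deconvolution. The block below is a small sketch of that measurement on synthetic data; the Gaussian line, the sinc-squared spread function, and the use of the unblurred line as a stand-in for a perfectly deconvolved result are illustrative assumptions.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum by linear interpolation of the half-height crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]
line = np.exp(-(x / 2.0) ** 2)                          # narrow "true" line
spread = np.sinc(x / 4.0) ** 2                          # sinc-squared spread function
observed = np.convolve(line, spread, mode="same") * dx  # what the instrument records

w_before = fwhm(x, observed)   # width before deconvolution
w_after = fwhm(x, line)        # idealized width after a (perfect) deconvolution
print(f"before/after width ratio: {w_before / w_after:.2f}")
```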

Finally, we suggest that a backlog of data on objects (spectra) of known properties be analyzed until the characteristics of the deconvolution method used are completely familiar. Only then should results yielded by unknown objects be judged. As in any other experimental work, the experiment should be repeated to verify reproducibility and develop confidence in the result. In this sense, the deconvolution process may be treated just like any piece of laboratory apparatus. Indeed, it takes on that identity when packaged in a laboratory microcomputer. [Pg.131]

One further comment regarding noise in absorption spectral data: it is the signal-to-noise ratio that affects the quality of the results, not the peak-height-to-noise or information-to-noise ratio. This statement assumes that the data to be deconvolved are principally 10-30% absorbing and that the signal-to-noise ratio satisfies the requirements of Eq. (40). That this is a reasonable observation follows from the physically meaningful constraints that are imposed and from the deconvolution process discussed in Chapters 4 and 7. [Pg.174]
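As a quick numerical illustration of the distinction (with assumed values, not numbers from the text): for data with a full-scale signal of 1.0, an rms noise of 0.001, and a band absorbing 20% of the signal, the two ratios differ by a factor of five.

```python
# Illustrative numbers only: full-scale signal, rms noise, and a 20%-absorbing band.
full_scale = 1.0
rms_noise = 0.001
absorption_depth = 0.20 * full_scale      # a band within the 10-30% range assumed above

signal_to_noise = full_scale / rms_noise              # 1000: the ratio that matters here
peak_height_to_noise = absorption_depth / rms_noise   # 200: not the governing ratio
print(signal_to_noise, peak_height_to_noise)
```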

Anyone proceeding to deconvolve the raw, tipped-over data set will be disappointed (see, e.g., Pliva et al., 1980, Section 3). It is easy to see why this is so when constraints are placed on the deconvolution, as in the Jansson method (Blass and Halsey, 1981). The most effective constraint placed on the deconvolution process in the Jansson algorithm is that 0% absorptance is a lower limit in the deconvolved estimate. That is, the deconvolved absorption spectrum is not allowed to exhibit an emission signal. If the observed data include a bias (i.e., an offset) that does not represent actual absorption, then the... [Pg.181]

There are many reasons why deconvolution algorithms produce unsatisfactory results. In the deconvolution of actual spectral data, the presence of noise is usually the limiting factor. For the purpose of examining the deconvolution process, we begin with noiseless data, which, of course, can be realized only in a simulation process. When other aspects of deconvolution, such as errors in the system response function or errors in base-line removal, are examined, noiseless data are used. The presence of noise together with base-line or system transfer function errors will, of course, produce less valuable results. [Pg.189]

In testing a smoothing technique, two criteria should be considered. First, and obviously, a smoothing technique should reduce the magnitude of the noise and the impact of noise on the deconvolved spectrum. Second, a smoothing technique should not seriously affect the deconvolution process or the deconvolved spectrum. That is, if the smoothing is too severe, it will further... [Pg.197]
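A common way to check both criteria is to smooth a synthetic noisy line and compare the noise reduction against the broadening introduced. The block below uses a Savitzky-Golay filter as the smoothing technique under test; the line shape, noise level, and filter settings are illustrative choices, not parameters from the text.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 2001)
clean = np.exp(-(x / 1.5) ** 2)                     # synthetic absorption line
noisy = clean + rng.normal(0.0, 0.02, x.size)       # measurement noise

def fwhm(y, dx):
    """Rough full width at half maximum from the number of points above half height."""
    above = np.where(y >= y.max() / 2.0)[0]
    return (above[-1] - above[0]) * dx

for window in (11, 51, 201):                        # mild, moderate, and too-severe smoothing
    smoothed = savgol_filter(noisy, window_length=window, polyorder=2)
    residual_rms = np.std(smoothed - clean)         # criterion 1: remaining noise/error
    broadening = fwhm(smoothed, x[1] - x[0]) / fwhm(clean, x[1] - x[0])  # criterion 2
    print(f"window {window:3d}: residual rms {residual_rms:.4f}, width ratio {broadening:.3f}")
```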

Much of the evolution of deconvolution from a fascinating novelty to a realistic spectroscopic tool can be attributed to developments in the application of constraints to the deconvolution process (Jansson et al., 1970; Blass and Halsey, 1981). It has been shown that if the corrections to the deconvolved spectrum are weighted, the results can be markedly improved. In Figs. 1 and 2, two deconvolutions were detailed. These two deconvolution tests used the same simulated spectrum, but one (Fig. 1) employed a relaxation method similar to that described by Jansson et al. (1970). With this relaxation method, a physically meaningful deconvolved spectrum was found that was in good agreement with the original spectrum in that simulation. [Pg.201]
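As a minimal sketch of such a constrained, weighted relaxation (not the exact procedure of Jansson et al.), the block below applies repeated corrections whose relaxation weight falls to zero as the estimate approaches its physical bounds of 0 and 1, so the deconvolved spectrum cannot be pushed into unphysical values; the Gaussian spread function, the synthetic doublet, and the relaxation constant are illustrative assumptions.

```python
import numpy as np

# Synthetic "true" spectrum (a doublet, bounded between 0 and 1) and its blurred observation.
x = np.arange(400)
true = 0.8 * np.exp(-((x - 180) / 6.0) ** 2) + 0.6 * np.exp(-((x - 210) / 6.0) ** 2)
spread = np.exp(-(np.arange(-40, 41) / 10.0) ** 2)
spread /= spread.sum()
observed = np.convolve(true, spread, mode="same")

def weighted_relaxation(observed, spread, iterations=200, r0=1.0):
    """Jansson-style weighted relaxation (applied here simultaneously for brevity):
    o_new = o + r(o) * (observed - spread * o), where the weight r(o) vanishes
    at the physical limits 0 and 1 of the estimate."""
    o = observed.copy()                                # start from the observed data
    for _ in range(iterations):
        reblurred = np.convolve(o, spread, mode="same")
        weight = r0 * (1.0 - 2.0 * np.abs(o - 0.5))    # zero at o = 0 and o = 1
        weight = np.clip(weight, 0.0, None)
        o = o + weight * (observed - reblurred)
    return o

restored = weighted_relaxation(observed, spread)
print(f"peak height: observed {observed.max():.3f} -> restored {restored.max():.3f} "
      f"(true {true.max():.3f})")
```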

Fig. 30. Quasielastic broadening of the neutron scattering peak observed in the paraelectric phase for the copolymer 60/40. The points represent the experimental results with their statistical error bars. The solid line is the Lorentzian function obtained after the deconvolution process described in the text.
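In quasielastic neutron scattering the measured peak is the quasielastic Lorentzian convolved with the instrumental resolution, so the deconvolution is usually done by fitting that convolution model and reading off the Lorentzian half-width. The block below fits a Voigt profile (Lorentzian convolved with a Gaussian resolution function) to synthetic data; the resolution width, the true broadening, and the noise level are illustrative assumptions, not values for the 60/40 copolymer.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

sigma_res = 0.010                                     # assumed Gaussian resolution (meV, std. dev.)
energy = np.linspace(-0.3, 0.3, 301)                  # energy-transfer axis (meV)

def model(e, amplitude, gamma):
    """Quasielastic Lorentzian (HWHM = gamma) convolved with the Gaussian resolution."""
    return amplitude * voigt_profile(e, sigma_res, gamma)

# Synthetic "measured" peak: true HWHM of 0.025 meV plus counting noise.
rng = np.random.default_rng(2)
data = model(energy, 1.0, 0.025) + rng.normal(0.0, 0.1, energy.size)

popt, pcov = curve_fit(model, energy, data, p0=[1.0, 0.02])
print(f"fitted quasielastic HWHM: {popt[1]:.4f} meV (true 0.025)")
```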
