Magic bullet

Mesh-belt furnace Mesidine [88-05-1]  [c.607]

Light loads are often processed in a mesh-belt furnace, which usually carries the work load directly on the mesh belt. At a given operating temperature, loading per unit area of the belt is limited by its tensile strength. Cast-link belt furnaces function in the same manner as mesh-belt furnaces except that the former carry heavier loads because the belt is made from suitable alloy castings instead of woven wire. The belt is normally contained in the furnace on both the working and return sides, whereas the mesh belt usually exits the furnace with the work load and returns outside the furnace. Because of its large weight, it is uneconomical to let the cast-link belt cool on the return and reheat it with the work load.  [c.135]

In sintering, the green compact is placed on a wide-mesh belt and slowly moves through a controlled atmosphere furnace (Fig. 3). The parts are heated to below the melting point of the base metal, held at the sintering temperature, and cooled. Basically a solid-state process, sintering transforms mechanical bonds, ie, contact points, between the powder particles in the compact into metallurgical bonds which provide the primary functional properties of the part.  [c.178]

Conveyors may be of parallel-chain, mat, slat, woven wire-mesh belt, or cast-alloy type. Automatic tensioning devices are used to maintain belt tension during heating and cooling. The product may rest directly on the conveyor or on special supports built into it. Roller-conveyors are used for large pieces. Flame curtains are provided for sealing the ends and for protection of special treating atmospheres.  [c.1197]

With the proper idlers selected for size and service conditions, the most important step is to locate them properly. For long belts the tension varies considerably, and idlers should be spaced to hold belt sag to reasonable limits along the full length of travel. Too much belt sag can cause a significant power loss, but for most belts of ordinary length it is usually satisfactory to space idlers fairly closely at the feed  [c.1918]

Waste from cooling systems. Cooling water systems also give rise to wastewater generation. Most cooling water systems recirculate water rather than using once-through arrangements. Water is lost from recirculating systems in the cooling tower mainly through evaporation but also, to a much smaller extent, through drift (wind carrying away water droplets). This loss is made up by raw water which contains solids. The evaporative losses from the cooling tower cause these solids to build up. The buildup of solids is prevented by a purge of water from the system, i.e., cooling tower blowdown. Cooling tower blowdown is the source of the largest volume of wastewater on many sites.  [c.294]
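The water balance behind sizing that purge can be sketched numerically. This is a minimal illustration, not from the text: the function name, the flow rates, and the cycles-of-concentration target are assumed for the example.

```python
# Steady-state cooling tower water balance (illustrative sketch).
# Makeup water must replace evaporation, drift and blowdown:
#     makeup = evaporation + drift + blowdown
# Dissolved solids leave only with drift and blowdown, so the
# "cycles of concentration" C (circulating-water solids divided
# by makeup-water solids) follows from a solids balance:
#     C = makeup / (blowdown + drift)

def required_blowdown(evaporation, drift, cycles):
    """Blowdown rate (same units as the inputs) that holds the
    circulating water at the chosen cycles of concentration.
    Derived from C = (E + D + B) / (B + D), solved for B."""
    return (evaporation - (cycles - 1.0) * drift) / (cycles - 1.0)

# Assumed example: 100 m3/h evaporation, 1 m3/h drift, 5 cycles
blowdown = required_blowdown(100.0, 1.0, 5.0)   # 24 m3/h
makeup = 100.0 + 1.0 + blowdown                 # 125 m3/h
```

Raising the allowed cycles of concentration shrinks the blowdown stream, which is why solids control in the makeup water reduces wastewater volume.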

This brief description of past and present refining developments leads to a certain number of important remarks. First of all, we are observing a gradual, continuous evolution. It could hardly be otherwise, considering the large time factors (it takes several years to build a refinery), the capital investment, and the tightness of the product specifications. Moreover, refining evolves around successive modifications to a basic flow scheme containing a limited number of processes. These processes have been greatly improved over the past twenty years from the technological point of view and, for catalytic processes, in the level of performance of the catalysts in service. On the other hand, very few new processes have appeared: as early as 1970 one could almost have built the refinery of the year 2000, though with much lower performance with regard to energy, economics, and product quality. Among the truly new processes, one can name selective oligomerization, light olefin etherification, and very low pressure reforming with continuous catalyst regeneration.  [c.485]

In an offshore environment, development via a subsea satellite well can be considered in much the same way as a wellhead on land, although well maintenance activity will be more expensive. However, if a simple self-contained processing platform is installed over a new field and the host platform is required only for peak shaving or for export, a number of other development options may become available. The host platform may actually cease production altogether and develop a new role as a pumping station and accommodation centre, charging a tariff for such services. There may be significant construction savings gained for the new platform if it can be built to be operated unmanned. The old reservoir may even in some cases be converted into a water disposal centre or gas storage facility.  [c.364]

Operating strategies and product quality should be carefully reassessed to determine whether less treatment and more downtime can be accommodated and what cost saving this could make. Many facilities are constructed with high levels of built-in redundancy to minimise production deferment early in the project life. Living with periodic shutdowns may prove to be more cost effective in decline. Intermittent production may also reduce treatment costs by using gravity segregation in the reservoir to reduce water cuts or gas influx, as mentioned in Section 15.4.  [c.367]

Proteins adsorbing to solid surfaces are a ubiquitous feature of medicine, biotechnology, food processing, and environmental engineering and thus have received much attention in the past 10 years [101-103]. The fact that naturally occurring macromolecules adhere to solid surfaces impacts the blood clotting cascade, enzymatic reactions in detergents, and biological sensors. While some flexible proteins behave much the same as water-soluble polymers, most are globular in structure and exhibit different adsorption behavior. Globular proteins lose entropy on folding and are held in place by intramolecular forces between residues [104]. Many proteins unfold on adsorption to regain some  [c.403]

Surfaces can be active in inducing blood clotting, and there is much current searching for thromboresistant synthetic materials for use in surgical repair of blood vessels (see Ref. 111). It may be important that a protective protein film be strongly adsorbed [112]. The role of water structure in cell-wall interactions may be quite important as well [113].  [c.552]

Polymers are substances consisting of large molecules, also known as macromolecules. The molecules are built up of many subunits called monomers, which are linked together, usually by covalent bonds. In a polymer, the number of subunits is generally larger than 100 [1]. Assemblies of fewer than 100 subunits are often referred to as oligomers. Macromolecules make up many of the materials in living organisms, for example cellulose, lignin, proteins and nucleic acids. The latter two have highly specific roles in life: proteins control many biochemical processes and nucleic acids store genetic information. Many polymers are man-made materials and are therefore called synthetic polymers. These polymers have a great industrial importance because they offer an attractive compromise between ease of processability and final mechanical and thermal properties. This article focuses on the general properties of polymers, without dealing with the specific roles of natural polymers, such as proteins and nucleic acids.  [c.2513]

The incorporation of holonomic constraints for covalent bond lengths (and sometimes bond angles) saves roughly a factor of 4 in the allowed time step for molecular systems and has been common practice for many years. Conserving constraints involves the solution of a set of nonlinear equations, which can be solved iteratively, either by solving a matrix equation after linearization, or by iteratively solving successive equations for each constraint. The latter method is employed in the widely used SHAKE program [8]. Recently a linear constraint solver, LINCS, has been introduced [43] which is much faster and more stable than SHAKE and is better suited for implementation in programs for parallel computers. It is built into our MD package GROMACS [44].  [c.7]
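The constraint-by-constraint iteration attributed to SHAKE above can be sketched for a toy system. This is an assumed, minimal illustration (a production solver such as the one in GROMACS also corrects velocities and handles coupled constraint networks far more efficiently); the two-atom example, masses, and tolerance are made up for the demonstration.

```python
import numpy as np

def shake(pos, ref, bonds, masses, tol=1e-8, max_iter=100):
    """Iteratively enforce bond-length constraints (SHAKE-style sketch).

    pos    -- positions after an unconstrained update, shape (N, 3)
    ref    -- positions before the update (constraints satisfied there)
    bonds  -- list of (i, j, target_length) constraints
    masses -- particle masses
    """
    pos = pos.copy()
    for _ in range(max_iter):
        converged = True
        for i, j, d0 in bonds:
            rij = pos[i] - pos[j]
            diff = rij @ rij - d0 * d0        # violation of |rij|^2 = d0^2
            if abs(diff) > tol:
                converged = False
                sij = ref[i] - ref[j]          # classic SHAKE: correct along
                g = diff / (2.0 * (sij @ rij)  # the *old* bond direction
                            * (1.0 / masses[i] + 1.0 / masses[j]))
                pos[i] -= g * sij / masses[i]
                pos[j] += g * sij / masses[j]
        if converged:
            return pos
    raise RuntimeError("SHAKE did not converge")

# Toy example: a bond that drifted from 1.0 to 1.1 is pulled back
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
constrained = shake(pos, ref, [(0, 1, 1.0)], [1.0, 1.0])
```

Because each constraint correction can disturb its neighbours, the loop sweeps over all constraints repeatedly until every bond is within tolerance, which is what makes the matrix-based LINCS approach attractive on parallel machines.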

There is, however, another type of learning: inductive learning. From a series of observations, inferences are made to predict new observations. In order to be able to do this, the observations have to be put into a scheme that allows one to order them, and to recognize the features these observations have in common and the essential features that are different. On the basis of these observations, a model of the principles that govern them must be built; such a model then allows one to make predictions by analogy.  [c.7]

As was said in the introduction (Section 2.1), chemical structures are the universal and the most natural language of chemists, but not for computers. Computers work with bits packed into words or bytes, and they perceive neither atoms nor bonds. On the other hand, human beings do not cope with bits very well. Instead of thinking in terms of 0 and 1, chemists try to build models of the world of molecules. The models are conceptually quite simple: 2D plots of molecular structures or projections of 3D structures onto a plane. The problem is how to transfer these models to computers and how to make computers understand them. This communication must somehow be handled by widely understood input and output processes. The chemist's way of thinking about structures must be translated into the computer's internal, machine representation through one or more intermediate steps or representations (see Figure 2-23). The input/output processes defined  [c.42]

Nevertheless, chemists have been planning their reactions for more than a century now, and each day they run hundreds of thousands of reactions with high degrees of selectivity and yield. The secret to success lies in the fact that chemists can build on a vast body of experience accumulated over more than a hundred years of performing millions of chemical reactions under carefully controlled conditions. Series of experiments were analyzed for the essential features determining the course of a reaction, and models were built to order the observations into a conceptual framework that could be used to make predictions by analogy. Furthermore, careful experiments were planned to analyze the individual steps of a reaction so as to elucidate its mechanism.  [c.170]

Now, chemists have acquired much of their knowledge on chemical reactions by inductive learning from a large set of individual reaction instances. How has this been done? And how can we build on these methods and knowledge and perform it in a more systematic manner by algorithmic techniques?  [c.172]

This could provide a much richer source of information on chemical reactions and thus build a better basis for automatic learning methods.  [c.545]

For each individual mechanistic reaction type, quite an elaborate heuristic decision scheme was built to arrive at rules that allow one to make predictions on specific reaction queries.  [c.549]

This textbook can build on 25 years of research and development in my group. First of all, I have to thank all my co-workers, past and present, who have ventured with me into this exciting new field. In fact, this textbook was written nearly completely by members of my research group. This allowed us to go through many text versions in order to adjust the individual chapters to give a balanced and homogeneous presentation of the entire field. Nevertheless, the individual style of presentation of each author was not completely lost in this process, and we hope that this might make reading and working through this book a lively experience. Writing these contributions on top of their daily work was sometimes an arduous task. I have to thank them for embarking with me on this journey.  [c.672]

Add 23 g. of powdered (or flake) sodium hydroxide to a solution of 15 ml. (18 g.) of nitrobenzene in 120 ml. of methanol contained in a 250 ml. short-necked bolt-head flask. Fix a reflux water-condenser to the flask and boil the solution on a water-bath for 3 hours, shaking the product vigorously at intervals to ensure thorough mixing. Then fit a bent delivery-tube to the flask, and reverse the condenser for distillation, as in Fig. 59, p. 100, or Fig. 23(D), p. 45. Place the flask in the boiling water-bath (since methanol will not readily distil when heated on a water-bath) and distil off as much methanol as possible. Then pour the residual product with stirring into about 250 ml. of cold water; wash out the flask with water, and then acidify the mixture with hydrochloric acid. The crude azoxybenzene separates as a heavy oil, which when thoroughly stirred soon solidifies, particularly if the mixture is cooled in ice-water.  [c.212]

Assemble an apparatus precisely similar to that used for the preparation of acetophenone (p. 255), viz. a 500 ml. bolt-head flask having a reflux water-condenser, into the top of which is fitted a 100 ml. dropping-funnel by means of a cork having a V-shaped groove cut vertically in the side to allow escape of air. (Alternatively, use the apparatus shown in Fig. 23(A), p. 45.) Place 10 g. of finely powdered anthracene and 100 ml. of glacial acetic acid in the flask, mix thoroughly by shaking and then heat the flask over a gauze so that the acetic acid boils gently under reflux, and the greater part of the anthracene goes into solution. Then dissolve 20 g. of chromium trioxide in 15 ml. of water, add 50 ml. of glacial acetic acid, and pour the well-stirred mixture into the dropping-funnel. Now allow the chromium oxide solution to run drop by drop down the condenser at such a rate that the total addition takes about 40 minutes. As the oxidation proceeds, the anthracene dissolves up completely in the boiling acetic acid. When the addition of the chromium oxide solution is complete, continue the boiling for a further 20 minutes, and then allow the solution to cool somewhat before pouring it into a large excess (about 500 ml.) of cold water. The crude anthraquinone separates as a greenish-grey powder. Stir the mixture vigorously in order to wash out as much acetic acid and chromium derivatives as possible from the anthraquinone, and then filter off the latter under gentle suction of the pump, wash it thoroughly on the filter with hot water, then with a hot dilute solution of sodium hydroxide, and finally with much cold water, before draining it well. Dry the anthraquinone as completely as possible by pressing it between several thick sheets of drying-paper. Yield, 10-11 g. (almost theoretical).  [c.260]

Now assemble the apparatus for ether distillation shown in Fig. 64, p. 163, except that, in place of the small distilling-flask A, use a wide-necked 100 ml. bolt-head flask, closed by a cork carrying the dropping-funnel B and also a bent delivery-tube (or knee-tube) for connection to the condenser C. (Alternatively, use the apparatus shown in Fig. 23(E), p. 45.) Carefully decant the ethereal solution of the ethylbenzene into the dropping-funnel B (Fig. 64), leaving as much solid material behind in the 750 ml. flask as possible; wash this solid residue with a further quantity of ether, decanting the latter in turn into the dropping-funnel. (The residue in the flask may still contain some unchanged sodium; therefore place some methylated spirit in the flask, and then, when all effervescence has subsided, dilute carefully with water, and finally pour into the sink.) Now distil the ether, running the ethereal solution into the bolt-head flask as fast as the ether itself distils over, and observing all the usual precautions for ether-distillations (p. 163). When no more ether distils over, detach the bolt-head flask, and fit to it a short fractionating column, similar to that shown in Fig. 11(B), p. 26, a water-condenser being then connected in turn to the column. Carefully  [c.289]

Most type A gelatin is made from pork skins, yielding grease as a marketable by-product. The process includes macerating of skins, washing to remove extraneous matter, and swelling for 10—30 h in 1—5% hydrochloric [7647-01-0], phosphoric [7664-38-2], or sulfuric acid [7664-93-9]. Then four to five extractions are made at temperatures increasing from 55—65°C for the first extract to 95—100°C for the last extract. Each extraction lasts about 4—8 h. Grease is then removed, the gelatin solution filtered, and, for most applications, deionized. Concentration to 20—40% solids is carried out in several stages by continuous vacuum evaporation. The viscous solution is chilled, extruded into thin noodles, and dried at 30—60°C on a continuous wire-mesh belt. Drying is completed by passing the noodles through zones of successive temperature changes wherein conditioned air blows across the surface and through the noodle mass. The dry gelatin is then ground and blended to specification.  [c.207]

The Nippon CRI procedure uses enclosed moving-belt calciners (Fig. 2). The catalyst is conveyed on a stainless steel mesh belt through gas-fired heating zones in which the catalyst is contacted with the appropriate gas. The independently heated zones, variable belt speed, inlet and outlet flow dampers, and choice of gas addition of such equipment allow the control of temperature, time, gas flow rate, and gas composition. The initial treatment is intended to volatilize residual hydrocarbons from the catalyst. These hydrocarbons can be removed without combustion by heating the catalyst in an inert atmosphere, such as nitrogen, or by cautiously heating the catalyst in an oxygen-containing stream while avoiding the initiation of combustion. The subsequent highly exothermic carbon-burning step is controlled by limiting the addition of fresh air to the oven. Temperatures are measured using thermocouples in contact with the moving catalyst bed or suspended closely above it. Complete carbon burning may require more than one pass through the oven if the maximum temperature is to be adequately controlled to avoid damage to the catalyst.  [c.225]

The ICI group, with collaboration around the world, put a great deal of effort into developing this MDF approach to making ceramics strong in tension and bending, including the use of such materials to make bullet-resistant body armour. However, commercial success was not sufficiently rapid and, sadly, ICI closed down the New Science Group and the MDF effort. However, the recognition that the removal of internal defects is a key to better engineering ceramics had been well established. Thus, the experimental manufacture of silicon nitride for a new generation of valves for automotive engines, deriving from research led by G. Petzow at the Powder Metallurgical Laboratory (which despite its name focuses on ceramics) of the Max-Planck-Institut für Metallforschung, makes use of clean rooms, like those used in making microcircuits, to ensure the absence of dust inclusions which would act as stress-raising defects (Hintsches 1995). Petzow is quoted here as remarking that "old-fashioned ceramics using clay or porcelain have as much to do with the high-performance ceramics as counting on five fingers has to do with calculations on advanced computers".  [c.376]

The transducer materials previously used in probe design meet these requirements only to a certain extent. There are high resolution probes made with transducer materials that can be well damped, such as lead metaniobate piezoceramic or piezoelectric polymer foil. These probes, however, show a low sensitivity because most of the acoustic energy is absorbed by the backing and the element materials only have a low efficiency. On the other hand, there are high sensitivity probes which are built by using common PZT piezoceramic with low damping. Due to their design principle, however, these probes have a low resolution. In general, with the previously known probes the requirements regarding resolution and sensitivity were always opposed so that it was often necessary to make compromises.  [c.708]

A number of refinements and applications are in the literature. Corrections may be made for discreteness of charge [36] or the excluded volume of the hydrated ions [19, 37]. The effects of surface roughness on the electrical double layer have been treated by several groups [38-41] by means of perturbative expansions and numerical analysis. Several geometries have been treated, including two eccentric spheres such as found in encapsulated proteins or drugs [42], and biconcave disks with elastic membranes to model red blood cells [43]. The double-layer repulsion between two spheres has been a topic of much attention due to its importance in colloidal stability. A new numeri-  [c.181]

For centrosymmetric media the spatially local contribution to the second-order nonlinear response vanishes, as we have previously argued, providing the interface specificity of the method. This spatially local contribution, which arises in the quantum mechanical picture from the electric-dipole terms, represents the dominant response of the medium. However, if we consider the problem of probing interfaces closely, we recognize that we are comparing the nonlinear signal originating from an interfacial region of monolayer thickness with that of the bulk media. In the bulk media, the signal can build up over a thickness on the scale of the optical wavelength, as dictated by absorption and phase-matching considerations. Thus, a bulk nonlinear polarization that is much weaker than that of the dipole-allowed contribution present at the interface may still prove to be significant because of the larger volume contributing to the emission. Let us examine this point in a somewhat more quantitative fashion.  [c.1279]

The general instrumentation of an EM very much resembles the way an ordinary, modern light microscope is built. It includes an electron-beam-forming source, an illumination-forming condenser system, and the objective lens as the main lens of the microscope. With such an instrumentation, one forms either the conventional bright-field microscope with a large illuminated sample area or an illumination spot which can be scanned across the sample. Typical electron sources are conventional heated tungsten hairpin filaments, heated LaB6- or CeB6-single-crystal electron emitters, or, as the most sophisticated source, FEGs. The latter sources lead to very coherent electron beams, which are necessary to obtain high-resolution imaging or very small electron probes.  [c.1630]

Most electrochemical experiments are designed so that one of the mass transport regimes dominates over the others, thus simplifying the theoretical treatment and allowing experimental responses to be compared with theoretical predictions. Normally, specific conditions are selected where the mass-transport regime results only from diffusion or convection. Such regimes allow mass transport to be described by a set of mathematical equations which have analytical solutions. A common experimental practice to render the migration of reactants and products negligible is to add an excess of inert supporting electrolyte, thus ensuring that any migration is dominated by the ions of the electrolyte. Electro-neutrality is also thus maintained, ensuring that electric fields do not build up in the solution. Furthermore, the addition of a high concentration of electrolyte increases the solution conductivity, compresses the double-layer region to dimensions of 10–20 Å, and ensures a constant ionic strength during the electrochemical experiment. As a consequence, the activities of the electroactive species, and thus the applied potentials as predicted by the Nernst equation and by the rate of electron transfer, remain constant throughout the experiment.  [c.1925]
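The Nernst-equation dependence on activities, which the constant ionic strength is meant to stabilize, can be illustrated with a short calculation. This is a sketch with assumed values: the Fe3+/Fe2+ couple and its standard potential are a common textbook example, not taken from this passage.

```python
import math

# Nernst equation for the half-reaction  Ox + n e-  ->  Red:
#     E = E0 - (R*T / (n*F)) * ln(a_red / a_ox)
R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1

def nernst(E0, n, a_ox, a_red, T=298.15):
    """Half-cell potential in volts at temperature T (kelvin)."""
    return E0 - (R * T / (n * F)) * math.log(a_red / a_ox)

# Assumed example: one-electron Fe3+/Fe2+ couple, E0 = 0.77 V,
# with a tenfold excess of the reduced form over the oxidized form
E = nernst(0.77, 1, a_ox=0.01, a_red=0.1)   # about 0.71 V
```

The example makes the passage's point concrete: if ionic strength (and hence the activity coefficients) drifted during the experiment, the effective activities and therefore E would drift with it.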

In the case of proteins, it is advantageous to make use of the fact that not all combinations of the two dihedral angles (the only degrees of freedom of the polypeptide chain) specifying the orientations of the Cα-C and Cα-N bonds are permitted [39]. Essentially there are just three basins of attraction, corresponding to left- and right-handed α-helices (compact conformations) and the β-sheet (extended conformation). Consensus sequences (runs of amino acid residues whose local conformations fall in the same basin) result in persistent, ultimately global structure being built up (the folding problem can be viewed as fixing the relation between local and global structures). As with RNA, the barriers are entropic, determined solely by loop closure (except where contacts have to be disassembled), and the SMEL principle applies. This approach has been successfully used to fold bovine pancreatic trypsin inhibitor [40].  [c.2821]

The danger is overfitting/overtraining, which would lead to the model obtained being too tightly linked to the dataset that has been used to build it. It would then be useless for making predictions on any other related dataset. The problem is serious. Before the model is applied to a dataset containing compounds with unknown activities, it has to be tested with the help of another dataset, where the activities have been measured. In other words, it is an absolute necessity to split the initial dataset into training and test datasets.  [c.222]

How should the initial dataset be split into two or three parts? The answer is not as trivial as it might appear. Suppose we have detected and removed neither the outliers nor the redundancy. Then there is a danger that the last two datasets (control and test) may contain much less relevant information than the training set. Now, even if the model built via the training set is good enough, when the control set contains too many outliers the diagnostics will be invalid. This will lead to the loss of a good model and a wrong assessment of convergence.  [c.222]
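The three-way split discussed above can be sketched as follows. This is a minimal illustration under assumptions: the 70/15/15 ratios and the plain random shuffle are made up for the example, and, as the passage stresses, such a split should come after outlier and redundancy removal.

```python
import random

def split_dataset(records, f_train=0.7, f_control=0.15, seed=42):
    """Randomly partition records into training, control and test
    subsets (a sketch; ratios and seeding are illustrative)."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * f_train)
    n_control = int(n * f_control)
    train = shuffled[:n_train]
    control = shuffled[n_train:n_train + n_control]
    test = shuffled[n_train + n_control:]
    return train, control, test

# Example: 100 compounds -> 70 training, 15 control, 15 test,
# with no compound appearing in more than one subset
train, control, test = split_dataset(list(range(100)))
```

A purely random split is only reasonable when the dataset is homogeneous; with clustered or redundant data, stratified or cluster-aware splitting avoids the information imbalance the passage warns about.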

Molecular mechanics force fields have much information built into them and can be accurate for the molecules used in their parameterization. For molecules outside the limited scope of the parameterization  [40] Dewar, J. S.; Dieter, K. M. J. Am. Chem. Soc. 108, 8075 (1986).  [c.132]

See pages that mention the term "Magic bullet":  [c.660]    [c.223]    [c.366]    [c.313]    [c.332]    [c.247]    [c.266]    [c.981]    [c.558]    [c.2]    [c.1139]    [c.1938]    [c.568]    [c.543]    [c.46]
The organic chemistry of drug synthesis Vol.1 (1977) -- [ c.223 ]