A greater proportion of ruxolitinib-treated patients achieved stabilisation or improvement of fibrosis grade at 24 and 48 months compared with patients who received hydroxyurea (at 24 months, 72% with ruxolitinib versus 62% with hydroxyurea; at 48 months, 77% with ruxolitinib versus 35% with hydroxyurea). Patients who received hydroxyurea had greater worsening of bone marrow fibrosis grade at both time points. Acknowledgements The author would like to thank Stephanie Leinbach, PhD, for editorial assistance. Despite this, the number of drugs reaching marketing approval across member states of the Organisation for Economic Co-operation and Development remains frustratingly flat. Research into rare diseases faces inherent challenges throughout clinical drug development and regulatory approval. Tailor-made regulatory and access solutions are needed to overcome these problems. Here I suggest areas for consideration that could have an immediate impact in facilitating regulatory approval and access to treatments for rare diseases. Rare diseases are often chronically debilitating, life-threatening or life-limiting. With close to 7000 rare diseases identified, these conditions create a sizeable medical and social burden. 
This leads to sometimes impractical regulatory expectations, and difficulty demonstrating the public health impact of complex new treatments. Proposals to expedite the approval of drugs for rare diseases, while continuing to meet the (correctly) stringent regulatory environment, should help to increase the rate of new treatments reaching the market. It is difficult to envisage an industrial sponsor being able to develop, in isolation, the discovery infrastructure (screening, in vitro and in vivo model development) that could identify a biological target for each subtype, select a candidate molecule, and progress the molecule to clinical development. Therefore, natural history studies should be encouraged by regulators, and their results published by sponsors to help to characterise underlying disease pathophysiology. Rare diseases have highly variable presentations (even between siblings), including disease burden, clinical symptoms, age of onset and rate of disease progression. Consequently, it is highly unlikely that all the multiple relevant disease subtypes can be studied prior to the first registration of a new therapeutic agent. Post-marketing studies in heterogeneous populations are therefore important to continue to learn about the wide application and efficacy of a new drug. Patient registries and post-approval studies should also play a more significant role, alongside sponsored controlled clinical trials, to accelerate access to new rare disease treatments. As the power of genetic diagnoses increases and our understanding of disease pathologies improves, a pharmacogenomics approach can be used to expand clinical results from a specific genetic sub-population to a broader population. An illustration of how this could be used is with oligonucleotide-based medicines, such as in Duchenne's muscular dystrophy. Pre-selection of patients known to have the targeted genetic defect improves clinical response rates and reduces the size of clinical trials. 
It would seem appropriate that when developing new treatments in the same disease, but with alternative sequences to correct alternative frame-shift mutations, data from the lead programme should be considered as supporting evidence for safety and efficacy. In many cases, it is unfeasible to conduct the type of formal, statistically powered, randomised, controlled development plans that encompass demonstration of appropriate posology. Wherever possible, historical data and meta-analyses should also be taken into account in support of efficacy claims. Given the associated problems with recruitment in clinical trials, it may be more appropriate to demonstrate proof of concept using a single adaptive trial, as well as a pivotal registration study. The adoption of biomarkers or surrogate markers of clinical meaningfulness could be a viable alternative that enables faster, more efficient clinical trials. It is not practical, in terms of cost or rate of decision making, to rely on disease progression as a clinical end point in these diseases. Sensible application of biomarkers or pharmacodynamic markers can support reasonable dose selection and, in some cases, early registration. Stronger consideration should be given to the use of surrogate markers as primary (or co-primary) end points in pivotal clinical trials of rare diseases where disease progression is slow and definitive proof of efficacy requires prolonged monitoring of patients. Although validation of surrogate end points is challenging in small disease populations, a concerted effort to develop and support their utilisation by industry, academia and regulatory agencies could make this a feasible option. Studies using more traditional clinical end points could then be performed as post-approval commitments. Of particular interest in the context of many rare diseases is the prediction of paediatric dosing of an orphan drug. 
These quantitative prediction systems could provide supporting data in orphan drug applications. Simplification of drug development requirements for rare diseases, while maintaining rigorous standards of care and an evidence-based approach, could have a big impact on this field. Currently, conducting different development programmes to respond to the differing requirements of separate regulatory agencies can be detrimental to access to new medicines. Regulators around the world need to harmonise the interpretation and application of technical guidelines and requirements for product registration. Expanding on this, regulatory agencies should recognise and utilise the assessments performed by other agencies in order to facilitate and accelerate their own review processes. Given the high medical need of patients with rare diseases, regulatory agencies should endeavour to grant accelerated approval or conditional approval of rare disease drugs where possible, and industry should commit to rapid completion of post-marketing commitments. Although accelerated approval represents a greater workload for the agency concerned, the possible benefits to patients are clear. Widely establishing such early access schemes and facilitating cross-border healthcare treatment, for example where the delivery of breakthrough treatments is only available in very specialised centres, would improve the equity of access to innovative therapies for rare disease patients. Rare diseases often represent uncharted territory; therefore, there is a greater need for frequent dialogue between industry, regulators and patient organisations to generate a less risk-averse approach to clinical development and patient access that can be tailored to individual rare diseases. 
Political backing will also be needed to support the introduction of regulatory solutions leading to faster access to rare disease treatments. Tackling these key areas is vital in the optimisation of rare disease drug development, and could play an important role in accelerating access to treatments that manage these chronic, degenerative, debilitating diseases. It is well known that most of the drugs used in this field are either unlicensed or off-label. We have to face a very heterogeneous group of patients, not only in pathologies but also in size, weight, and metabolism, particularly when we face problems in premature infants or newborns. Not only have very few studies offered evidence-based data for our pediatric patients, but even simple pharmacokinetic data are lacking. This is, to a great extent, caused by the difficulties of conducting research in pediatrics, particularly large-scale, double-blinded, randomized, controlled studies. This may change in the near future, as authorities throughout the world have clearly stated that they will support this type of study. However, for the time being, we must rely on the available pharmacological knowledge, on the rare clinical trials and case series, and, most of all, on the international cumulative clinical experience. This comprehensive handbook on pediatric cardiovascular drugs offers a wide source of useful information for specialists in different fields, such as pediatricians, pediatric cardiologists, intensivists, neonatologists, anesthesiologists, cardiac surgeons, nurses, and others involved in the management of cardiovascular disorders. 
It presents the updated knowledge and experience regarding the use of the currently available therapies for pediatric patients with primary or secondary cardiac and circulatory disorders.
A reported affinity in one of the source databases classified a compound as active, independent of the reported binding affinity. Salts, counter-ions, and other small fragments associated with the molecules were removed and zwitterions neutralized. Charge and stereochemical information was discarded and bonded hydrogen atoms were omitted from the representation (41). After that, ChemAxon's standardizer was used (for consistency with existing databases) to convert the structures into a uniform representation and to filter out duplicates. In some cases, analysis of large datasets using elaborate representation (see below) proved to be difficult since physical limits of system resources (maximum file size) were reached. These 'sampled sets' were constructed using Pipeline Pilot's Random Percent Filter (20). In both elaborate chemical representations, wildcards are used for heteroatoms ('No') and for halogens ('X'), with a label attached specifying the actual atom type. Figure 1 offers an example that accompanies the following description of the representations. Elaborate representation is a method to include extra information about the molecule by using abstractions, translations, and/or extra labels. The first elaborate representation includes a special bond type for aromatic bonds. The third representation offers a special type for planar ring systems, which has been successfully applied previously to predict the mutagenicity of compounds (28). In elaborate chemical representation, aliphatic nitrogen, oxygen, and sulfur atoms were represented as aliphatic heteroatoms by replacement with the symbol No. An extra label was attached to N and O to indicate the type and number of bound hydrogens: 'Ze' (zero) for no bonded hydrogens, 'On' for one bonded hydrogen, and 'Tw' for two bonded hydrogens. 
The halogen atoms Cl, Br, I, and F were replaced by X, and an extra label was attached to indicate their type. Figure 1 shows an example of a molecular structure in normal and elaborate chemical representation. The use of alternate representations may cause the same graph to appear multiple times. The aim of abstractions for atom and bond types is to raise the occurrence of similar substructures above the support threshold. Individually, these substructures might go undetected; however, the occurrence of their common representation sums the individual frequencies. The frequent subgraph miner Gaston was used to find all frequently occurring substructures in the datasets (26, 27). Frequent subgraph miners such as Gaston iterate over all molecules, extracting all possible substructures per molecule. Current subgraph miners utilize several approaches to keep the number of found substructures to a minimum. One rationale is that a larger substructure can never occur more frequently than the smaller substructures it consists of. Compared with other algorithms, Gaston is more efficient since computationally expensive operations take place in the last steps, when a large number of possible substructures has already been discarded. For a quantitative comparison of Gaston with other frequent subgraph miners, see Wörlein et al. (22). The importance of a substructure was determined by comparing its frequency against the frequency of occurrence in the control set. The most revealing substructures are those that occur frequently in one set and not in the other. As a measure of the importance of a substructure, the significance of association with one of the sets was determined by calculating the p-value of the finding. The p-value as used in this study is defined on page 3 of the Supporting Information of Kazius et al. It is the probability of finding a statistical association with one of the two groups based on chance alone. 
On the assumption of a binomial distribution, it was calculated based on the number of ligands versus control-group compounds that were detected using that substructure. While this measure makes assumptions, such as about the underlying distribution of features in each database, we still found it to be useful in the ranking scheme described here. Using the p-value, the lists of frequently found substructures were ordered according to significance, with the most-discriminating substructures at the top. The substructure with the lowest p-value was considered the most significant finding. When two substructures had the same p-value and one was a substructure of the other, only the larger substructure was kept. In the case of substructures with equal p-values that were not substructures of each other, the larger substructures had preference over smaller ones in the list ordering. For example, if alanine and valine were substructures with equal occurrences, and hence equal p-values, only the valine substructure would be kept in the list, since alanine is a substructure of valine. In the case of leucine instead of alanine, both substructures would be kept, since neither of the two is a substructure of the other. Another important parameter in frequent subgraph mining is the minimum support value, which is the relative number of molecules a substructure should occur in to be detected by the algorithm. Lowering the minimum support will result in finding an equal or higher number of substructures. A higher number of substructures increases the chance of finding a substructure that is more significant. However, there is a balance between the minimum support and the p-value of the most significant substructure. This will be illustrated with the following example of two sets of 100 compounds each. Presume that the most significant substructure found at a support threshold of 30 compounds occurred in 60 active and 20 control compounds. 
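The binomial p-value and the tie-breaking rules just described can be sketched as follows. This is a simplified stand-in for the exact definition (which is in the Supporting Information of Kazius et al.): under the null hypothesis, each compound containing the substructure is assumed to fall into the active set with probability proportional to that set's size. Substructure containment testing would need a graph library, so the larger-substructure preference is approximated here by size alone; all names and the tuple layout are illustrative.

```python
from math import comb

def binomial_pvalue(k_active, k_control, n_active, n_control):
    """One-sided binomial p-value for a substructure's association with
    the active set: the probability of seeing at least k_active of the
    k_active + k_control hits in the active set by chance alone."""
    n = k_active + k_control
    p = n_active / (n_active + n_control)  # chance a hit is active under H0
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_active, n + 1))

def rank(substructures, n_active, n_control):
    """Order substructures by significance, most discriminating first.
    `substructures` is a list of (name, size, k_active, k_control) tuples;
    on equal p-values the larger substructure sorts first."""
    return sorted(
        substructures,
        key=lambda s: (binomial_pvalue(s[2], s[3], n_active, n_control), -s[1]),
    )
```

For the example in the text (two sets of 100 compounds, a substructure occurring in 60 active and 20 control compounds), `binomial_pvalue(60, 20, 100, 100)` is very small, so that substructure ranks near the top of the list.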
Lowering the minimum support from 30 to 20 means that a new set of substructures is added to the already generated set. In theory, the most significant substructure that could be added with the new set has an occurrence of 29 active and 0 control compounds. The experiment is completed if it results in the same substructure as found in the first run, because a more significant substructure cannot then be found by lowering the support further. When another, more significant substructure is found at a lower support, the process is repeated until no theoretical substructure can be found that is more significant. In conclusion, the minimum support value was chosen by iteratively lowering it per run until no better (more significant) substructures could be found, resulting in practice in support values between 10% and 30% for the datasets used here. For each representation, the same substructure is found in both databases, except for the 'aromatic atoms and bonds' representation. Note that the 'normal' representation uses Kekulé structures for aromatic systems and not separate types for delocalized bonds and aromatic atoms. This results in some interesting examples where the single bond of an aromatic ring is part of the aliphatic chain of the overlaid substructure. Analysis of the substructure distributions revealed the best discriminating substructure for each of the four elaborate representations (see Materials & Methods). The statistics for all representations are summarized in Table 1, demonstrating that within each of the four representations highly significant substructures occur. A similar, though one atom smaller, substructure is found in the 'normal' representation. These types of overlay also illustrate the completeness of coverage compared with the chemical fragment approach discussed in the Introduction. 
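The stopping rule above, lowering the minimum support stepwise until no theoretically better substructure can exist, can be sketched like this. The hypothetical `mine` callback stands in for an actual Gaston run and returns the occurrence counts of the best substructure found at a given support; the p-value is a simplified one-sided binomial form, not the exact published definition.

```python
from math import comb

def pvalue(k_active, k_control, n_active, n_control):
    """Simplified one-sided binomial p-value for association with the active set."""
    n = k_active + k_control
    p = n_active / (n_active + n_control)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_active, n + 1))

def choose_min_support(mine, support, step, n_active, n_control):
    """Lower the minimum support until no theoretically better substructure
    can exist. `mine(support)` is a stand-in for a subgraph-mining run,
    returning the (k_active, k_control) counts of the best substructure."""
    best = pvalue(*mine(support), n_active=n_active, n_control=n_control)
    while support > step:
        # The best imaginable new substructure at a lower support occurs in
        # (support - 1) actives and 0 controls; if even that cannot beat
        # the current best, stop lowering.
        if pvalue(support - 1, 0, n_active, n_control) >= best:
            break
        support -= step
        best = min(best, pvalue(*mine(support), n_active=n_active, n_control=n_control))
    return support
```

With the text's example (a 60/20 substructure as the best find at support 30, sets of 100 compounds each), the theoretical 29/0 substructure would be more significant, so the support keeps being lowered.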
The other significant substructures in Table 2 are essentially variations of the first; the only differences are in the number and length of carbon chains/atoms attached to the nitrogen atom. Comparing the results across representations (Table 1 & 2, and Table 1 & 2 in the Supporting Information), a recurring theme becomes apparent. The top most significant substructures are alkyl chains, some in combination with nitrogen, aromatic bonds, or combinations of these. In the 'normal' representation, a recurring theme is the alternating single/double bond feature, most likely being the substitute for aromatic bonds.
Cholesterol also prevents the phase transition of niosomes from the gel to the liquid state and thereby reduces drug leakage from the niosomes. The stability of the niosomes can be further improved by the addition of charged molecules such as dicetyl phosphate, which prevents aggregation by charge repulsion (40). Generally, an increase in surfactant/lipid level increases the drug encapsulation efficiency in niosomes (41). Niosome preparation requires some energy in the form of elevated temperature and/or shear (41). The majority of the methods involve hydration of a mixture of surfactant/lipid at elevated temperature, followed by size reduction using sonication, extrusion, or high-pressure homogenization. Finally, the unentrapped drug is removed by dialysis, centrifugation, or gel filtration. Size reduction by sonication and/or extrusion results in niosomes of 100 to 200 nm, whereas a microfluidizer or high-pressure homogenizer can achieve niosomes of 50 to 100 nm (40). Furthermore, the smaller niosomes are relatively more unstable than larger ones and, therefore, require stabilizers to prevent aggregation (41). Both hydrophilic and hydrophobic drug molecules have been encapsulated in niosomes by using either the dehydration–rehydration technique or the pH gradient within and outside the niosomes (40,41). The rate of drug release from the niosome is dependent on the surfactant type and its phase-transition temperature. For example, the release of carboxyfluorescein, a water-soluble fluorescent dye, from Span niosomes was in the following decreasing order: Span 20 > Span 40 > Span 60 (i.e., release decreased with increasing phase-transition temperature of the surfactant). Niosomes exhibit different morphologies and sizes depending on the type of nonionic surfactants and lipids. 
Discoid and ellipsoid vesicles (∼60 μm in diameter) with entrapped aqueous solutes are formed when hexadecyl diglycerol ether is solubilized by Solulan C24 [cholesteryl-poly(24-oxyethylene ether)] (42). Polyhedral niosomes are formed when the cholesterol content is low in the same system (43). Polyhedral niosomes are thermoresponsive and release the encapsulated drug when heated above 35°C (40). This can be useful for sunscreen formulations, in which the sunscreen can be released on exposure to the sun (40). Niosomes have been shown to penetrate the skin and enhance the permeation of drugs (44). Span niosomes showed significantly higher skin permeation and partitioning of enoxacin than those shown by liposomes and the free drug (44). The niosomes dissociate and form loosely bound aggregates, which then penetrate to the deeper strata (40). Furthermore, the skin penetration has been attributed to the flexibility of niosomes; this is supported by the fact that a decrease in cholesterol content increases the drug penetration through the skin (45). In addition, adsorption and fusion of niosomes with the skin surface increase the drug's thermodynamic activity, leading to enhanced skin penetration (46). In vitro studies have found that the chain length of alkyl polyoxyethylene in niosomes did not affect the cell proliferation of human keratinocytes, but the ester bond was found to be more toxic than the ether bond in the surfactants (47). Generally, the droplet size of these systems is less than 100 nm and they flow easily (48). A nanoemulsion is transparent, stable, and spontaneously formed, whereas a macroemulsion is milky and nonstable and requires some energy to form (49). The formation of a nanoemulsion is dependent on a narrow range of oil, water, surfactant, and cosurfactant concentration ratios (48). A cosurfactant is commonly used to lower the interfacial tension and fluidize the interfacial surfactant (48–50). 
Nonionic and zwitterionic surfactants are the first line of choice for emulsion-based systems (51). Structurally, nanoemulsions are biphasic, with oil or water as the continuous phase, depending on the phase ratios (48). As a nanoemulsion is in a dynamic state and the phases are interchangeable, it is difficult to characterize these systems, unlike other disperse systems. As these systems have water and oil phases, both hydrophilic and lipophilic drugs can be delivered using nanoemulsions (48,49). The surfactants in the system can act on the intercellular lipid structure and increase skin permeation (48). On the other hand, the oil phase may act as an occluding agent and can increase skin hydration (51). Drug release from nanoemulsions depends on whether the drug is in the internal or external phase (52). Nanoemulsions have been found to produce higher skin penetration than macroemulsions (53). In contrast, a comparative study of macroemulsions and nanoemulsions found no significant difference in the skin penetration of tetracaine (54). The emulsion droplets may collapse or fuse with the skin components, and thus the size of the emulsion may have a minimal effect on skin penetration. On the other hand, nanoemulsions have also been shown to penetrate through the hair follicles (55). Furthermore, the drug can be adsorbed, complexed, or conjugated to the surface of nanoparticles. Unlike the other systems discussed so far, these are relatively rigid nanosystems. Various types of biodegradable and nondegradable polymers can be used for the preparation of these nanosystems. Some of the polymers that have been used for topical or transdermal drug delivery include poly(lactide-co-glycolide), polymethacrylate, poly(butyl cyanoacrylate), poly(ε-caprolactone), and chitosan (56–60). Recently, poly(vinyl alcohol)–fatty acid copolymers and tyrosine-derived copolymers have also been used for preparing nanocapsules or nanoparticles for skin applications (61,62). 
Nanoparticles or nanocapsules can be prepared by either solvent evaporation or solvent displacement procedures (63). In the solvent evaporation technique, the polymer is dissolved in an organic phase, such as dichloromethane or ethyl acetate. This organic phase is then dispersed in an aqueous phase containing the surfactant and emulsified by sonication or high-pressure homogenization. Subsequently, the organic phase is removed by evaporation under reduced pressure or continuous stirring to form polymeric nanoparticles (63). In this method, a lipophilic drug is loaded in the polymeric matrix by dissolving the drug in the organic phase. In the solvent displacement method, the polymer is dissolved in a water-miscible organic solvent and injected into an aqueous medium with stirring in the presence of the surfactant as a stabilizer (63). Water-miscible organic solvents such as ethanol, acetonitrile, and acetone are used. The rapid diffusion of the organic solvent through the aqueous phase with the dissolved polymer at the interface leads to the formation of nanoparticles. Only a few studies have investigated the size-dependent penetration of polymeric nanoparticles into the skin. On the other hand, there was a size- and time-dependent accumulation of particles in the follicular regions, where 20-nm particles accumulated more than 200-nm particles. The 40-nm particles were found to penetrate deeper in the follicles and also further penetrate into the epidermal Langerhans cells present at the infundibulum of the hair follicles. On the other hand, the larger particles (750 and 1500 nm) did not penetrate into the follicles. In this regard, hair follicles can be used as a reservoir for drug delivery to localize the drug to the hair follicles or deliver the drug to the surrounding epidermal cells (4). This was found in tape-stripping studies in human volunteers using fluorescent-labeled poly(lactide-co-glycolide) nanoparticles (300–400 nm). 
The nanoparticles are slowly cleared from the hair follicles by sebum secretions and the migration of particles to nearby cells and through the lymphatic system (4). The surface charge on the polymeric nanoparticles also influences their permeation through the skin.
Usually, the release study is carried out by controlled agitation followed by centrifugation. Due to the time-consuming nature of, and technical difficulties encountered in, the separation of nanoparticles from the release media, the dialysis technique is generally preferred. Various researchers have proposed different methods with one common strategy: using a synthetic membrane bag with specified porosity to hold the sample. The bag containing the sample is immersed in the recipient fluid, which is stirred at a specified rpm. The samples are withdrawn at regular intervals and analyzed for drug content. Some reports by various workers on the methods adopted to determine the release profile are summarized in the following text. The release behavior of the drug from the gelatin matrix showed a biphasic pattern characterized by an initial burst, followed by a slower sustained release. It is evident that the method of drug incorporation has an effect on the release profile. If the drug is loaded by the incorporation method, the system has a relatively small burst effect and better sustained-release characteristics (31). If the nanoparticle is coated by a polymer, the release is then controlled by diffusion of the drug from the core across the polymeric membrane. The membrane coating acts as a barrier to release; therefore, the solubility and diffusivity of the drug in the polymer membrane become the determining factors in drug release. Furthermore, the release rate can also be affected by ionic interaction between the drug and added auxiliary ingredients. When the drug interacts with auxiliary ingredients to form a less water-soluble complex, the drug release can be very slow with almost no burst release effect (32); conversely, auxiliary ingredients that increase the drug's solubility can accelerate its release. 
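For diffusion-controlled release from a polymeric matrix of the kind discussed above, one classic description is the Higuchi square-root-of-time model, Q(t) = kH·√t, where Q is the cumulative amount released. A minimal sketch of fitting the Higuchi rate constant to sampled release data (the data and the choice of model are illustrative, not the chapter's prescription):

```python
from math import sqrt

def fit_higuchi(times, released):
    """Least-squares fit of the Higuchi model Q(t) = kH * sqrt(t) to
    cumulative-release measurements; returns the rate constant kH.
    Minimizing sum((Q_i - k*sqrt(t_i))^2) over k gives, in closed form,
    k = sum(Q_i * sqrt(t_i)) / sum(t_i)."""
    num = sum(q * sqrt(t) for t, q in zip(times, released))
    den = sum(times)
    return num / den
```

If a power-law fit Q = k·t^n gave an exponent clearly different from 0.5, that would instead point to non-Fickian (anomalous) transport, for which other models apply.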
Depending on the drug–polymer interaction, several mathematical models have been discussed based on the type and mechanism of drug release from micro/nanoparticulate drug delivery systems. Some general methods and instrumentation used for cytomic study are discussed in this chapter. Flow cytometry uses the principles of light scattering, light excitation, and emission of fluorochrome molecules to generate specific multiparameter data from particles and cells across a range of sizes. As cells or particles of interest intercept the light source, they scatter light, and fluorochromes are excited to a higher energy state. This energy is released as a photon of light with specific spectral properties unique to different fluorochromes. 
Commonly used fluorescent dyes and their excitation and emission spectra are given in Figure 1 (2). These images also include the most common laser light sources with their multiple lines of emission. One unique feature of flow cytometry is that it measures fluorescence per cell or particle. Both scattered light and emitted light from cells and particles are converted to electrical pulses by optical detectors. Collimated (parallel-waveform) light is picked up by confocal lenses focused at the intersection point of the cells and the light source. For example, a 525-nm band-pass filter placed in the light path prior to the detector will allow only "green" light into the detector. This type of amplification (typically logarithmic) expands the scale for weak signals and compresses the scale for "strong" or specific fluorescence signals. Flow cytometry data outputs are stored in the computer as listmode and/or histogram files. In Figure 1, excitation spectra are represented by the gray lines, while emission spectra are in black; the bottom part of the table summarizes the emission wavelengths of various light sources used in flow cytometry. Histogram files can be in the form of one-parameter or two-parameter files, and consist of a list of the events corresponding to the graphical display specified in your acquisition protocol. A one-parameter histogram is a graph of cell counts on the y-axis and the measurement parameter on the x-axis; brighter and specific fluorescence events will yield a higher pulse height, and thus a higher channel number, when displayed as a histogram. A two-parameter histogram is a graph representing two measurement parameters, on the x-axis and the y-axis, with cell count as height on a density gradient, similar to a topographical map. 
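The amplification described in the text, expanding weak signals while compressing strong ones, is characteristic of the logarithmic amplifiers used for fluorescence channels. A minimal sketch of mapping a pulse height to a channel number, assuming a common 4-decade, 1024-channel convention (actual scales are instrument-specific):

```python
from math import log10

def log_channel(signal, decades=4, channels=1024, min_signal=1.0):
    """Map a detector pulse height onto a logarithmic channel scale.
    Each decade of signal spans channels/decades channels, so weak
    signals are spread out and strong signals are compressed."""
    if signal <= min_signal:
        return 0
    ch = (channels / decades) * log10(signal / min_signal)
    return min(int(ch), channels - 1)  # clamp to the top channel
```

A tenfold increase in fluorescence thus moves an event up by a fixed 256 channels anywhere on the scale, which is what makes dim and bright populations visible on the same histogram.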
Listmode files consist of a complete listing of all events corresponding to all the parameters collected, as specified by one's acquisition protocol. Once the data are collected and written into a listmode file, one can replay the file using either the specific protocol used for collection or any other program specifically designed for the analysis of flow cytometry data. Instead, the technology allows automated analysis of solid-phase samples, including adherent cultured cells, tissue sections, cancer tissue imprints, and cytology smears, preserving the sample along with the exact position of each measured sample. This important feature allows the researcher to automatically return to visually inspect and interrogate specific cells having defined genetic, biochemical, or morphological properties, or to remeasure specimens after re-treating them with reagents or drugs. This not only allows for a more efficient use of reagents and other resources but also provides for direct and easy cross-correlation of compound effects on multiple cellular targets from the same experiment. The targets/biological indicators of toxicity we chose include cell membrane permeability, nuclear morphology, mitochondrial transmembrane potential, and induction of apoptosis. Our results show that single-walled nanotubes are more potent than multiwalled nanotubes or C60 fullerene in affecting the mitochondrial transmembrane potential in the two cell lines studied. The key feature of confocal microscopy is its ability to produce in-focus images of thick specimens, a process known as optical sectioning. Images are acquired point by point and reconstructed with a computer, allowing three-dimensional reconstructions of topologically complex objects. In a confocal laser scanning microscope, a laser beam passes through a light source aperture and is then focused by an objective lens into a small (ideally diffraction-limited) focal volume within a fluorescent specimen. 
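Replaying listmode data into a one-parameter histogram, as described above, amounts to binning each event's channel value for the chosen parameter. A minimal sketch (the event layout and the parameter name are hypothetical, not a real cytometer file format):

```python
def one_parameter_histogram(events, parameter, channels=1024):
    """Build a one-parameter histogram (cell counts per channel) from
    listmode events, where each event is a dict mapping a parameter
    name to its recorded channel number."""
    counts = [0] * channels
    for event in events:
        ch = event[parameter]
        if 0 <= ch < channels:  # ignore out-of-range channels
            counts[ch] += 1
    return counts
```

The same event list can be replayed against any parameter or gate without re-acquiring the sample, which is the practical advantage of listmode storage over pre-binned histogram files.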
A mixture of emitted fluorescent light and reflected laser light from the illuminated spot is then recollected by the objective lens. A beam splitter separates the light mixture by allowing only the laser light to pass through and reflecting the fluorescent light into the detection apparatus. The detector aperture obstructs the light that is not coming from the focal point, as shown by the dotted gray line in the image. The out-of-focus light is suppressed: most of the returning light is blocked by the pinhole, resulting in sharper images than those obtained from conventional fluorescence microscopy techniques, and permitting one to obtain images of various z-axis planes (also known as z-stacks) of the sample.