The α2-adrenoceptor agonists medetomidine and dexmedetomidine produce reliable sedation and some degree of analgesia, permitting minor procedures to be performed in clinical veterinary practice.Butorphanol is a synthetic opioid that is frequently used with medetomidine to enhance the quality of sedation in dogs.However, all α2-adrenoceptor agonists have adverse effects, mainly related to depression of the cardiovascular system.Specifically, α2-adrenoceptor agonists induce vasoconstriction, followed by marked baroreflex-mediated bradycardia.The bradycardia is associated with pronounced decreases in cardiac output and oxygen delivery, an outcome that challenges the usefulness of these drugs.Although the beneficial effects of α2-adrenoceptor agonists are produced at the level of the central nervous system, activation of peripheral α2-adrenoceptors located within vascular smooth muscle leads to the initial vasoconstriction and related cardiovascular effects.In view of this, MK-467, a peripherally acting α2-adrenoceptor antagonist, has been investigated for its ability to prevent or attenuate the negative impact of dexmedetomidine and medetomidine in dogs.Only a small proportion of MK-467 crosses the blood–brain barrier into the mammalian CNS.Thus, the pharmacodynamic actions of MK-467 are limited to tissues and organs outside the blood–brain barrier.Several studies have demonstrated that MK-467 is able to prevent cardiovascular depression in dogs without substantially altering the sedation elicited by dexmedetomidine and medetomidine.It has also been shown that concomitant administration of MK-467 attenuates the cardiovascular effects of a medetomidine–butorphanol combination when both are given by intramuscular injection in the same syringe.Furthermore, an increase in the absorption rate of medetomidine, when combined with MK-467 for IM administration, has been reported.To date, studies on MK-467 in dogs have been performed using laboratory beagles under controlled, experimental conditions.Hence, our aim was to investigate the effects of MK-467 on sedation and bradycardia expected after IM administration of medetomidine and butorphanol in healthy dogs of various breeds in a clinical environment.We hypothesised that MK-467, when co-administered IM with medetomidine and butorphanol, would attenuate bradycardia without impairing the sedative action of this commonly used combination.After receiving approval from the National Animal Experimental Board of Finland, the study was performed at the Veterinary Teaching Hospital, Faculty of Veterinary Medicine, University of Helsinki, Finland.The target population was client owned dogs that required sedation for non-invasive radiographic imaging.Inclusion criteria were weight ≥5 kg, age from 3 months to 10 years and American Society of Anesthesiologists classification I and II.Exclusion criteria were breed-related contraindication for deep sedation, systemic disease or medications affecting the CNS.Informed consent was obtained from the owners.Most of the dogs enrolled in the study were scheduled for radiographic imaging required by the Finnish Kennel Club Health Programme for the screening of canine genetic diseases and defects.In a randomised, complete, block design, animals were assigned to receive one of two treatments: 0.5 mg/m2 medetomidine HCl + 0.1 mg/kg butorphanol tartrate; or 0.5 mg/m2 medetomidine HCl + 0.1 mg/kg butorphanol tartrate + 10 mg/m2 MK-467 HCl.The body surface area was calculated using the following formula: body surface area = 
10.1 × (body weight in kg)^(2/3) × 10^−2. The dose of medetomidine HCl was equivalent to a dose of 29.5 μg/kg for a 5 kg dog and 11.7 μg/kg for an 80 kg dog. A solution containing 0.5 mg/mL medetomidine HCl was used for the MB treatment, and a solution containing 0.5 mg/mL medetomidine HCl and 10 mg/mL MK-467 HCl was used for the MB-MK treatment. Butorphanol was drawn up separately and mixed with the solution containing medetomidine before administration. The end volume of the injectable solution in both treatments was 0.03 to 0.07 mL/kg, depending on the weight. Randomisation into treatment groups was performed in blocks for breed and weight to ensure relatively homogeneous populations between treatments. Treatments were administered IM into the gluteal muscles. Ten minutes after drug injection, a catheter was inserted aseptically into a cephalic vein and blood was drawn into tubes containing ethylene diamine tetra-acetic acid for plasma drug concentration analyses and complete blood counts, and into a serum tube for basic serum chemistry. Blood samples obtained later than 20 min after drug injection were excluded from the plasma drug concentration data. The total volume harvested was <10 mL, representing no more than 3% of the total blood volume of a 5 kg dog. Plasma was separated by centrifugation at 2300 g for 10 min within 30 min after collection and stored at −20 °C until analysis for medetomidine, butorphanol and MK-467 concentrations. Oxygen was supplemented with a loose mask at 2–4 L/min according to the dog’s size. Prior to treatment administration, heart rate was assessed by auscultation, respiratory rate was assessed by observation of thoracic movements, colour of the mucous membranes was assessed by direct observation and the level of sedation was scored. The evaluations were repeated at 5 min after treatment and thereafter at 10 min intervals. Rectal temperatures were measured prior to treatment administration and every 30 min thereafter. Dogs were passively insulated with blankets; if the body temperature decreased to <37 °C, they were warmed actively by a convective temperature management system. The primary investigator who assessed the sedation was blinded to the treatment. A second investigator administered the treatments and recorded the HRs and other clinical variables. Sedation was determined using a visual analogue scale from 0 to 100, where 0 represents no sedation and 100 represents the animal in lateral recumbency, unresponsive to a loud hand clap. The area under the sedation score–time curve for VAS was calculated for the first 30 min after treatment using the trapezoidal method. ‘Head down time’ was recorded as the time when the dog had become recumbent and did not react to the hand clap. If the level of sedation was inadequate for performing the
In a prospective, randomised, blinded clinical trial, 56 client-owned dogs received one of two IM treatments: (1) 0.5 mg/m2 medetomidine + 0.1 mg/kg butorphanol (MB, n = 29); or (2) 0.5 mg/m2 medetomidine + 0.1 mg/kg butorphanol + 10 mg/m2 MK-467 (MB-MK, n = 27).Heart rates and visual sedation scores were recorded at intervals.The area under the sedation score-time curve for visual analogue scale (AUCVAS30) was calculated for the first 30 min after treatment using the trapezoidal method.
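To make the dose scaling and sedation-scoring arithmetic above concrete, the following minimal Python sketch (not part of the study; function names are illustrative) reproduces the body surface area formula, the per-kilogram medetomidine doses quoted in the text, and a trapezoidal AUCVAS30 calculation for hypothetical sedation scores.

    # Illustrative sketch (not from the study): body-surface-area dose scaling
    # and the trapezoidal AUC of visual sedation scores described in the text.
    import numpy as np

    def body_surface_area_m2(weight_kg: float) -> float:
        # BSA (m^2) = 10.1 x weight(kg)^(2/3) x 10^-2, as used for dosing
        return 10.1 * weight_kg ** (2.0 / 3.0) * 1e-2

    def medetomidine_dose_ug_per_kg(weight_kg: float, dose_mg_per_m2: float = 0.5) -> float:
        dose_mg = dose_mg_per_m2 * body_surface_area_m2(weight_kg)
        return dose_mg / weight_kg * 1000.0  # convert mg/kg to ug/kg

    def auc_vas30(times_min, vas_scores) -> float:
        # Area under the sedation score-time curve over the first 30 min
        t = np.asarray(times_min, dtype=float)
        v = np.asarray(vas_scores, dtype=float)
        mask = t <= 30.0
        return np.trapz(v[mask], t[mask])

    print(medetomidine_dose_ug_per_kg(5.0))    # ~29.5 ug/kg, matching the text
    print(medetomidine_dose_ug_per_kg(80.0))   # ~11.7 ug/kg
    print(auc_vas30([0, 5, 15, 25, 30], [0, 40, 70, 80, 80]))  # hypothetical scores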
the MB-MK group, which suggests that MK-467 did not override the central component of the bradycardic action of α2-agonists, although it was presumed to have been able to alleviate the baroreflex-mediated bradycardia caused by peripheral α2-adrenoceptor activation.The present results should not be over-interpreted, since we did not show that cardiac output or oxygen delivery were improved by the addition of MK-467.However, in experimental canine studies, the alleviation of bradycardia by MK-467 has been associated with improvement of cardiac output.The lack of blood pressure data in our study is a major limitation; blood pressure was monitored non-invasively and blood pressure measurements could not be carried out systematically at the prescribed same time points, nor using the same artery, because the clinical priority was to avoid excessive interference with the radiological exams.Therefore, we did not consider the data to be reliable and further studies using clinical cases are needed.The onset of sedation appeared to be faster in the MB-MK group and, initially, was deeper than in the MB group.The ‘head-down’ time was significantly shorter in the MB-MK group and we detected a deeper overall sedation during the first 30 min.Subsequently, dogs in the MB-MK group were more alert than dogs in the MB group, since more additional medetomidine and less atipamezole were required.Thus, the use of MK-467 can be considered advantageous, especially if short, intense, sedation is required.If additional medetomidine was needed, medetomidine alone was given to dogs in both groups, since no studies have been performed on the effects of repeated doses of MK-467 in sedated dogs.Since the need for additional sedation arose after a mean of 52 min in dogs in the MB-MK group, the duration and magnitude of the sedative effect of medetomidine and butorphanol combined with MK-467 could provide sufficient sedation to complete a minor non-invasive procedure.The quality and level of sedation induced by medetomidine in our study probably were improved by butorphanol, independent of MK-467, in accordance with that reported previously.Conversely, the effects of MK-467 on the plasma concentration profile of medetomidine and butorphanol apparently also affected the depth of sedation.The interactions between α2-adrenoceptors and their antagonists start at the site of the extravascular injection; MK-467 enhances the IM absorption of medetomidine, probably by preventing local vasoconstriction caused by medetomidine and it also appears to affect the absorption of other co-administered sedatives.In our study, plasma medetomidine and butorphanol concentrations were significantly higher in the MB-MK group than in the MB group.The plasma sample was collected approximately 14 min after the injection, when we expected to detect a clear difference between the treatments based on our previous findings.Furthermore, Restitutti et al. 
reported that the time-concentration curves of dexmedetomidine intersected at approximately 30 min; the plasma concentration of medetomidine was higher in the presence of MK-467 before the 30 min time point, whereas later it was higher in dogs that had received medetomidine alone IM.These effects of MK-467 on the plasma concentration profiles of medetomidine and butorphanol probably explain both the deeper initial sedation observed in the MB-MK group and the later lighter plane of sedation in the MB-MK group when compared with the MB group.The most frequent side effects after sedation in both groups were lethargy and loose faeces during the evening after the examination.All dogs that had loose faeces had received one or both of the α2-adrenoceptor antagonists MK-467 and/or atipamezole.Atipamezole restores intestinal motility after α2-adrenoceptor agonist-induced sedation and induces defaecation in dogs.Moreover, frequent defaecation after administration of MK-467 has been reported in horses.Therefore instead of giving atipamezole in one single dose, we administered two equal smaller doses if needed, to reduce the risk of intestinal hypermotility, especially in the presence of MK-467, but still to have the desired effect of reversing the sedation.Honkavaara et al. administered atipamezole 50 μg/kg to reverse sedation induced by IV dexmedetomidine and MK-467.MK-467 alleviates the bradycardia induced by medetomidine in dogs in a clinical setting, and provides reliable sedation for short term clinical procedures, such as diagnostic imaging, when it is combined with IM medetomidine and butorphanol in healthy dogs.In addition, MK-467 increases the early stage plasma concentration of both medetomidine and butorphanol when administrated IM in the same syringe and results in deeper initial sedation with shorter duration.None of the authors of this paper have a financial or personal relationship with other people or organisations that could inappropriately influence or bias the content of the paper.
The aim of this study was to investigate the clinical usefulness of MK-467 (vatinoxan; L-659’066) in dogs sedated for diagnostic imaging with medetomidine-butorphanol.It was hypothesised that MK-467 would alleviate bradycardia, hasten drug absorption and thus intensify the early-stage sedation.Plasma drug concentrations were determined in venous samples obtained approximately 14 min after injection.Additional sedation (50% of original dose of medetomidine IM) and/or IM atipamezole for reversal were given when needed.AUCVAS30 was significantly higher after MB-MK.More dogs treated with MB-MK required additional sedation after 30 min, but fewer needed atipamezole for reversal compared with MB.Plasma concentrations of both medetomidine and butorphanol were higher after MB-MK.MK-467 alleviated the bradycardia, intensified the early stage sedation and shortened its duration in healthy dogs that received IM medetomidine-butorphanol.
binding action starts at the edge of the gold electrode.Secondly, as seen in the inset of Fig. 3, the electric field distribution suggests that LSPR-active region is mostly on the sidewall of the gold nanodiscs.The introduced antigen must bind to the antibodies residing on these sidewalls to obtain the full LSPR response.We have demonstrated the simultaneous measurement capabilities of a hybrid sensor that integrates a transmission-mode LSPR sensor with a QCM sensor.The device provides a versatile tool for studying dynamic processes in biomolecular reactions and thin films.The measurement platform can be further improved to include a QCM dissipation measurement or choosing a QCM to operate at a higher frequency to obtain higher sensitivity.Moreover, the costly and bulky equipment required to detect the LSPR and QCM signal could be replaced by Si pn diodes and thin film bulk acoustic resonators resulting in a low-cost, portable, hybrid LSPR and QCM device suitable for POC diagnostics.
We report on the design and fabrication of a hybrid sensor that integrates transmission-mode localized surface plasmonic resonance (LSPR) into a quartz crystal microbalance (QCM) for studying biochemical surface reactions.The coupling of LSPR nanostructures and a QCM allows optical spectra and QCM resonant frequency shifts to be recorded simultaneously and analyzed in real time for a given surface adsorption process.This integration simplifies the conventional combination of SPR and QCM and has the potential to be miniaturized for application in point-of-care (POC) diagnostics.The influence of antibody-antigen recognition effect on both the QCM and LSPR has been analyzed and discussed.
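For readers unfamiliar with how a QCM frequency shift is converted to adsorbed mass, the sketch below uses the standard Sauerbrey rigid-film relation; this relation and the 5 MHz fundamental frequency are assumptions chosen for illustration and are not stated in the text.

    # Hedged illustration (not from the paper): under the standard Sauerbrey
    # rigid-film assumption, a QCM frequency shift maps linearly to areal mass.
    import math

    RHO_Q = 2.648      # quartz density, g/cm^3
    MU_Q = 2.947e11    # quartz shear modulus, g/(cm s^2)

    def sauerbrey_mass_ng_per_cm2(delta_f_hz: float, f0_hz: float = 5e6) -> float:
        # delta_m = -delta_f * sqrt(rho_q * mu_q) / (2 * f0^2), returned in ng/cm^2
        grams_per_cm2 = -delta_f_hz * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)
        return grams_per_cm2 * 1e9

    # A -10 Hz shift on an assumed 5 MHz crystal corresponds to roughly 177 ng/cm^2.
    print(sauerbrey_mass_ng_per_cm2(-10.0))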
different cations has a characteristic binding time of 0.01–1 ms; this is much faster than we could resolve in our experimental data.To form the force-bearing chain from integrin to F-actin of cytoskeleton, we see the following reactions necessary: 1) the binding of talin and kindlin to integrins, 2) the binding of paxillin to kindlin, 3) the binding of talin to F-actin, 4) the binding of FERM domain of FAK to talin, 5) the binding of FAT domain of FAK to paxillin, and 6) the binding of FAK/paxillin to the F-actin.It is difficult to find any estimates of the rates of these processes.One can find evidence for the fast strengthening of focal adhesions under load, but this is not the same as the assembly of these complexes at the onset of spreading.Our experiments suggest that four of these reactions are quite slow; we cannot be certain which, but we have measured the combined activation energy of these four reactions in 3T3 and EA cells.Only once the full force chain of the integrin adhesome is assembled can the mechanosensor produce the signal for the cell to modify its morphology to the substrate.Another possible scenario that could account for our five-step initial kinetics still has to rely on activation and adhesion of integrins but could include a phase of initial viscoelastic spreading that should be controlled by physical interactions on a more macroscopic scale.In that case, we would require a few slow steps of adhesome assembly.We cannot rule this possibility out with our data, but it is interesting to note that the universal timescale suggested by Cuvelier et al. was between 5 and 10 min.Using their model with parameters they fitted for HeLa cells with our fibronectin density, gives the estimate of a spreading time to our criterion of around 2–3 min.As such, this is not inconsistent with our data, with the caveat that we are still seeing the adhesion process before spreading in the early power-law kinetics.It is also unclear whether there should be an Arrhenius activation-type temperature dependence for their spreading timescale.Certainly, the work of Cuvelier et al. avoids kinetic complications by simply considering the adhesion energy gain per unit area of the cell.The unusual feature of this work is the use of population dynamics of spreading cells to infer details of the microscopic processes governing the cell response to an external substrate.By linking the results to nucleation theory, details of which are given in Supporting Materials and Methods, we found a, to our knowledge, novel way of looking at the onset of cell spreading as a problem of complex assembly.
When cells are plated onto substrates, their morphology and even stem-cell differentiation are influenced by the stiffness of their environment. Stiffer substrates give strongly spread (eventually polarized) cells with strong focal adhesions and stress fibers; very soft substrates give a less developed cytoskeleton and much lower cell spreading. The kinetics of this process of cell spreading has been studied extensively, and important universal relationships have been established on how the cell area grows with time. Here, we study the population dynamics of spreading cells, investigating the characteristic processes involved in the cell response to the substrate. We show that unlike the individual cell morphology, this population dynamics does not depend on the substrate stiffness. Instead, a strong activation temperature dependence is observed. Different cell lines on different substrates all have long-time statistics controlled by thermal activation over a single energy barrier ΔG ≈ 18 kcal/mol, whereas the early-time kinetics follows a power law ∼t^5. This implies that the rate of spreading depends on an internal process of adhesion complex assembly and activation; the operational complex must have five component proteins, and the last process in the sequence (which we believe is the activation of focal adhesion kinase) is controlled by the binding energy ΔG.
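As a rough illustration of the reported activation behaviour, the following sketch (not the authors' code; the rate constant is a placeholder) evaluates the Arrhenius speed-up implied by ΔG ≈ 18 kcal/mol and the early-time ∼t^5 form expected when five sequential slow assembly steps must complete.

    # Minimal sketch: Arrhenius temperature dependence for a single activation
    # barrier dG ~ 18 kcal/mol, plus the early-time t^5 form expected for five
    # sequential slow assembly steps.
    import math

    R_KCAL = 1.987e-3  # gas constant, kcal/(mol K)

    def arrhenius_rate_ratio(dG_kcal_per_mol: float, T1_k: float, T2_k: float) -> float:
        # k(T1)/k(T2) = exp(-dG/R * (1/T1 - 1/T2))
        return math.exp(-dG_kcal_per_mol / R_KCAL * (1.0 / T1_k - 1.0 / T2_k))

    def early_time_fraction(t_min: float, k_per_min: float) -> float:
        # Fraction of cells past five sequential Poisson steps at early times:
        # P(t) ~ (k t)^5 / 5!   (valid while k t << 1)
        return (k_per_min * t_min) ** 5 / math.factorial(5)

    # Raising temperature from 27 C to 37 C speeds spreading by roughly 2.6x
    print(arrhenius_rate_ratio(18.0, 310.15, 300.15))
    print(early_time_fraction(2.0, 0.05))  # illustrative rate constant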
Environmental factors, such as nutritional and psychological conditions during the fetal period, have been shown to impact health and disease conditions later in adulthood; this is now known as the theory of developmental origins of health and disease. Evidence to support this theory was originally established from the nutritional conditions of pregnant mothers but has been extended to include chemical exposure. The impact of chemical exposure on the developing brain, which might result in the pathogenesis of mental disorders, has been reported to be considerably large and dependent on exposure conditions, such as the chemical species, dose, and time of exposure. To date, exposure to low doses of chemicals, such as dioxin, methylmercury, and lead, during the prenatal period has not been found to manifest as conspicuous abnormalities in mothers and fetuses; nevertheless, such exposure affects mental disorders later in life. Because the developmental process of the central nervous system is orchestrated to proceed on a finely controlled timeframe and in a correct sequence, chemical exposure during a critical phase of brain development could induce deviation of the neural network, leading to higher brain function abnormalities. Adult rodent offspring born to dams exposed to low doses of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), the most toxic congener among 29 dioxin congeners, were found to manifest cognitive and behavioral abnormalities, such as in spatial, reversal and alternate, and paired associate learning and memory, as well as in anxiety and sociality. Previous studies have shown that in utero and lactational exposure disrupts the expression of NMDA receptor subunits and BDNF in the hippocampus and cerebral cortex and induces an imbalance of neural activity between the medial prefrontal cortex and amygdala. Although it is plausible to hypothesize that there are some alterations in morphology, there is a notable paucity of neuromorphological evidence linking chemical exposure with cognitive and behavioral abnormalities. TCDD binds the aryl hydrocarbon receptor (AhR), a transcription factor present in cells of various organs, including the brain, and induces a variety of toxicities in an AhR-dependent manner, as has been shown by the three independent colonies of AhR-null mice. Although recent studies have revealed the molecular basis of dioxin toxicities, such as hydronephrosis, prostate development, and developing zebrafish brain circulation, knowledge about the regulation of AhR downstream signals in the majority of dioxin toxicities remains elusive. To study AhR gain-of-function, transgenic mouse strains that expressed constitutively active AhR (CA-AhR) were produced to mimic the situation of AhR agonist exposure and were found to manifest typical signs of dioxin toxicities such as thymic involution and liver enlargement, as well as tumors in the glandular part of the stomach. These results strongly suggest that AhR signaling is essential to the induction of TCDD toxicity. Accordingly, in the present study, we used hippocampus-specific in utero CA-AhR electroporation to study whether the activation of AhR signaling affected neuronal morphology in the developing brain. Next, we studied when and how perinatal low-dose TCDD exposure affected neuronal morphology in the brains of developing and aged mice and compared the phenotypes of these mice. Because previous studies have reported that TCDD affects fear memory, we selected the hippocampus and amygdala for an intensive morphological analysis. TCDD was purchased from Cambridge Isotope Laboratory. Corn oil and
n-nonane were purchased from Wako Pure Chemicals and Nakalai Tesque, respectively.The manufacturers of other reagents and instruments used in this study are described in each section below.The animal experimental protocols used in this study were approved by the Animal Care and Use Committees of Keio University and the University of Tokyo.For the in utero CA-AhR electroporation experiments, time-mated pregnant C57BL/6 mice were purchased from Japan SLC.For TCDD-exposure experiments, C57BL/6 mice were purchased from CLEA Japan, and Thy1-green fluorescent protein-M mice were a kind gift from Dr. G. Feng at the Massachusetts Institute of Technology.Female C57BL/6 wild-type mice were mated with male Thy1-GFP-M mice to produce pups bearing the Thy1-GFP allele.These mice were housed in an animal facility with a set temperature of 22 °C–24 °C and humidity of 40% − 60%, as well as a 12/12-h light–dark cycle.Laboratory rodent chow and distilled water were provided ad libitum.To obtain a full-length AhR cDNA fragment from a C57BL/6 mouse, a nested polymerase chain reaction was performed using a mouse liver cDNA library as the template and two primer pairs: 5′-CCTCCGGGACGCAGGTG-3′/5′-AGCATCTCAGGTACGGGTTT-3′ and 5′-CTCGAGGCGGGCACCATGAGCAGCGGCGCCA-3′/5′-CTCGAGTCAACTCTGCACCTTGCT-3′.An AhR deletion mutant that lacks a part of the ligand-binding domain was shown to function as CA-AhR.To produce this CA-AhR cDNA fragment, it was amplified by PCR using the specific primers 5′-CTCGAGGCGGGCACCATGAGCAGCGGCGCCA-3′ and 5′-CTCGAGTCAACTCTGCACCTTGCT-3′ and pQCXIN-CA-AhR-EGFP as a template.The resulting AhR and CA-AhR fragments were excised by XhoI digestion and inserted into the XhoI site of pCAGGS1 to generate the pCAGGA1-AhR and pCAGGS1-CA-AhR plasmids, respectively.These plasmids were subsequently used for in utero electroporation to induce AhR and CA-AhR expressions, respectively, in hippocampal CA1 pyramidal neurons.Pregnant mouse surgery and embryo manipulation in utero were performed as previously described.Briefly, on gestational day 14, pregnant C57BL/6 mice were deeply anesthetized with sodium pentobarbital and laparotomized to expose the uterine horns.Plasmid DNA was dissolved in 10 mM Tris–HCl at a concentration of 2 μg/μl, and Fast Green solution was added to the plasmid solution in a 1:10 ratio to monitor the injection.For embryonic hippocampal transfection, approximately 1–2 μl of plasmid solution was injected into the lateral telencephalon ventricle of each embryo with a glass micropipette made from a microcapillary tube.We used a tweezer-type electrode for electroporation and placed the cathode adjacent to the neocortex on the hippocampal side.Using an electroporator, electronic pulses were charged four times at 950-ms intervals.The uterine horns were placed back into the abdominal cavity
Among various environmental chemicals, in utero and lactational dioxin exposure has been extensively studied and is known to induce higher brain function abnormalities in both humans and laboratory animals.Transfecting a constitutively active AhR plasmid into the hippocampus via in utero electroporation on gestational day (GD) 14 induced abnormal dendritic branch growth.
aged mice, a decrease in spine density in the hippocampal CA1 but not BLA was observed.Dendritic spines bear excitatory synapses that express glutamate receptors and play an important role in neuronal transmission."A decrease in hippocampal spine density was observed in patients with Alzheimer's disease, suggesting a relationship between spine density and memory function.The decreased number of spines in the CA1 region of aged mice are thought to be responsible for synaptic dysfunction and consequent impaired memory function.In particular, perinatal TCDD exposure under the same dosing conditions in the present study was found to affect higher brain function in adulthood, including fear memory, behavioral flexibility, repetitive compulsory responses, and abnormal social behavior."Previous animal studies have shown a causal relationship of perinatal lead exposure with increased amounts of amyloid precursor protein and β-amyloid, indicators of Alzheimer's disease, later in life.However, very few reports show the effects of perinatal chemical exposure on neuronal morphology in aged animals.Thus, this animal model of AhR-disruption by perinatal TCDD-exposure with micromorphological phenotypes early in life, but with cognitive and behavioral phenotypes later in life, corresponds well to the theory of DOHaD, which was originally conceived to address nutritional states during pregnancy.We failed to analyze the spine density in CA-AhR-transfected hippocampus because of an insufficient resolution of the microscope, and the use of membrane-bound GFP will be pursued in a prospective study.GD 12.5, when TCDD was administered, is during the development period of the telencephalon, including the hippocampus and amygdala.Thus, it can be speculated that the impact of TCDD exposure on dendrites is not limited to the hippocampus and amygdala but is present in other region of the telencephalon, such as the cerebral cortex and olfactory bulb.The retarded growth of dendrites may be induced by the disruption of the dendritic elongation signaling due to altered expression of NMDA receptor subunits and BDNF.This speculation is supported by experimental evidence in that elongation of dendrites is regulated by NMDA receptor and BDNF signaling and that the gene expression of NMDA subunits and BDNF was altered by perinatal exposure to TCDD in rats.In these studies, dams were orally administered TCDD at a dose of 200 or 800 ng/kg b.w. or 100 or 700 ng/kg b.w. on GD 15.An increase in NR2A mRNA abundance but a decrease in NR2B mRNA abundance suggests that signal transmission via NMDA receptors did not function normally.In addition, perinatal exposure to TCDD suppresses the induction of BDNF mRNA expression, suggesting the negative regulation of dendritic growth.In the hippocampal CA1 region of the developing brain, the third branch in the TCDD-0.6 group was longer than that in the control group, but that in the TCDD-3.0 group was similar to that in the control group; this showed a nonmonotonic dose–response.Such a response pattern was observed for neuronal cell activity as well as abnormal mouse behavior under the same experimental conditions.In this previous study, mice born to dams exposed to 0.6 μg TCDD/kg b.w. showed abnormal behavioral flexibility and sociality compared with the control group, but mice born to dams exposed to 3.0 μg TCDD/kg b.w. 
were similar to the control group. Such a nonmonotonic dose response was supported by the immunostaining intensity of the neuronal activity markers, c-Fos and Arc proteins. Other examples of a nonmonotonic dose response were observed in rats that were subjected to a saccharine test and a paired association test. Other endpoints, such as the immune system and cellular proliferation, were also reported to follow a nonmonotonic dose response in TCDD-exposed rats. Exposure to TCDD as well as other chemicals is known to result in a nonmonotonic dose response, and the mechanisms of action of endocrine disrupting chemicals have been reported. Until AhR is saturated with TCDD, a monotonic dose response is considered to occur, whereas beyond the saturation of AhR, TCDD may crosstalk with estrogen receptors or other hormone receptors to induce a secondary reaction that is not mediated by AhR, and may negate the TCDD toxicity induced at a lower dose. In conclusion, micromorphological analysis of neuronal growth and neural network formation may clarify the relationship between low-dose chemical exposure and neurotoxicity phenotypes. An investigation of developmental neurotoxicity consequent to chemical exposure is expected to shed light on the underlying mechanisms of not only chemical toxicity but also mental disorders and related health conditions. The Transparency document associated with this article can be found in the online version. The authors declare that they have no actual or potential competing financial interests.
Increased prevalence of mental disorders cannot be solely attributed to genetic factors and is considered at least partly attributable to chemical exposure.However, how the perinatal dioxin exposure affects neuromorphological alterations has remained largely unknown.Therefore, in this study, we initially studied whether and how the over-expression of aryl hydrocarbon receptor (AhR), a dioxin receptor, would affect the dendritic growth in the hippocampus of the developing brain.Finally, we observed that 16-month-old mice born to dams exposed to perinatal TCDD as described above exhibited significantly reduced spine densities.These results indicated that abnormal micromorphology observed in the developing brain may persist until adulthood and may induce abnormal higher brain function later in life.
number of conserved divergences in some common kinase motifs.First, Phe and Gly in the DFG triad are often replaced by other hydrophobic residues.Second, in many FIKK kinases, the activation loop features proline in place of the more common alanine in the APE.Furthermore, the HRD motif in subdomain VI features a leucine in place of arginine.The absence of arginine in this position typically signifies that a kinase does not need auto-phosphorylation of the activation loop to become active.Interestingly, we found S1320 in PfFIKK8l, which is located in the activation loop and conserved in all FIKK kinases, to be auto-phosphorylated.To investigate the relevance of this phosphoserine to FIKK8 activation, we attempted to express a mutant form of PfFIKK8l with S1320 mutated to alanine; however, the mutated protein did not express.Therefore, a possible regulation mechanism for FIKK kinases mediated by phosphorylation of the activation loop remains to be confirmed.The inclusion of the NTE as an integral component of PfFIKK8l and CpFIKKd is corroborated by the kinetic parameters we obtained, all of which are in the range of active protein kinases with average to above average binding affinities for ATP and the optimized substrates used.Furthermore, sequence alignment indicates that the NTE is conserved among available FIKK8 orthologues from apicomplexan parasites and, to a lesser degree, the other FIKK paralogues found in P. falciparum and P. reichenowi.Given that PfFIKK8l and PfFIKK8o behaved nearly indistinguishably in the kinetics study, we propose that PfFIKK8o specifically defines the boundaries of the active FIKK8 domain and that M1049 is the start of the NTE.Identifying the functional significance of this NTE is left for future research; however, the evidence of auto-phosphorylation discussed above suggests the possibility of a regulatory role.In addition to phosphoserines in the NTE, our auto-phosphorylation experiment also revealed one phosphorylated site in the N-lobe and 2 more in the C-lobe of PfFIKK8l.The N-lobe site, namely S1099, has previously been reported in a phospho-proteomics study of P. 
falciparum. Furthermore, this phosphoserine is conserved in our autophosphorylated CpFIKKd sample and, significantly, located on the glycine-rich loop of both kinases – a region implicated in positioning the γ-phosphate during ATP hydrolysis. Previously, a phosphoserine in the same region of the yeast ATG1 protein kinase was found to be inhibitory. The peptide array study revealed a preference for Arg at the −3 and +3 positions for PfFIKK8l and CpFIKKd, with both enzymes showing the strongest selection for arginine at the +3 position when analyzed using consensus peptide substrates. Notably, the consensus peptides included basic residues at multiple positions upstream of the phosphorylation site. Therefore, it is possible that the more modest effect of replacing the −3 Arg residue is due to compensation by nearby basic residues. Both PfFIKK8l and CpFIKKd largely preserved their catalytic efficiencies when the shortened substrate PT was used in place of PO, suggesting that this short substrate may serve as an ideal tool for assaying FIKK8 activity and screening for small molecule inhibitors. In conclusion, our recombinant samples of PfFIKK8 and CpFIKK are orthologous and catalytically active protein kinases, both of which feature an approximately 40-residue-long integral N-terminal extension. Future research to determine the function of this extension may reveal the mechanism of FIKK kinases. It is also possible that, in vivo, regions of the proteins not included in our active constructs may play catalytic, regulatory and localization roles.
FIKKs are protein kinases with distinctive sequence motifs found exclusively in Apicomplexa.Here, we report on the biochemical characterization of Plasmodium falciparum FIKK8 (PfFIKK8) and its Cryptosporidium parvum orthologue (CpFIKK) - the only member of the family predicted to be cytosolic and conserved amongst non-Plasmodium parasites.Recombinant protein samples of both were catalytically active.We characterized their phosphorylation ability using an enzymatic assay and substrate specificities using an arrayed positional scanning peptide library.Our results show that FIKK8 targets serine, preferably with arginine in the +3 and -3 positions.Furthermore, the soluble and active FIKK constructs in our experiments contained an N-terminal extension (NTE) conserved in FIKK8 orthologues from other apicomplexan species.Based on our results, we propose that this NTE is an integral feature of the FIKK subfamily.
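A toy illustration of the reported substrate preference is sketched below; the scoring weights and the example peptide are hypothetical and only encode the stated preference for serine acceptors with arginine at the −3 and +3 positions.

    # Hypothetical sketch: scoring candidate substrate sites for the reported
    # FIKK8 preference (phospho-acceptor Ser with Arg at the -3 and +3 positions).
    # The weights below are illustrative placeholders, not the peptide-array data.

    def score_site(sequence: str, ser_index: int) -> float:
        score = 0.0
        if sequence[ser_index] != "S":
            return float("-inf")  # FIKK8 was found to target serine
        for offset, weight in ((-3, 1.0), (3, 1.5)):  # +3 Arg selected most strongly
            pos = ser_index + offset
            if 0 <= pos < len(sequence) and sequence[pos] == "R":
                score += weight
        return score

    peptide = "AKRLVSQGRTA"      # hypothetical peptide; Ser at index 5
    print(score_site(peptide, 5))  # 2.5: Arg present at both -3 and +3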
The repofs source code is available under an Apache license on GitHub: https://github.com/AUEB-BALab/RepoFS/.Empirical software engineering work often involves studying revision control system repositories maintained using the popular Git system .Revision control systems offer the ability to store and traverse versions of software, and are used by virtually all professional development teams.Therefore studies involving software analysis, software evolution, and software faults rely heavily on the ability to traverse and inspect the history of revisions.However, researchers point out that although studies based on mining software repositories depend on software engineering data mining tools , there are only few practical and reusable tools for such tasks .Under the Git system, the user can switch between revisions using the checkout command.However, switching through revisions is time consuming, especially on large repositories, because the working tree must be updated each time.Moreover, command-line tools and file explorers can work on only one revision at a time, making it cumbersome to run analysis tools as different processes for different revisions or to inspect two different revisions side-by-side.An alternative to checking out are Git’s commands that directly support accessing and traversing the history of revisions.Git’s interface to common file system operations through commands such as ls-tree and show has several drawbacks.First, users have to learn new commands and memorize an often inconsistent syntax.De Rosso et al. note that users regard Git’s interface as complex, counterintuitive, and difficult to learn.For example, in some cases a file’s revision is given as “revision -- file” while in others it is specified as “revision:file”.The operations are also significantly less efficient than running commands on a checked out tree.Then, the usability aids provided by the Unix shell, such as wildcards and filename completion, are missing from Git repository access commands, or they are provided only in a subset of cases.In addition, Git’s commands, lack the notion of a current directory and current revision.Unless one checks out a specific revision to navigate through it – an expensive operation – one has to specify the exact revision and the full path in each Git command.Finally, the lack of directory navigation in Git commands means that other features built on top of existing shell directory navigation facilities, such as the display of the current directory on the terminal window or the provision of a visited directory stack, are not available when using Git commands.In this paper we introduce repofs: a tool that exposes a Git repository as a virtual user-level file system.All commits, branches, tags and the contents of the revisions they point to, appear as separate directory trees allowing them to be easily and efficiently processed through command-line tools and file explorers.Thus repofs removes the need for performing costly checkout operations, by allowing the inspection of different directories and files of separate revisions concurrently and on demand through common shell methods instead of bespoke ls-tree and show commands.A virtual file system is a middle layer module between the user space and the kernel, allowing the creation of what seems like a regular file system without actually storing any data on the disk.repofs provides comparable performance to that of shell commands on a checked out revision and in many cases better performance compared to the use of Git 
tools.The contributions of this work are the provision of an open-source tool that allows the intuitive and efficient analysis of Git repositories, and the illustration of the analysis methods enabled by the tool.repofs can be installed by first installing its dependencies1 and then installing it from the Python Package Index.2,It has been tested under the gnu/Linux Debian Stretch distribution.repofs operates as a command line tool that accepts as required arguments a path to a local Git repository and a mount directory where the repository’s history of revisions will be mounted as a tree directory structure.After initialization, the mounted directory will contain the following subdirectories.The commits-by-date directory contains subdirectories named after each year within the range of the year of the repository’s first and last commit.Each year directory contains one directory for each month, which, in turn, contain directories named after the days of that month.Thus, the directory commits/yyyy/mm/dd contains directories each named after the commit hashes of the commits made on that date.Commits reference a given state/revision of the repository and we represent this as a commit hash directory.Each commit hash directory contains the state of the project at the time the commit was made, i.e. the tree structure associated with that commit, in which the contents of directories and files can be accessed in a read-only manner.In addition, the commit hash directories contain a hidden directory named .git-parents which contains symbolic links to the parents of the corresponding commit, and two hidden files named .author and .author-email which contain the name and email of the commit’s author respectively.For example, the directory pointed by the link commits-by-date/2015/11/27/c32..f93/.git-parents/38f..4b5 contains the contents of the root directory of the commit with hash 38f..4b5, which is a parent of the commit with hash c32..f93, which was created on 2015-11-27.For each file inside a commit hash directory the last access and change time are set to the time the commit was authored and committed respectively.Note that we use the committed time of the commit to organize the commits-by-date directory.The commits-by-hash directory contains subdirectories named after the commit hash of each commit stored in the repository.The commit hash subdirectories behave exactly as the commit hash subdirectories contained in the commits-by-date directory.Some tools misbehave when
Empirical software engineering work often involves studying revision control system repositories maintained using the popular Git system.Checking out each revision one wants to study is inefficient.We introduce RepoFS, a tool that exposes a Git repository as a virtual user-level file system.Commits, branches, and tags appear as separate directory trees allowing them to be efficiently processed through command-line tools and file explorers.
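Because RepoFS exposes commits as ordinary directories, history can be analysed with plain file operations. The sketch below (assuming a repository has already been mounted at a local directory named mnt) counts commits per year from the commits-by-date tree and reads an author name from the hidden .author file described in the text.

    # Sketch of analysing a RepoFS mount with ordinary file operations.
    # Assumes the repository is already mounted at MOUNT; directory names
    # follow the commits-by-date layout described in the text.
    import os
    from collections import Counter

    MOUNT = "mnt"

    def commits_per_year(mount: str = MOUNT) -> Counter:
        counts = Counter()
        by_date = os.path.join(mount, "commits-by-date")
        for year in sorted(os.listdir(by_date)):
            for month in os.listdir(os.path.join(by_date, year)):
                for day in os.listdir(os.path.join(by_date, year, month)):
                    day_dir = os.path.join(by_date, year, month, day)
                    counts[year] += len(os.listdir(day_dir))
        return counts

    def commit_author(mount: str, commit_hash: str) -> str:
        # Each commit-hash directory exposes a hidden .author file.
        path = os.path.join(mount, "commits-by-hash", commit_hash, ".author")
        with open(path) as f:
            return f.read().strip()

    if __name__ == "__main__":
        print(commits_per_year())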
.git-parents metadata directory.The difference between the two numbers gives the number of merge commits.5.How many directories were documented in the Seventh Research Edition Unix hierarchy and the FreeBSD 11.0.1 release?,The numbers are obtained by counting the number of indented paragraphs whose title ends with a slash in the troff text markup of the directory hierarchy manual page—hier.The typing of long paths is made easier through the use of filename completion, which the shell transparently implements on top of the repofs virtual file system.The git grep commands can be easily replaced by grep commands.6.How many errors does JSHint generate for the latest releases of jQuery?,jshint3 is a static analysis tool used for assessing code quality.The numbers are obtained by iterating through tags specifying a release after 3.0 and running jshint for their contents.We only keep the number of errors, which is the first word of the last line of the output.jshint works on the current directory tree, therefore using a git checkout for each revision is required.However, using repofs the files of different revisions can be accessed concurrently.Many tools aim to simplify the analysis of Git repositories.We compare repofs against tools that fall under two categories:GVFS and GitOD virtualize the file system beneath a Git repository so that Git tools see what appears to be a normal repository when, in fact, the files are not actually present on disk.Their aim is to reduce the use of bandwidth and disk space by initially downloading only the essential files.Both of those tools are aimed towards developers working on a specific component of a large repository, and do not need all components.Compared to these tools, repofs uses local copies of Git repositories and is not targeting the software development process.Tools similar to repofs use local copies of Git repositories and provide support for traversing history.FigFS provides commits, branches and tags subdirectories containing the history of the repository.However, the project is abandoned , and there are no installation instructions.GitFS is a well-maintained project that supports traversing and viewing the history of a repository.However, its focus is for creating new commits.Although both GitFS and FigFS support traversing history using a virtual file system, they do not do so efficiently.In contrast, repofs does not allow adding commits to the repository, focusing instead on the efficient support of msr tasks whose performance can be slow under GitFS and FigFS.Cosentino et al. 
propose avoiding the use of Git commands by exporting the repository’s data to a relational database, meaning that data can be queried using sql commands.However, this approach creates multiple tables that are difficult to track, requires knowledge of sql, and can result in complex queries for accomplishing simple tasks.On the performance front, Table 1 compares the time it takes to execute some representative operations using repofs, GitFS, FigFS, the Unix shell, and Git commands.The Ready for use column corresponds to the initialization times of each tool due to pre-processing.The find and cat commands were used in order to get a list of files under a specific commit and access the contents of each one of those files.The corresponding Git commands were git ls-tree and git show respectively.In addition, the ls and git rev-list commands were used in order to count the number of commits.The operations were performed on the Linux kernel repository which is currently 2.09 GiB in size, contains more than 720,000 commits, spans 13 years, and contains more than 38 million lines of code in the most recent commit.The machine the metrics were run, uses the gnu/Linux Ubuntu Server distribution, has 4 cpus, 4gb of ram and 20gb of hard disk storage.The times are the best out of five tries, given in hh:mm:ss, accurate at the largest unit, and the metrics were completed with cold caches.As can be seen, repofs surpasses GitFS’s and FigFS’s performance on most cases measured, on some cases with a dramatic difference.Regarding the better initialization times of FigFS, that difference can be attributed to the fact that FigFS does not list the commits of the repository, instead relying on the user to know the commit hash of the commit they wish to access.Note that repofs, FigFS, and GitFS all avoid the cost of checking out each commit.Git provides an efficient mechanism for collaborative software development.Its storage model can be easily conceptualized as a versioned file system.However, although Git’s command-line interface provides full access to a repository’s contents, using its commands for empirical software engineering research can be a daunting proposition.By providing a file system map of a Git repository, repofs allows the use of known and commonly used Unix shell commands, idioms, and tools on all commits, branches, and tags of an examined Git repository.In addition, the provided directory structure can be readily used by general-purpose gui tools, such as file explorers and file differencing tools.Future work on repofs will involve further time and memory performance optimizations.As is the case with all open source software tools, community adoption, feedback, and, hopefully, contributions, will also guide the direction of the tool’s evolution.
On the other hand, the examination of directories and files of past revisions using Git's commands is cumbersome from a usability perspective. We illustrate these points through motivating examples and discuss the advantages and drawbacks of the proposed approach.
not only a result of the Na and Mg leaching, but also of some degree of metal oxide particles agglomeration, which is revealed by the increase of the band gap energies relative to the fresh samples.Some decrease of the crystallinity of the zeolites after the reaction cycles can also be observed.Furthermore, it can be seen that the activity loss seems to increase slightly with the degree of desilication.This could be a consequence of the progressive increase of the Na and Mg leaching, since neither the MgO particles size nor the crystallinity on the desilicated zeolites after 3 reaction runs are significantly changed when compared to the parent zeolite.Nevertheless, upon oxidative treatment at high temperature, the samples can almost fully or partially recover their initial activity.This clearly supports the previously reported benefit of a regeneration step between reaction runs , which comes from a combination of the coke removal and redistribution of the magnesium oxide onto the supports.The latter can be confirmed by the slightly smaller size of the MgO particles after regeneration than without, despite the very high temperature applied, as well as by the increase of leaching in the second run.However, it is important to note that the capacity of the samples to have their activity restored decreases significantly with the level of desilication, maybe as a consequence of the higher magnesium leaching.On the other hand, as also previously reported for the 5%MgNaY zeolite , fructose selectivity for the alkali-treated zeolites increases in consecutive reaction runs, both without and with regeneration.This becomes especially evident with the increase of the desilication degree as selectivities were initially much lower.The reduction of the basicity due to the Na and Mg leaching associated with the considerable improvement of the textural properties observed after the reaction cycles, as well as coke combustion in the case of the regenerated samples, can explain this more limited further transformation of fructose.Desilication of the parent NaY zeolite at different NaOH concentrations resulted in an increase of both the mesoporous volume and external surface area, while preserving the microporous volume and crystallinity.Addition of magnesium leads only to some decrease of the micropores, this effect being reduced by the desilication.Magnesium-doped alkali-treated zeolites also revealed higher density and strength of basic sites, as well as stronger magnesium-support interaction.Both glucose conversion and fructose yield were remarkably increased over Mg-impregnated desilicated zeolites when compared to the parent zeolite.Enhanced performance was a result of the improved textural and basic properties, as activity was observed to be mainly governed by heterogeneous catalysis.Glucose conversion gradually increases with the degree of desilication, while a maximum fructose yield of 35% is achieved when using low concentration NaOH solutions.High fructose selectivities >87% are obtained for the low-severity desilicated zeolites.Deactivation in consecutive reaction steps without regeneration increased with the desilication degree, as a result of the higher Na and Mg leaching for these samples.Nevertheless, upon high temperature treatment under air, desilicated zeolites can still recover part of their initial activity, especially for lower desilication level.Overall, catalytic data show that low-severity desilicated NaY zeolites could be better supports in combination with magnesium for the glucose 
isomerisation into fructose.They present improved activity and higher fructose productivity than the parent catalyst, and they can still be successfully regenerated.The potential of these catalysts is also higher than that of the previously reported higher magnesium content NaY zeolites.
The impact of desilication on the performance of a series of alkali-treated NaY zeolites impregnated with 5 wt.% of magnesium for glucose isomerisation into fructose has been studied.Desilication at different NaOH concentrations increases the mesoporous volume and external surface area, without compromising microporosity and crystallinity.The observed reduction of the microporous volume due to magnesium impregnation was found to decrease for the alkali-treated zeolites.Higher density and strength of basic sites and stronger magnesium-support interaction were also achieved with the treatment.These improved properties resulted in a significant increase of both glucose conversion and fructose yield on the magnesium-doped desilicated zeolites.Glucose conversion continuously increases with desilication (28–51%), whereas fructose yield passes through a maximum (35%) at low desilication levels.Among the prepared desilicated samples, low-severity alkali-treated zeolites also show lower deactivation in consecutive reaction runs, as well as superior regeneration behaviour.Thus, hierarchical NaY zeolites impregnated with magnesium could be favourably used for glucose isomerisation into fructose if suitable alkaline treatment conditions are selected, with low-severity treated NaY zeolites being the best choice.Higher fructose productivities were achieved for the low-severity desilicated zeolites than for higher magnesium content NaY zeolites reported previously, leading to a lower Mg requirement.
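The reported conversion, yield and selectivity figures are linked by simple arithmetic (selectivity = yield / conversion); the short sketch below works through an illustrative case, where the 40% conversion value is only an example chosen inside the reported 28–51% range.

    # Worked arithmetic (illustrative): fructose selectivity is the fructose
    # yield divided by the glucose conversion, both expressed in percent.
    def selectivity_percent(conversion_pct: float, yield_pct: float) -> float:
        return 100.0 * yield_pct / conversion_pct

    # A 35% fructose yield at ~40% glucose conversion implies ~87.5% selectivity,
    # consistent with the ">87%" reported for the low-severity desilicated zeolites.
    print(selectivity_percent(40.0, 35.0))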
can be used for removal of CR from waste water. The CCD model was effectively applied to examine the interactive effects of the adsorption factors and to optimize the adsorption. The greatest removal efficiency with Pb@ZnFe2O4, at pH 7.0, an adsorbent mass of 250 mg, an initial CR dye concentration of 150 mg L−1 and a contact time of 90 min, is around 96.49 %. The adsorption behaviour of CR onto the prepared Pb@ZnFe2O4 was systematically investigated and was observed to be spontaneous, exothermic and to obey a pseudo-second-order rate equation. Furthermore, the Langmuir isotherm fits the experimental data better than the Freundlich isotherm. The monolayer adsorption capacity of Pb@ZnFe2O4 for CR is 1042 mg g−1. Together with the thermodynamic parameters, our results demonstrate that chemisorption plays the dominant role in the adsorption process. The synthesized Pb@ZnFe2O4 can be used as an efficient and recyclable adsorbent for the removal of CR from aqueous media. Sanjay Attarde: Conceived and designed the experiments; Analyzed and interpreted the data. Ganesh Jethave: Performed the experiments; Wrote the paper. Umesh Fegade: Conceived and designed the experiments; Wrote the paper. Sopan Ingle: Contributed reagents, materials, analysis tools or data. Mehrorang Ghaedi, Mohammad Mehdi Sabzehmeidani: Analyzed and interpreted the data. The authors thank the Council of Scientific & Industrial Research, India, for financial support under a CSIR-SRF Scheme fellowship awarded to the first author, Mr. Ganesh Jethave. The authors declare no conflict of interest. No additional information is available for this paper. Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2019.e02412
In the present research article we explore the synthesis method and adsorption capability of ZnFe oxide nanocomposites using Pb as a dopant. A conventional and simple batch adsorption method is selected and optimized. Pb@ZnFe2O4 NCs were fabricated by a facile co-precipitation method and characterized by FESEM, XRD, IR and EDX. The removal of the dye was monitored by the UV method. An outstanding result is obtained, as an adsorption capacity of 1042 mg g−1 represents a more significant performance than currently available benchmark adsorbents. The optimized parameters, pH 7.1, adsorbent mass 50 mg, initial dye concentration 150 mg/L and agitation time 90 min, result in 96.49 % removal of CR (Congo red) dye. A CCD (central composite design) is applied to evaluate the role of the adsorption variables. Based on its excellent performance, cost effectiveness, facile fabrication and large surface area, Pb@ZnFe2O4 has considerable potential for the manufacture of cost-effective and efficient adsorbents for environmental applications.
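As a hedged sketch of the isotherm said to describe the data, the code below evaluates the Langmuir form using the reported monolayer capacity; the Langmuir constant K_L is an illustrative placeholder, not a value fitted in the study, and the removal-percentage helper simply restates the definition behind the 96.49 % figure.

    # Minimal sketch (illustrative): the Langmuir isotherm reported to describe
    # CR adsorption on Pb@ZnFe2O4, q_e = q_max * K_L * C_e / (1 + K_L * C_e).
    # q_max is taken from the reported monolayer capacity; K_L is a placeholder.
    def langmuir_qe(ce_mg_per_l: float, q_max: float = 1042.0, k_l: float = 0.05) -> float:
        return q_max * k_l * ce_mg_per_l / (1.0 + k_l * ce_mg_per_l)

    def removal_percent(c0: float, ce: float) -> float:
        # Percentage removal from initial (c0) and equilibrium (ce) concentrations
        return 100.0 * (c0 - ce) / c0

    print(langmuir_qe(150.0))           # predicted uptake at 150 mg/L, in mg/g
    print(removal_percent(150.0, 5.3))  # ~96.5% removal, as reported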
We propose a new method for training neural networks online in a bandit setting. Similar to prior work, we model the uncertainty only in the last layer of the network, treating the rest of the network as a feature extractor. This allows us to successfully balance between exploration and exploitation due to the efficient, closed-form uncertainty estimates available for linear models. To train the rest of the network, we take advantage of the posterior we have over the last layer, optimizing over all values in the last layer distribution weighted by probability. We derive a closed-form, differentiable approximation to this objective and show empirically over various models and datasets that training the rest of the network in this fashion leads to both better online and offline performance when compared to other methods.
This paper proposes a new method for neural network learning in online bandit settings by marginalizing over the last layer.
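A minimal sketch of the kind of closed-form last-layer uncertainty the method relies on is given below. It implements a standard Bayesian linear regression over last-layer features with a Thompson-style weight draw; this is a generic neural-linear construction, not the authors' exact training objective.

    # Minimal neural-linear sketch (standard construction, not the authors'
    # exact method): Bayesian linear regression over last-layer features phi(x)
    # gives a closed-form posterior used to balance exploration and exploitation.
    import numpy as np

    class BayesianLastLayer:
        def __init__(self, dim: int, prior_var: float = 1.0, noise_var: float = 1.0):
            self.precision = np.eye(dim) / prior_var  # posterior precision
            self.b = np.zeros(dim)
            self.noise_var = noise_var

        def update(self, phi: np.ndarray, reward: float) -> None:
            # Conjugate Gaussian update with an observed (features, reward) pair
            self.precision += np.outer(phi, phi) / self.noise_var
            self.b += phi * reward / self.noise_var

        def sample_weights(self, rng: np.random.Generator) -> np.ndarray:
            cov = np.linalg.inv(self.precision)
            mean = cov @ self.b
            return rng.multivariate_normal(mean, cov)

    rng = np.random.default_rng(0)
    layer = BayesianLastLayer(dim=3)
    layer.update(np.array([1.0, 0.5, -0.2]), reward=1.0)
    w = layer.sample_weights(rng)           # Thompson-style draw over the last layer
    print(w @ np.array([1.0, 0.5, -0.2]))   # sampled value estimate for this context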
Antisense oligonucleotides are a growing class of versatile biomolecules, which have garnered much attention in the past decade as a mature and attractive platform for therapeutic drug development.Nearly 30 years of exhaustive research into antisense technology have advanced the platform into a rapid development stage for the treatment of a broad range of diseases, including severe and rare genetic disorders, cancers, cardiovascular and metabolic illnesses, and infections.1–3,ASOs are considered the most direct therapeutic strategy to hybridize target RNA, and, as such, to no surprise, ASOs compose the majority of investigational new drug submissions for nucleic-acid-based therapeutics.4,Significant advancements in ASO chemistries have fostered a wide range of modifications with an improved understanding of ASO pharmacology, pharmacokinetics, and toxicology, which collectively have led to widespread use of ASOs within broadened clinical pipelines.5–7,ASOs undergo Watson-Crick hybridization to bind to cognate RNA sequences, which could modulate gene expression or translation of proteins that are in question.8–10,ASOs function through a wide variety of mechanisms, such as the RNase H degradation pathway to achieve the desired pharmacological effect.11,Pivotal modifications to backbone and base pair chemistries have included the use of phosphorothioate and 2′-MOE-ASOs, which increase overall tolerance and potency.PS modifications increase nuclease resistance and extend circulating half-life, whereas 2′-MOE-ASOs have ribose sugar modifications at the 5′ and 3′ termini, which increase resistance to exonuclease cleavage and enhance binding to target mRNA.Lastly, and as one of the most compelling successes in oligonucleotide drug development, triantennary N-acetyl galactosamine-conjugated ASOs allow for efficient delivery, with high affinity to asialoglycoprotein receptors for liver targeting.12,Remarkably over 20- to 30-fold improved potency of GalNac3-ASO conjugates compared to unconjugated ASOs have been observed in vivo.13,The PK properties of PS and 2′-MOE-ASOs are widely comparable, highly predictable, extrapolatable, and well documented across species in preclinical and clinical findings.14–18,Nevertheless, because ASO therapies are largely in the development stage, only a limited number of reported drug-drug interaction studies between unconjugated ASOs and other drugs have been reported, and no accounts for DDI studies with GalNAc3-conjugated ASOs exist.19–22,Published findings of clinical DDI studies have investigated the potential interactions of unconjugated ASOs with co-medications that are often used in the disease populations under study.These co-medications have included simvastatin, ezetimibe, rosiglitazone, glipizide, metformin, cisplatin, and gemcitabine, which collectively utilize diverse clearance routes, including cytochrome P450 3A4, glucoronidation, CYP2C8/C9, CYP2C9/C8, renal, and nucleoside kinases.The results of these studies have shown no reported cases of known clinical interactions between unconjugated ASOs and co-medications.Currently, there are no specific regulatory guidances on clinical pharmacology studies for nucleic-acid-based therapeutics, and the DDI panel recommendations are similar to those for small molecules, which include in vitro induction and inhibition screens for the major CYP enzymes and substrate and inhibition investigations for the major drug transporters to evaluate the need for in vivo studies.23,24,DDIs can occur when one drug alters the 
uptake or metabolism of a co-administered drug, leading to altered PKs and pharmacology.Drugs that are substrates or inhibitors of these transporters or inducers or inhibitors of the major CYP enzymes may cause adverse drug reactions if co-medications, foods, or supplements are also substrates or inhibitors of the same transporters or inducers or inhibitors of CYPs.25,Within this research, an extensive investigation across a diverse panel of ASOs is conducted to evaluate a total of four distinctive ASOs, including a GalNAc3-conjugated-ASO.CYP1A2, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4 inhibition potential using human primary hepatocytes and CYP1A2, CYP2B6, and CYP3A4 induction potential at both the enzyme activity level and the mRNA level were assessed.Additionally, the cellular level exposure of each respective ASO in the hepatocytes was also evaluated under the same conditions used in the inhibition experiments to ensure adequate uptake.For transporter studies, the potential for ASOs as substrates or inhibitors of major drug transporters was also examined, including organic anion transporters, organic cation transporters, organic anion transporting polypeptides, breast cancer resistance protein, P-glycoprotein, and the bile salt export pump.These in vitro CYP and cell-based transporter assays provide mechanistic insights into the lack of cytochrome-P450-related DDI as well as the lack of transporter-related drug interactions with the 2′-MOE-ASOs, providing better confidence for the safety profiles of ASOs.The incubation conditions were optimized with acceptable dosing concentrations and respective signal levels for detection.Incubation times of 45 or 90 min were selected across isoforms using known probe substrates, and the same time interval was applied to both the positive control and antisense drug test articles.Due to the slow metabolism process of ASOs, no major differences were observed between 45 and 90 min based on pilot experiments.Two studies were done at different incubation times and results were reported.For a single ASO, the study was done at a different date using a different concentration unit, μg/mL, as opposed to μM.Overall, a lower concentration was used because the clinical dose of the drug was lower, 12 mg by intrathecal route, and the expected systemic exposure is much less in comparison to subcutaneous delivery.The half maximal inhibitory concentration values of three 2′-MOE-modified ASOs, ISIS-304801, ISIS-396443, and ISIS-420915, for CYP1A2, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4 were all greater than 100 μM, 100 μg/mL, and 100 μM, respectively.Similar findings were also observed using a GalNAc3-conjugated 2′-MOE-ASO, ISIS-681257.The IC50 values of the positive controls ranged from 0.00108 to 1.56 μM, as expected for these CYP enzymes.The IC50 data are presented in Table 2; data for the inhibition of CYP enzyme activities by the positive controls are presented in the top row of Figures 1A
Antisense oligonucleotides are metabolized by nucleases and drug interactions with small drug molecules at either the cytochrome P450 (CYP) enzyme or transporter levels have not been observed to date.Herein, a comprehensive in vitro assessment of the drug-drug interaction (DDI) potential was carried out with four 2′-O-(2-methoxyethyl)-modified antisense oligonucleotides (2′-MOE-ASOs), including a single triantennary N-acetyl galactosamine (GalNAc3)-conjugated ASO.The inhibition on CYP1A2, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP2E1, and CYP3A4 and induction on CYP1A2, CYP2B6, and CYP3A4 were investigated in cryopreserved hepatocytes using up to 100 μM of each ASO.In addition, transporter interaction studies were conducted with nine major transporters per recommendations from regulatory guidances and included three hepatic uptake transporters, organic cation transporter 1 (OCT1), organic anion transporting polypeptide 1B1 (OATP1B1), and OATP1B3; three renal uptake transporters, organic anion transporter 1 (OAT1), OAT3, and OCT2; and three efflux transporters, P-glycoprotein (P-gp), breast cancer resistance protein (BCRP), and bile salt export pump (BSEP).Based on these findings, the unconjugated and GalNAc3-conjugated 2′-MOE-ASOs would have no or minimal DDI with small drug molecules via any major CYP enzyme or drug transporters at clinically relevant exposures.
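To make the IC50 determinations above concrete, the sketch below shows one way such values could be estimated from concentration-response data with a four-parameter (Hill-type) inhibition model. It is an illustrative reconstruction only, not the study's actual analysis pipeline, and the activity values, parameter bounds, and variable names are hypothetical.

```python
# Minimal sketch (not the study's actual analysis pipeline): estimating an IC50
# from CYP probe-substrate activity measured at several inhibitor concentrations.
# The data values below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill, top, bottom):
    """Four-parameter logistic: remaining activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical % remaining CYP activity vs. inhibitor concentration (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
activity = np.array([99, 97, 92, 80, 58, 35, 18, 9, 5], dtype=float)

popt, _ = curve_fit(
    hill_inhibition, conc, activity,
    p0=[1.0, 1.0, 100.0, 0.0],          # initial guesses: IC50, Hill slope, top, bottom
    bounds=([1e-4, 0.1, 50.0, -10.0], [1e4, 5.0, 120.0, 50.0]),
)
ic50, hill, top, bottom = popt
print(f"Estimated IC50 = {ic50:.2f} uM (Hill slope {hill:.2f})")
# If activity stays near 100% even at the top test concentration, the fit is
# unconstrained and the result is reported as IC50 > 100 uM, as for the ASOs above.
```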
reference inhibitor, and reference inhibitors were used at concentrations of ≥10X the IC50 value, which corresponded to ≥85% inhibition. The inhibitory effects of ISIS 304801, 396443, 420915, and 681257 on the transport of substrate by OAT1, OAT3, OCT1, OCT2, OATP1B1, OATP1B3, BCRP, P-gp, and BSEP were investigated. The transport of substrate in the presence of the vehicle control was compared to the uptake in the presence of the test ASO or a known reference inhibitor. As a positive control reference inhibitor, probenecid was shown to inhibit up to 85.7 ± 0.843% and 98.3 ± 6.07% of OAT1- and OAT3-mediated transport of p-aminohippurate, respectively. Similarly, quinidine inhibited 91.2 ± 1.57% of OCT1-mediated transport of MPP+ and 98.5 ± 1.34% of OCT2-mediated transport of metformin. Rifampicin inhibited 98.8 ± 1.09%, 99.0 ± 0.955%, and 98.0 ± 0.382% of OATP1B1-, OATP1B3-, and BSEP-mediated transport of estradiol-17β-D-glucuronide, CCK-8, and taurocholate, respectively. Finally, Ko143 inhibited 90.4 ± 1.89% of BCRP-mediated transport of prazosin, and elacridar inhibited 90.2 ± 1.42% of P-gp-mediated transport of quinidine. Using the unconjugated 2′-MOE-ASOs at a concentration of 100 μM, the mean % inhibition across all transporters evaluated ranged from −21.0% to 22.3%, −19.4% to 19.0%, and −16.2% to 24.2% for the three unconjugated ASOs, respectively. Similar results were obtained for the GalNAc3-conjugated ASO, ISIS 681257, for which the mean % inhibition ranged from −38.3% to 21.3% across all transporters. Very slight enhancement of probe substrate uptake in the presence of 100 μM ISIS 304801 or 396443 was observed for BSEP: 21.0% and 8.99%, respectively. Minimal enhancement of probe substrate uptake in the presence of 100 μM ISIS 396443 was observed for OAT1. Lastly, minimal enhancement of probe substrate uptake in the presence of 100 μM ISIS 681257 was observed for OAT3 and OATP1B3. The absolute magnitude of these apparent enhancements is smaller than the magnitude of enhancement or inhibition for several of the other transporters tested and is most likely due to random chance. Because the concentrations of these ASOs significantly exceed peak plasma concentrations at relevant clinical doses, it is highly unlikely that this marginal increase in transport is clinically relevant. ISIS 304801, 396443, 420915, and 681257 are therefore not considered inhibitors of any of BCRP, P-gp, OAT1, OAT3, OCT1, OCT2, OATP1B1, OATP1B3, or BSEP. It is well established that the major CYP enzymes and drug transporters make a significant contribution to the PK and pharmacodynamic properties of small molecules, yet the interactions of unconjugated ASOs with these enzymes and drug transporters are not as well characterized, with limited DDI data and no existing investigations of GalNAc3-ASO conjugates. Antisense drugs may be used within very diverse treatment areas, and, as such, an ASO therapy requiring the combination of several concomitant medications as a standard of care may be encountered frequently. The modulation of CYP enzymes and transporters by DDIs has been shown to define systemic and tissue concentrations of small molecules, leading to inter-individual treatment variability and even reports of increased toxicity and mortality for several molecules.26 Therefore, a complete knowledge of the enzymatic pathway of ASOs and of their potential interactions with CYP enzymes and drug transporters is of major importance to ensure treatment safety and efficacy. Because ASOs are drastically different from small molecules in their physico-chemical
properties, such as molecular weight, number of hydrogen bonds, and disposition, their ability to interact with CYP enzymes and drug transporters is significantly limited. Similarly, the sequence and secondary structure of ASOs are unlikely to make a difference for ASO-CYP or ASO-transporter interactions at the protein level, because the three-dimensional structures of ASOs are so similar among different sequences and modifications. At the RNA level, sequence or secondary structure could theoretically make a difference if the sequence of an ASO happens to hybridize with the RNA of a CYP or transporter in the nucleus. However, the probability of a perfect match and hybridization with the RNAs of CYPs or transporters, on top of a perfect match with the target RNA of the disease, is remote for 20-mer ASOs with careful designs. ASOs are readily taken up into numerous types of liver cells, including parenchymal, non-parenchymal, and sinusoidal endothelial cells, and have long been known to be metabolized into shortmers by endonucleases and exonucleases without being subjected to metabolism by CYP enzymes. More recently, GalNAc3-ASO-conjugation strategies have offered hepatic-specific internalization of ASOs in a target-mediated disposition process. The major pathways for cellular uptake of antisense oligonucleotides are much different from those of small molecules and are presumed to proceed by endocytosis, involving interactions with proteins on the cell surface. ASOs modified with phosphorothioate linkages bind to proteins at the cell surface and enter cells as those proteins are internalized by endocytosis or membrane turnover. Recently published data have shown that the asialoglycoprotein receptor, along with other cell surface proteins, is involved in the uptake of GalNAc-conjugated and unconjugated ASOs. These receptors have been shown to be expressed in human hepatocytes in vitro.27 Because CYP enzymes are membrane-bound and located in the endoplasmic reticulum, the CYP inhibition potentials may be impacted by subcellular concentrations during the probe substrate incubation and by compartmentalization after the drug enters the cells. The intention of the in vitro CYP- and/or transporter-mediated DDI studies was to evaluate the potential for DDI in vivo under exceedingly high exposure scenarios. Incubation concentrations for three compounds included a high concentration of 100 μM, which is several fold higher than the projected liver exposures in humans or monkeys at clinically relevant doses.14,28,29 ASOs accumulate extensively in tissues, which is where any DDIs would be expected to take place at these high concentrations. Similarly, for transporter substrate evaluation, a 10-μM concentration was selected to cover the
Several investigations were conducted to describe the DDI potential of a 2′-MOE-ASO conjugated to a high-affinity ligand for hepatocyte-specific asialoglycoprotein receptors. No significant inhibition (half maximal inhibitory concentration [IC50] > 100 μM) or induction was observed based on either enzymatic phenotype or mRNA levels. Additionally, none of the four ASOs showed meaningful inhibition of any of the nine transporters tested, with the mean percent inhibition ranging from −38.3% to 24.2% at 100 μM ASO.
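As a small illustration of how the percent-inhibition figures quoted above are typically derived, the following hedged sketch computes inhibition of transporter-mediated uptake relative to a vehicle control; the helper name and the example uptake values are hypothetical and not taken from the study.

```python
# Sketch of the percent-inhibition calculation typically used in transporter assays
# (illustrative only; values are hypothetical and not taken from the study).
def percent_inhibition(uptake_with_test, uptake_vehicle, passive_uptake=0.0):
    """Percent inhibition of transporter-mediated uptake relative to vehicle control.

    Transporter-mediated uptake is the total uptake minus any passive background
    (e.g., uptake in non-transfected control cells). Negative results indicate
    apparent enhancement rather than inhibition.
    """
    mediated_test = uptake_with_test - passive_uptake
    mediated_vehicle = uptake_vehicle - passive_uptake
    return 100.0 * (1.0 - mediated_test / mediated_vehicle)

# Example: hypothetical probe-substrate uptake values (pmol/mg protein/min)
print(percent_inhibition(uptake_with_test=4.1, uptake_vehicle=4.0))  # -2.5% (slight apparent enhancement)
print(percent_inhibition(uptake_with_test=0.4, uptake_vehicle=4.0))  # 90.0% (reference-inhibitor-like)
```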
polymersomes and the distributions of free and internalised polymersomes were identical. If the internalisation function was saturating, then distributions in the internalisation threshold and in the number of binding sites n both showed an increased uptake with increasing standard deviation. The mean size of the internalised polymersomes also differed from the mean size of the free polymersomes. The distributions used here should be considered as distributions of polymersomes around 200 nm, the size of polymersomes used in this study. Much smaller or larger polymersomes will be affected by other physical aspects of endocytosis. For example, a reduction in gold nanoparticle uptake by HeLa cells below approximately 40 nm and above 50 nm has been reported.35 In our model, the reduced uptake below 40 nm could correspond to internalisation being a threshold effect in which a minimum number of receptor-polymersome bonds is needed before internalisation can occur. Nanoparticles smaller than the threshold might not bind to enough receptors for internalisation to be completed. The amount of therapeutic load the cells are exposed to will depend on the size and number of the polymersomes. Given experimental data on the dependence of uptake on polymersome size,14 the relationships between size and the model parameters could be estimated. From this, the model can predict the optimum size of polymersomes to be used in treatment and how much the variability in size of a sample of polymersomes will alter the uptake and encapsulated drug delivery. In the mathematical model, receptors are recycled to the surface at a fixed rate. Regulation of receptor recycling and production probably occurs biologically and may depend on the number of internalised polymersomes. However, the model achieves a good fit to the experimental data, so regulation of receptors may not be an important factor for the polymersomes and receptors considered here. Receptor regulation is, however, considered to be a factor in the uptake of other nanoparticles21 and will feature in our ongoing work on this model.
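The receptor binding, internalisation, and recycling processes discussed above can be illustrated with a minimal system of ordinary differential equations. The sketch below shows that general class of model only; it is not the authors' published model, and the rate constants, the fixed number of receptors engaged per polymersome, and the initial conditions are arbitrary assumptions for illustration.

```python
# Minimal sketch of a receptor-binding / internalisation / recycling ODE model of
# the kind discussed above. This is NOT the authors' published model; all rate
# constants and initial conditions are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1e-3, 1e-2   # binding / unbinding rates (arbitrary units)
k_int = 5e-3               # internalisation rate of receptor-bound polymersomes
k_rec = 1e-3               # receptor recycling rate back to the cell surface
n_sites = 20               # receptors engaged per internalised polymersome (assumption)

def rhs(t, y):
    P_free, R_surf, C_bound, P_internal, R_inside = y
    bind = k_on * P_free * R_surf
    unbind = k_off * C_bound
    internalise = k_int * C_bound
    recycle = k_rec * R_inside
    return [
        -bind + unbind,                                # free polymersomes near the cell
        -n_sites * bind + n_sites * unbind + recycle,  # free surface receptors
        bind - unbind - internalise,                   # receptor-bound polymersomes
        internalise,                                   # internalised polymersomes
        n_sites * internalise - recycle,               # internalised receptors awaiting recycling
    ]

y0 = [1e4, 5e3, 0.0, 0.0, 0.0]   # initial free polymersomes, surface receptors, complexes, ...
sol = solve_ivp(rhs, (0.0, 3600.0), y0, t_eval=np.linspace(0, 3600, 50))
print(f"Internalised polymersomes after 1 h: {sol.y[3, -1]:.0f}")
```

Varying n_sites or replacing the linear internalisation term with a thresholded one is the kind of change that would express the size-dependent effects discussed above.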
This study is motivated by understanding and controlling the key physical properties underlying internalisation of nano drug delivery.We consider the internalisation of specific nanometre size delivery vehicles, comprised of self-assembling amphiphilic block copolymers, called polymersomes that have the potential to specifically deliver anticancer therapeutics to tumour cells.The possible benefits of targeted polymersome drug delivery include reduced off-target toxic effects in healthy tissue and increased drug uptake by diseased tissue.Through a combination of in vitro experimentation and mathematical modelling, we develop a validated model of nanoparticle uptake by cells via the clathrin-mediated endocytotic pathway, incorporating receptor binding, clustering and recycling.The model predicts how the characteristics of receptor targeting, and the size and concentration of polymersomes alter uptake by tumour cells.The number of receptors per cell was identified as being the dominant mechanism accounting for the difference between cell types in polymersome uptake rate.From the Clinical Editor: This article reports on a validated model developed through a combination of in vitro experimentation and mathematical modeling of nanoparticle uptake by cells via the clathrin-mediated endocytotic pathway.The model incorporates receptor binding, clustering, and recycling and predicts how the characteristics of receptor targeting, the size and concentration alter polymersome uptake by cancer cells.
Over the past two decades, researchers and practitioners in earth sciences, ecology, and cognate disciplines have been creating innovations in environmental monitoring technologies that combine Information and Communication Technologies with conventional monitoring technologies, and Environmental Sensor Networks.These technologies, which we collectively label “Smart Earth,” have proliferated due to the rapid decrease in cost of cloud-based computing and innovations in Machine to Machine infrastructure, enabling unprecedented environmental management applications.Simply put, Smart Earth is the set of environmental applications of the Internet of Things, and is thus analogous to the widely discussed “Smart City,”, but articulated across a much wider range of ecosystems and land use types.Smart Earth technologies enable terabytes of environmental data to be derived from terrestrial, aquatic, and aerial sensors, satellites, and monitoring devices, relying on a rapidly diversifying set of sources—including “wearables” and biotelemetric technologies devised for humans, animals, and even insects.New cloud-based Web platforms have been created that enable the aggregation, analysis, and real-time display of these unprecedented streams of environmental data.Scientists are also applying innovations in AI, Big Data analytics, machine learning, 3D object-recognition algorithms, and genetic learning to the study and administration of ecological processes.Collectively, these developments have dramatically increased scientists’ ability to assess spatiotemporal changes in abiotic conditions as well as biotic communities.We contend that the volume, integration, accessibility, and timeliness of the data provided by Smart Earth technologies potentially creates the conditions for significant changes in environmental governance.To date, the majority of research on this topic has focused on the potential implications for conservation and waste reduction, pollution mitigation, mapping environmental degradation, geosecurity, and disaster management.However, although a few scholars have engaged with questions of the implications of these technologies for environmental governance, this issue remains relatively under-studied from a multi-disciplinary perspective.This paper seeks to address this gap.Our paper begins from the premise that Smart Earth technologies have the potential to disrupt existing modes of environmental governance.Here, environmental governance is defined from an analytical perspective as the set of social actors and institutions, as well as data-gathering and decision-making processes, engaged in environmental decision-making.Our definition is broadly aligned with social scientists engaged in the study of environmental governance at a global scale, notably those who study the institutional and epistemological realignments of environmental governance globally.Our analysis of potential pathways for innovation in environmental governance coupled with Smart Earth technologies is related to and inflected by, but distinct from, governance trends such as the partial redistribution of decision-making power from state to non-state actors, and the rescaling of governance above and below the nation-state.The purpose of this meta-review is to provide a synthesis of key issues and critiques that Smart Earth poses for environmental governance.Smart Earth enables a series of shifts: the time-space compression of data availability and decision-making; the multiplication of modalities and agencies of environmental 
sensing; the proliferation of new environmental governance actors; and, potentially, a much higher degree of transparency in data collection, accessibility, and integration. Taken together, these innovations create the conditions for potentially significant transformations in environmental governance. Consider, as an example, Sustainability Standards Organisations (SSOs). New forms of access to real-time, continuous information on environmental data from “virtual” monitoring platforms are challenging the “static, limited, and closed ‘analog’ model of auditing” conventionally employed by SSOs. In the past, SSO audits were conducted through brief, intermittent field visits by small teams of auditors and experts. Smart Earth technology creates the potential for continuous monitoring and assessment of the validity of sustainability claims. This in turn enables the emergence of private regulatory bodies and real-time auditing processes which will drive changes in SSOs. The SSO example illustrates the co-evolution of technology and governance occurring across different environmental domains and scientific disciplines, including established fields such as landscape ecology and geography, as well as emergent sub-fields such as the environmental digital humanities, animal biotelemetry, and citizen sensing. Our paper presents a systematic meta-review of this literature. Our intention in conducting this review is to identify the key issues that Smart Earth poses for environmental governance. To conduct this meta-review, as detailed in Section 2, we surveyed the scholarly literature across the full range of academic disciplines to create a database of 3187 articles. In Section 4, we present key issues and critiques relevant to environmental governance debates, including: data; real-time regulation; enhanced predictability, particularly in situations where data was previously unavailable; the technical and ethical implications of open data; and the evolution of citizen engagement through new modalities such as citizen sensing, which incorporate new variables that extend our ability to “sense” the environment. Section 5 concludes by offering suggestions for future research directions regarding environmental governance in a Smart Earth world. Our analysis presents the results of a meta-review of the academic literature on Smart Earth. We conducted a manual search of 17 journals spanning a range of disciplines including computer science, environmental studies, ecology, eco-informatics, and social studies of science. Our manual search included the following journals: Ambio, Annual Review of Environment and Resources, Ecological Informatics, Environmental Humanities, Environment and Planning A, Environment and Planning D, Journal of Applied Ecology, Big Data and Society, Annals of the American Association of Geographers, Global Environmental Change, Global Environmental Politics, International Journal of Digital Earth, PNAS, Nature, Science, Social Studies of Science, and Trends in Ecology and Evolution. Through this review, we identified the keywords most frequently used with respect to Smart Earth, as well as commonly used terms related to earth processes relevant to Smart Earth topics: remote sensing, eco-informatics, Big Data, biomonitoring, citizen sensing, cloud computing, data visualization, fiber optic, Internet
We present a meta-review of academic research on Smart Earth, covering 3187 articles across the full range of academic disciplines from 1997 to 2017, ranging from ecological informatics to the digital humanities. We then offer a critical perspective on potential pathways for evolution in environmental governance frameworks, exploring five key Smart Earth issues relevant to environmental governance: data; real-time regulation; predictive management; open source; and citizen sensing. We conclude by offering suggestions for future research directions and trans-disciplinary conversations about environmental governance in a Smart Earth world.
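As a rough illustration of the keyword-identification step in the meta-review methodology described above, the sketch below tallies how often candidate Smart Earth keywords appear in a database of article titles and abstracts. The file name, column names, and keyword list are hypothetical and do not reproduce the authors' actual workflow.

```python
# Illustrative sketch (not the authors' actual workflow): tallying how often
# Smart Earth keywords appear in a database of article titles/abstracts.
# The CSV file name and column names are hypothetical.
import csv
from collections import Counter

KEYWORDS = [
    "remote sensing", "eco-informatics", "big data", "biomonitoring",
    "citizen sensing", "cloud computing", "data visualization",
    "fiber optic", "internet of things", "sensor network",
]

counts = Counter()
with open("smart_earth_articles.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        text = f"{row.get('title', '')} {row.get('abstract', '')}".lower()
        for kw in KEYWORDS:
            if kw in text:
                counts[kw] += 1   # count each article at most once per keyword

for kw, n in counts.most_common():
    print(f"{kw}: {n} articles")
```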
dimensions of Smart Earth governance. For example, Smart Earth creates not only new ways of sensing and administering environments, but also new categories of environmental assets. Some scholars have been concerned by the possibility that Smart Earth technologies may be harnessed to increase the efficiency of resource extraction, rather than serve environmental conservation purposes. Further analysis of these issues could usefully draw upon debates over evolving environmental governance frameworks, including debates over Schwab’s “Fourth Industrial Revolution” as well as the politics of adaptive management and resilience, multi-scalar environmental governance, and network fragility. Questions of ethics also merit more scrutiny. Smart Earth governance implies a shift not only from “government to governance”, but also from “manual to automated” eco-governance. Emergent regimes of state-sponsored surveillance consolidated around environmental big data – such as the Smart Oceans™ project noted above – are mobilizing in support of security objectives rather than equitable access or efficiency. Smart Earth also raises fundamental issues of socio-environmental justice. Elderly residents and those unable to own a smartphone face diminished opportunities to participate in Smart Earth governance since they “do not register as digital signals”. Such social inequalities risk becoming entrenched through iterative forms of Smart Earth governance. As Leszczynski explains: “Algorithmic governmentality cannot divest itself of actual realities of socio-spatial stratification to which the derivative is theoretically indifferent”. Last but not least, issues of Smart Earth-generated e-waste promise to be major problems in the coming years, and case studies of these issues are scarce to date. Smart data will be derived from an expanded array of sensors, continuously sampling the physical world; its processing will in turn require real-time big-data analytics with greater energy demands. Innovation in batteries, power-saving technologies, and backups will be increasingly essential to the functioning and performance of “actually existing” smart grids, app-based conservation efforts, and the like. E-waste will also pose new ecological problems for system managers and government institutions. There are considerable problems inherent in the Smart Earth proliferation of screen-based technologies, owing to their material externalities. A 2015 report by the Natural Resources Defense Council found that the idle-load electricity demands of digital consumer electronics accounted for 51 percent of an average American household’s energy budget. An earlier report (2012) noted that 85 percent of electronics are now thrown out rather than recycled, leading to some calls for North Americans to adopt the radio in place of the television, as the former creates substantially lower ecological costs. However, our review did not identify a single academic publication quantifying the e-waste associated with Smart Earth—a significant gap. Given these concerns, Galaz and Mouazen are well justified in calling for a code of conduct that allows citizens and institutions an opportunity to take stock of the proliferation of new social relationships and ethical challenges created by Smart Earth forms of governance. Data-sharing policies and ecological measurement standards, key mechanisms by which Smart infrastructure attains the obscurity its planners routinely “seek”, require new forms of visibility in public education and debate. Jasanoff’s demand for
“technologies of humility” continues to resonate as a forceful appeal for new kinds of mergers between “the ‘can do’ orientation of science and engineering” and “the ‘should do’ questions of ethical and political analysis”. In this framing, ethics is not an “afterthought” or an addition to design but a crucial input across the life cycle of a given system—particularly one as ambitious and far-reaching as Smart Earth.
Environmental governance has the potential to be significantly transformed by Smart Earth technologies, which deploy enhanced environmental monitoring via combinations of information and communication technologies (ICT), conventional monitoring technologies (e.g. remote sensing), and Internet of Things (IoT) applications (e.g. Environmental Sensor Networks (ESNs)). This paper presents a systematic meta-review of Smart Earth scholarship, focusing our analysis on the potential implications and pitfalls of Smart Earth technologies for environmental governance.
planar substrates the P3HT arc is diffuse, with the majority of scattering in the out-of-plane direction. After annealing the arc becomes narrower, indicating increased uniformity in the distance between polymer chains due to improvements in polymer crystallinity. The intensity is at a maximum perpendicular to the substrate, showing that the majority of crystallite planes are orientated in this direction. While this is the preferred orientation for P3HT, it is also worth noting that the intensity at all points along the arc is increased, showing that many crystallites have an orientation between the vertical and horizontal, i.e. a mixed orientation. Although there is an increase in crystallinity, there is little change in the orientation between pristine and annealed samples. In contrast, P3HT on planar ZnO layers is seen to orientate with a strong in-plane character. Nanorod arrays provide a template for this orientation in two dimensions, allowing for in-plane orientation both to the seed layer and to the nanorod walls. This may result in the observed higher degree of mixed orientation in nanorod samples. To further probe the influence of annealing in our nanostructured arrays, a series of devices was prepared using the optimized 3:2 P3HT:IC60BA ratio and annealed at 150 °C for various time periods. The results obtained, Fig. 6, show an improvement in PCE, attributed to the improvements in JSC during the initial 10-min anneal. Extending the annealing time, up to 50 min, has little overall effect on the device performance, with no systematic changes observed. After melt infiltration the blend is rapidly quenched, so the active layer will likely be less crystalline; thermal annealing is therefore necessary to induce crystallization of the P3HT and aggregation of the IC60BA. The high-aspect-ratio channels formed in the nanorod arrays may mean that the optimum active layer microstructure can be achieved rapidly, as these channels are typically laterally separated by < 50 nm compared with the active layer thickness of ~ 500 nm. To emphasize the importance of controlling the composition of the active layer in addition to the post-deposition annealing, we highlight the remarkable changes in performance noted for the non-optimized 1:1 blend composition, Table 1, using comparable device structures to those shown in Fig. 1c–d.
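For orientation, the PCE values discussed here follow the standard definition for solar cells, which relates the short-circuit current density, open-circuit voltage, and fill factor to the incident power; this is the general relation rather than a device-specific result from this work:

```latex
% Standard definition of power conversion efficiency (general relation, not a
% device-specific result of this work):
\mathrm{PCE} = \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}
             = \frac{J_{\mathrm{SC}}\, V_{\mathrm{OC}}\, \mathrm{FF}}{P_{\mathrm{in}}}
```

Under this relation, the JSC gains produced by the initial anneal translate directly into the PCE improvement reported above, provided VOC and FF are not degraded.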
Performance enhancements attributed to thermal annealing in planar devices have been reported in detail by Kippelen et al.In our case penetration of nanorods into the active layer and their surface energies may induce segregation of P3HT to the vertical and horizontal oxide interfaces, which results in the optimum photoactive layer composition for our non-planar devices.Here we have demonstrated the effective implementation of nanostructured ZnO cathodes into organic bulk heterojunction photovoltaic devices using a solution-processing route for electrode fabrication followed by a simple infiltration process using commercially available materials.In comparison with planar analogues we observe a significant improvement in measured device performance, attributed to the increased surface area of electron accepting material.We highlight the importance of thermal annealing of the active layer to obtain optimal active layer morphologies.Additionally, we have seen a deviation from the composition of the active layer compared with planar devices, attributed to the ZnO acting as a junction for exciton separation thus reducing the quantity of fullerene required.The increased content of P3HT in our nanostructured devices is likely to contribute to the observed improvements owing to greater absorption over the 450–650 nm range resulting in increased photogeneration compared with planar devices.While previous reports have highlighted the limitations of oxide:polymer active layers here we have shown that solution processed oxide nanostructures can be readily and controllably prepared and effectively implemented in improved photovoltaic devices.
Here we report a simple, solution based processing route for the formation of large surface area electrodes resulting in improved organic photovoltaic devices when compared with conventional planar electrodes.The nanostructured electrode arrays are formed using hydrothermally grown ZnO nanorods, subsequently infiltrated with blends of poly(3-hexylthiophene-2,5-diyl) (P3HT) and indene-C60 bisadduct (IC60BA) as photoactive materials.This well studied organic photoactive blend allows the composition/processing/performance relationships to be elucidated.Using simple solution based processing the resultant nanostructured devices exhibited a maximum power conversion efficiency (PCE) of 2.5% compared with the best planar analogues having a PCE of around 1%.We provide detailed structural, optical and electrical characterization of the nanorod arrays, active layers and completed devices giving an insight into the influence of composition and processing on performance.Devices were fabricated in the desirable inverse geometry, allowing oxidation resistant high work-function top electrodes to be used and importantly to support the hydrothermal growth of nanorods on the bottom electrode — all processing was carried out under ambient conditions and without the insertion of a hole transport layer below the anode.The nanorods were successfully filled with the active layer materials by carrying out a brief melt processing of a spin-cast top layer followed by a subsequent thermal anneal which was identified as an essential step for the fabrication of operational devices.The growth method used for nanorod fabrication and the active layer processing are both inherently scalable, thus we present a complete and facile route for the formation of nanostructured electron acceptor layers that are suitable for high performance organic active layers.
the initial lower values. For the other samples, a significant drop is already observed after 7 weeks. Afterwards, the tensile strength continued to decrease, falling below 50 MPa after 27 weeks of degradation, with lower values measured over the degradation time for samples containing less TMC. Therefore, although fibers with 18 mol% of TMC have lower initial crystallinity, Young's Modulus and tensile strength, they retain their mechanical properties for a longer time, and they have higher elasticity and strength than do the other samples, which no longer have mechanical integrity after 15 or 19 weeks. Notably, the 80LA sample shows the beginning of mass loss after 31 weeks of degradation; thus, the time when the sample starts to lose mass at a faster rate than the other copolymer fibers corresponds to the loss of mechanical integrity. Our results highlight that by varying the composition and the structural parameters of l-lactide/trimethylene carbonate copolymer-based multifilament fibers it is possible to modulate their degradation profile and service lifetime, thereby extending the mechanical support the fibers can provide while increasing the rate of mass loss when such support is no longer required. The multifilament fibers undergo a two-stage bulk erosion process. The observed hydrolysis kinetics indicate that the rate of chain scission, which is a function of the copolymer composition and the initial physical properties, is autocatalyzed by the COOH groups formed and occurs in a random way, leading to the formation of shorter chains. In the early stage of degradation, the hydrolysis probably occurs in the amorphous region and it also involves the T-LL ester bonds, provoking microstructural changes and a significant drop in the mechanical properties for samples containing up to 10 mol% of TMC. The rate of chain scission is higher for samples having higher TMC content and therefore a lower degree of crystallinity, although the more amorphous samples show slightly higher mechanical properties over the degradation time. In the later stage, when shorter chains that can diffuse out of the matrix are eventually formed, the rate of chain degradation decreases and the mass loss increases. This occurs after 20–25 weeks of degradation for the samples containing up to 10 mol% of TMC, although such fibers can no longer yield and they fail in a brittle mode before any significant mass loss occurs. In contrast, multifilament fibers prepared with the copolymer containing 18 mol% of TMC show a more homogeneous degradation process. The lowest rate of chain scission among the four samples analyzed in the early stage of degradation is the result of the less-packed structure, which favors the easy diffusion of the buffer medium and thus the neutralization of the acidic products formed, and of the higher number of carbonate bonds in the amorphous phase that are inert to hydrolysis. The homogeneous degradation profile enables the 80LA fibers to maintain mechanical integrity for a longer time, and once the mechanical properties are eventually lost, the sample starts to lose mass at a faster rate than the other multifilament samples.
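The autocatalysed random chain scission described above is often summarised with a simple first-order expression for the number-average molecular weight. The relation below is one commonly used form of this bulk-hydrolysis kinetics, given here for orientation only; it is not necessarily the exact model fitted to these fibers.

```latex
% One commonly used autocatalytic bulk-hydrolysis model (shown for orientation
% only; not necessarily the exact expression fitted in this work):
\frac{\mathrm{d}[\mathrm{COOH}]}{\mathrm{d}t} \propto [\mathrm{ester}]\,[\mathrm{H_2O}]\,[\mathrm{COOH}]
\quad\Longrightarrow\quad
M_n(t) \approx M_n(0)\, e^{-kt}, \qquad \ln M_n(t) = \ln M_n(0) - kt
```

Here k is the apparent chain-scission rate constant, which in this system reflects the copolymer composition, the initial crystallinity, and how readily the buffer can diffuse in and neutralise the acidic degradation products.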
We have succeeded in modulating the degradation rate of poly(L-lactide) (PLLA) melt-spun multifilament fibers to extend the service lifetime and increase the resorption rate by using random copolymers of L-lactide and trimethylene carbonate (TMC). The presence of TMC units enabled an overall longer service lifetime but faster degradation kinetics than PLLA. By increasing the amount of TMC up to 18 mol%, multifilament fibers characterized by a homogeneous degradation profile could be achieved. This composition allowed a much longer retention of mechanical integrity and, once that integrity was eventually lost, a faster rate of mass loss than samples containing less TMC. The degradation profile of multifilament fibers consisting of (co)polymers containing 0, 5, 10 and 18 mol% of TMC was characterized during 45 weeks of in vitro hydrolysis by following the molecular weight decrease, mass loss and changes in microstructure, crystallinity and mechanical properties. The fibers degraded by a two-step, autocatalyzed bulk hydrolysis mechanism. A high rate of molecular weight decrease and negligible mass loss, with a consequent drop in the mechanical properties, was observed in the early stage of degradation for fibers having a TMC content of up to 10 mol%. The later stage of degradation was, for these samples, characterized by a slight increase in the mass loss and a negligible molecular weight decrease. Fibers prepared with the 18 mol% TMC copolymer instead showed a more homogeneous molecular weight decrease, ensuring mechanical integrity for a longer time and faster mass loss during the later stage of degradation.
each target a gene-specific and intron-spanning primer was designed with Primer 3. For genes for which no intron-spanning primer was possible, the Maxima First Strand cDNA Synthesis Kit was used. Using the standard curve method, the absolute amount of the specific PCR product for each primer set was quantified. Actb was amplified from each sample as the reference gene for normalization. For all siRNA-mediated knockdown experiments, isolated hepatocytes from non-transgenic male C57BL/6N mice were used. The specific siRNAs and the respective nonsense oligo control siRNA were purchased from Invitrogen, Germany. The hepatocytes were seeded at a density of 100,000 cells per well on 12-well plates. After 4 h, the cells were transfected with the respective siRNA using INTERFERin® according to the manufacturer's instructions. 24 h after transfection, the medium was changed, and fresh medium without the siRNA was added. The changes in gene expression were analyzed by qPCR 48 h post transfection. After isolation, hepatocytes from male C57BL/6N mice at the age of 12 weeks were cultivated at 0.25 million cells per well in 6-well plates in 1.5 ml medium. After 3–4 h the medium was changed and the transfection was performed analogously. After isolation, hepatocytes from male C57BL/6N mice at the age of 12 weeks were seeded at a density of 100,000 cells per well on 12-well plates in 1.0 ml medium. After 3–4 h the medium was changed and fresh medium without fetal calf serum, but supplemented with recombinant murine IHH and SHH protein (or PBS as control), was used. 24 h after treatment, the medium was changed, and fresh medium with the recombinant proteins was added. The changes in gene expression were analyzed by qPCR 48 h post treatment. Immunohistochemistry on paraffin sections was performed as previously described. In brief, the sections were deparaffinized, rehydrated and subsequently boiled in citric buffer or Tris/EDTA buffer. Next, the slides were incubated for 1 h in 5% goat serum to block nonspecific binding. The following primary antibodies were used: anti AXIN2, anti CTNNB1, anti FZD2, anti FZD4, anti GLI1, anti GLI2, anti GLI3 (Sigma, Germany), anti GS, anti IHH, anti PTCH1, anti PTCH2, and anti WNT5a. Peroxidase staining was performed using the EnVision+ Dual Link System-HRP according to the manufacturer's instructions. For immunofluorescence staining, Tyramide Super Boost Kits were used according to the manufacturer's instructions and the following antibodies were used: biotinylated goat anti-rabbit IgG, Extravidine Cy3, and goat anti-mouse Cy3. Immunofluorescence double staining of AXIN2/GLUL on ApcWT versus Apchomo and on SAC-WT versus SAC-KO mouse liver sections was analyzed with the modular image analysis software TiQuant. TiQuant provides traditional pixel-based image processing pipelines suited for specific applications as well as a general supervoxel-based machine learning approach to segment two- and three-dimensional images. The latter method was used for segmentation of the micrographs at hand. First, an image is split into visually distinct, similarly sized regions, the so-called supervoxels, using the SLIC0 algorithm. Each supervoxel is characterized by a set of features comprising local and neighborhood color histograms as well as texture and gradient descriptors. An image is then annotated partially to provide training data in the form of labeled supervoxels. A Random Forest classifier is fitted to the training data and subsequently used to predict class membership probabilities for each supervoxel given its respective feature set. Segmentations are then
inferred from these probabilities, and post-processed using the watershed algorithm to separate clustered objects. Based on the obtained segmentations, the area of each staining in each individual micrograph was measured by pixel counting, and the results were then statistically merged and analyzed. Most experiments were repeated 2–3 times with different numbers of biological replicates, as indicated in each figure. The number of technical replicates depended on the type of experiment and was mostly duplicates or triplicates. Outliers were identified with the ROUT test of GraphPad Prism 7. Values are plotted as the average of biological replicates ± standard error of the mean. The statistical evaluation was performed with the unpaired Student's t test. The null hypothesis was rejected at the p < 0.05, p < 0.01, p < 0.001 and p < 0.0001 levels. The proteome dataset supporting the current study has not been deposited in a public repository because further publications based on these data are in progress, but it is available from the corresponding author on request. For the metabolome raw data, please use the following link: https://seek.lisym.org/data_files/439?code=mpMmEN0lH10LB%2B%2FMkYBAYDKtZKV9dJ%2Fu7wAQ7SIU. The published article includes all other datasets generated or analyzed during this study.
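The supervoxel segmentation workflow described above (SLIC0 partitioning, per-region features, a partially annotated training set, a Random Forest classifier, and watershed post-processing) can be sketched in a simplified 2D form with open-source libraries rather than TiQuant. In the sketch below the image file, the annotation indices, and the reduced feature set (mean colour only) are illustrative assumptions.

```python
# Simplified 2D sketch of the supervoxel / Random Forest / watershed pipeline
# described above, using open-source libraries rather than TiQuant.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, measure, segmentation
from sklearn.ensemble import RandomForestClassifier

image = io.imread("micrograph.png")[..., :3]                      # hypothetical RGB micrograph
labels_sp = segmentation.slic(image, n_segments=2000,
                              slic_zero=True, start_label=1)      # SLIC0 superpixels

# One feature vector per superpixel (mean colour only; the real pipeline also
# uses neighbourhood histograms, texture and gradient descriptors).
regions = measure.regionprops(labels_sp)
X = np.array([image[tuple(r.coords.T)].mean(axis=0) for r in regions])

# Partial annotation: -1 = unlabelled, 0 = background, 1 = stained (hypothetical subset)
y = np.full(len(X), -1)
y[:60], y[60:120] = 1, 0

train = y >= 0
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], y[train])
p_stained = clf.predict_proba(X)[:, 1]

# Map per-superpixel probabilities back to pixels and threshold into a mask
prob_img = p_stained[labels_sp - 1]
mask = prob_img > 0.5

# Watershed post-processing to separate clustered objects, then pixel counting
distance = ndi.distance_transform_edt(mask)
markers, _ = ndi.label(distance > 0.5 * distance.max())
objects = segmentation.watershed(-distance, markers, mask=mask)
print(f"objects: {objects.max()}, stained area (pixels): {int(mask.sum())}")
```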
Wnt/β-catenin and Hh signaling contribute to embryogenesis as well as to the maintenance of organ homeostasis through intensive crosstalk. The authors describe that both pathways act in a largely complementary manner in the healthy liver and that this crosstalk is responsible for the maintenance of metabolic zonation.
at the state level for conservative estimates of the statistical significance.3 Additionally, all regression models were weighted by mean municipal population between 2012 and 2017 to account for the large heterogeneity in municipal population size. For analyses by cause of death, the significance of p value testing was adjusted for multiple hypothesis testing. To analyse whether the associations between changes in unemployment and changes in mortality were heterogeneous across the three terciles of health and social protection expenditure, municipalities were divided into three equal groups on the basis of mean expenditure on health and social protection over 2012–17. The categorical tercile variables were interacted with the unemployment variables, allowing three estimates of the within-municipality association between changes in unemployment and changes in mortality for the three terciles of municipalities. Municipal government health expenditure and social protection expenditure were modelled separately. The effect estimates reported are interpreted as the mean change in the municipal mortality rate of municipalities in a specific tercile per 1 percentage-point increase in the state unemployment rate. A three-way interaction was also carried out to assess further variations in these relationships. Several assumptions pertinent to this analytical approach were tested. The main assumption is that changes in the unemployment rate are uncorrelated with unobserved factors changing at the same time, which could also affect mortality. We argue that many such factors do not change that rapidly, lie on the causal pathway between recession and health, or are captured by municipal time trends. We added the following covariates to the model to test whether other factors could mediate the associations between unemployment and mortality: municipal gross domestic product per capita, municipal coverage with the Family Health Strategy, municipal hospital bed density, and municipal private insurance plans per capita. We also tested linear models with time trends to compare the results with detrended rates. Since state-level unemployment rates were used for our analysis, we repeated the analyses with mortality aggregated to the state level instead of the municipal level to test the robustness of our results. There was no funding source for this study. The corresponding author had access to all study data and responsibility for the decision to submit the paper for publication. Between 2012 and 2017, 7 069 242 deaths were recorded among adults in 5565 municipalities in Brazil. The 17 selected causes of death accounted for 6 621 347 of all 7 069 242 deaths in the period. Of the 7 069 242 deaths, 2 213 942 were due to cardiovascular diseases, 1 238 651 due to malignant neoplasms, 432 432 due to respiratory infections, 412 944 due to unintentional injuries, 418 897 due to respiratory diseases, 415 355 due to intentional injuries, 397 653 due to diabetes and endocrine, blood, and immune disorders, and 379 285 due to digestive diseases. Between 2012 and 2017, the mean crude municipal adult mortality rate increased by 8·0% from 143·1 deaths per 100 000 to 154·5 deaths per 100 000. Differences and divergent trends in mortality rates were identified across age, sex, and race stratifications; however, mortality rates for these subgroups were not age standardised, which precludes inference. Between 2012 and 2017, the mean state unemployment rate decreased from 8·4% in the first quarter of 2012 to 6·5% in the fourth quarter of 2013, rising to 13·7%
in the first quarter of 2017.The state unemployment ranged from 2·7% to 18·8% across the study period.In general, states in the north and north-eastern regions of Brazil had the highest rates of unemployment and the largest increases in the unemployment rate between 2012 and 2017.Regression coefficients for changes in mortality associated with changes in the unemployment rate were plotted from regression models by cause of death.A 1 percentage-point increase in the state unemployment rate was associated with a 0·50 increase per 100 000 population per quarter in the mean municipal all-cause adult mortality rate.Therefore, the annual effect size would be 2·0 deaths per 100 000, and a mean municipal mortality rate of 143·1 per 100 000 in 2012 would result in an mean relative increase in the adult mortality rate of 1·4% per 1 percentage-point increase in unemployment.With a cumulative increase of 3·1 percentage points in the unemployment rate between 2012 and 2017, the recession was associated with a 4·3% increase in mean municipal mortality rate.Increasing mortality was driven by increases in mortality from neoplasms and cardiovascular diseases.These effect sizes were smaller than the all-cause effect size because the all-cause effect size represented an aggregate effect for all causes.Unemployment was also associated with higher mortality from digestive diseases and self-harm, but associated with lower mortality from unintentional injuries, although these differences were not statistically significant when p values were adjusted for multiple hypothesis testing.Across racial subgroups, increases in unemployment were associated with increases in all-cause mortality for black and mixed race Brazilians, whereas no significant difference in mortality rate was identified for white Brazilians.Additionally, increases in unemployment were associated with increases in mortality rates for men and individuals aged 30–59 years.Assuming there had been no changes in the unemployment rate between 2012 and 2017, an estimated 31 415 deaths could have been avoided.Increases in unemployment were only associated with increases in overall mortality in municipalities in the lowest terciles and middle terciles of Bolsa Familia expenditure per poor person, and in municipalities with the lowest public health expenditure per capita.These patterns were similar across population subgroups when stratified by sex and race, with increases in mortality identified among black or mixed race individuals and male individuals in municipalities in the lowest and middle terciles of Bolsa
Findings: Between 2012 and 2017, 7 069 242 deaths were recorded among adults (aged ≥15 years) in 5565 municipalities in Brazil. During this time period, the mean crude municipal adult mortality rate increased by 8.0% from 143.1 deaths per 100 000 in 2012 to 154.5 deaths per 100 000 in 2017. An increase in the unemployment rate of 1 percentage point was associated with a 0.50 increase per 100 000 population per quarter (95% CI 0.09–0.91) in all-cause mortality, mainly due to cancer and cardiovascular disease. Funding: None.
allow more accurate estimates of local recession effects.Although this is not ideal, these were the most granular data available and quarterly subnational data on unemployment are rarely available for many middle-income countries.Compared with national-level data, state-level unemployment rates allow regional variations to be exploited in the analytical strategy and better reflect proximity of an individual to the effects of recession.The unemployment data might also be limited by the fact that data were collected from reported employment status and do not reflect nuances of the labour market such as moving from formal to informal employment.Second, any causal claim must be restricted considering that these analyses only examined associations within municipalities over time.Although trends and patterns were identified, determining causal pathways between recession and mortality in Brazil requires more research.Third, although fixed-effects regression methods are robust and frequently used, these models rely on the assumption that unobserved variables associated with unemployment and mortality are time-invariant and not part of the causal pathway.Furthermore, the approach to examining heterogeneous effects of unemployment does not adjust for all multiple municipality characteristics, and considering that it is likely that a correlation exists between these factors, judicious interpretation is necessary.Autocorrelation and heteroscedasticity might also be present in these models, but these factors were controlled for by using cluster robust SEs.The 2012–16 recession in Brazil has most likely contributed to the observed increases in mortality.Black and mixed race Brazilians, men, and individuals of working age were most negatively affected, indicating that the recession might contribute to worsening of health conditions in these groups and thus widening of existing health inequalities.However, no significant increases in recession-related mortality were identified in areas with higher expenditure on health and social protection programmes.These findings are likely to be generalisable to other LMICs as many have sizeable inequalities, precarious job markets, and limited safety nets to protect individuals from the negative effects of economic recession.Our findings underline the importance of nationally appropriate social protection systems to protect at-risk populations from the adverse health impacts of economic recessions in LMICs.All data used in this study are publicly available from the sources listed in appendix 2.
Background: Economic recession might worsen health in low-income and middle-income countries with precarious job markets and weak social protection systems. Between 2014 and 2016, a major economic crisis occurred in Brazil. We aimed to assess the association between economic recession and adult mortality in Brazil and to ascertain whether health and social welfare programmes in the country had a protective effect against the negative impact of this recession. Methods: In this longitudinal analysis, we obtained data from the Brazilian Ministry of Health, the Brazilian Institute for Geography and Statistics, the Ministry of Social Development and Fight Against Hunger, and the Information System for the Public Budget in Health to assess changes in state unemployment level and mortality among adults (aged ≥15 years) in Brazil between 2012 and 2017. Outcomes were municipal all-cause and cause-specific mortality rates for all adults and across population subgroups stratified by age, sex, and race. We used fixed-effect panel regression models with quarterly timepoints to assess the association between recession and changes in mortality. Mortality and unemployment rates were detrended using Hodrick–Prescott filters to assess cyclical variation and control for underlying trends. We tested interactions between unemployment and terciles of municipal social protection and health-care expenditure to assess whether the relationship between unemployment and mortality varied. Between 2012 and 2017, higher unemployment accounted for 31 415 excess deaths (95% CI 29 698–33 132). All-cause mortality increased among black or mixed race (pardo) Brazilians (a 0.46 increase [95% CI 0.15–0.80]), men (0.67 [0.22–1.13]), and individuals aged 30–59 years (0.43 [0.16–0.69]) per 1 percentage-point increase in the unemployment rate. No significant association was identified between unemployment and all-cause mortality for white Brazilians, women, adolescents (aged 15–29 years), or older and retired individuals (aged ≥60 years). In municipalities with high expenditure on health and social protection programmes, no significant increases in recession-related mortality were observed. Interpretation: The Brazilian recession contributed to increases in mortality. However, health and social protection expenditure seemed to mitigate detrimental health effects, especially among vulnerable populations. This evidence provides support for stronger health and social protection systems globally.
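The detrend-then-regress strategy described in the Methods (Hodrick-Prescott filtering of the quarterly series, within-municipality fixed effects, population weights, and state-clustered standard errors) can be sketched as follows. This is an illustrative approximation, not the authors' exact specification, and the file name and column names (municipality, state, quarter, mortality_rate, unemployment_rate, population) are hypothetical.

```python
# Illustrative approximation (not the authors' exact specification) of the
# analysis described above: HP detrending, two-way fixed effects via demeaning,
# population weights, and state-clustered standard errors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

df = pd.read_csv("municipal_quarterly_panel.csv")   # hypothetical panel data set

def hp_cycle(series, lamb=1600):
    """Cyclical component of a quarterly series (lambda = 1600 is the quarterly convention)."""
    cycle, _trend = hpfilter(series, lamb=lamb)
    return cycle

# Detrend mortality within each municipality and unemployment within each state
df = df.sort_values(["municipality", "quarter"])
df["mort_cycle"] = df.groupby("municipality")["mortality_rate"].transform(hp_cycle)

state_u = (df.drop_duplicates(["state", "quarter"])
             .sort_values(["state", "quarter"])[["state", "quarter", "unemployment_rate"]])
state_u["unemp_cycle"] = state_u.groupby("state")["unemployment_rate"].transform(hp_cycle)
df = df.merge(state_u[["state", "quarter", "unemp_cycle"]], on=["state", "quarter"])

# Approximate municipality and quarter fixed effects by demeaning both variables
for col in ["mort_cycle", "unemp_cycle"]:
    df[col] -= df.groupby("municipality")[col].transform("mean")
    df[col] -= df.groupby("quarter")[col].transform("mean")

# Weighted least squares with standard errors clustered at the state level
X = sm.add_constant(df[["unemp_cycle"]])
res = sm.WLS(df["mort_cycle"], X, weights=df["population"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]})
print(res.params["unemp_cycle"])   # deaths per 100 000 per quarter per 1 pp rise in unemployment
```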
sensitivity and accuracy for each product.The first objective of the present study was to evaluate the performance of the FDA BAM Chapter 19B method in additional berry types to provide reliable laboratory support for FDA surveillance programs and for disease outbreak investigations related to berries.A second objective of the present study was to evaluate the performance of the C. cayetanensis BAM Chapter 19B detection method in frozen berries.Frozen berries are also widely used as ingredients in many foods such as cakes, yogurts and smoothies.Cyclospora cayetanensis was detected in the frozen raspberry filling of leftovers from a wedding cake associated with an outbreak in Pennsylvania using conventional PCR.In another outbreak in Canada, a dessert with blackberries, strawberries and frozen raspberries was considered as the main source of exposure.A preparation of purified C. cayetanensis oocysts originating from a patient in Indonesia, stored in 2.5% potassium dichromate, with an estimation of 50% sporulated oocysts, was used for these investigations.The use of the oocysts was approved by the institutional review board of the FDA.The oocysts were purified using discontinuous density gradient purification followed by flow cytometry sorting as described elsewhere.The stored stock of oocysts in potassium dichromate was washed three times using 0.85% saline and concentrated by centrifugation.Before seeding the produce, six replicates of the partially purified oocysts were counted by two different analysts with a hemocytometer to estimate the concentration and the sporulation rate of oocysts in the preparation.Oocysts were then diluted in 0.85% NaCl to an estimated concentration of 20 oocysts/μL and 1 oocyst/μL for seeding experiments.The number of oocysts in the 20 oocysts/μL dilution used for that study was also counted and corroborated the expected oocysts numbers per μL in the dilution.The same diluted preparations of oocysts were used for all seeding experiments in every type of berry for comparison purposes.Commercial fresh blackberries, blueberries, strawberries and raspberries, showing no signs of deterioration, were obtained from local grocery stores and stored at 4 °C for no longer than 24–48 h prior to the seeding experiments.Individual fresh berry test samples were prepared as described previously.For mixed samples, blackberries, blueberries, strawberries and raspberries were added in relative proportions to their weight until a combined 50 g was reached per sample.The samples were seeded with 200, 10 or 5 oocysts using a micro pipet in a dropwise fashion to multiple berries.Approximately 10–20 droplets were spread randomly over multiple surfaces of the sample.Unseeded samples were included as negative controls and processed together with the seeded samples.Unseeded and seeded samples were allowed to air dry uncovered at room temperature for approximately 2 h. Afterwards, samples were carefully transferred to BagPage filter bags, sealed with binder clips, and held at 4 °C for 48–72 h before initiating the produce wash step for fresh samples.Frozen samples were seeded in the same manner as fresh berries and after being air dried were held at −20 °C for 7 weeks prior to thawing at 4 °C for 24 h before initiating the produce wash step for frozen samples.Since this method had already been validated for the detection of C. 
cayetanensis in raspberries, only six to seven sample replicates were included to ensure that detection results in raspberries were comparable to those established in the performance standards for the detection method.For fresh blackberries, strawberries, blueberries and mixed berries, seven to eight sample replicates were examined at the 200 oocysts seeding level, and between 8 and 10 replicates were examined for both 5 and 10 oocysts seeding levels for each of these berry types.A total of 131 seeded samples were analyzed in this study.At least three unseeded samples were processed for each type of berry samples.Frozen mixed berry samples were also seeded with 5, 10 and 200 oocysts.In addition, blackberries, raspberries and blueberries were seeded with 10 oocysts and kept frozen at the same time for up to seven weeks."The washing and molecular detection steps for both fresh and frozen berries followed the FDA's BAM Chapter 19B method.The processing of samples included three steps: 1) washing of produce to recover C. cayetanensis oocysts, 2) DNA extraction of wash pellets containing concentrated oocysts, and 3) real time PCR analysis using a dual TaqMan™ method targeting the C. cayetanensis 18SrRNA, together with amplification of an internal amplification control to detect any false negative results and monitor for reaction failure due to matrix derived PCR inhibitors.The wash protocol to recover the oocysts from fresh and frozen berries was performed using 0.1% Alconox® detergent, with a gentler washing as described for raspberries during the initial validation study in which bags containing raspberries were sealed without massaging or removing air, stood upright in a tray to achieve uniform coverage of berries with wash solution, and gently rocked at a speed of 12 rocks per min on a platform rocker for 30 min.Two wash steps were performed with 0.1% Alconox® detergent, and then sequential centrifugations were performed to recover, pool, and concentrate the wash debris.Produce wash debris pellets were stored at 4 °C for up to 24 h or frozen at −20 °C prior to DNA isolation.The DNA extraction procedure was performed using the FastDNA SPIN Kit for Soil in conjunction with a FastPrep-24 Instrument.The real time PCR assay for C. cayetanensis 18S rRNA gene and IAC control was performed on an Applied Biosystems 7500 Fast Real time PCR System.A commercially prepared synthetic gBlocks gene fragment was used as a positive control for amplification of the C. cayetanensis 18S
The efficacy of the U.S. Food and Drug Administration (FDA) method for detection of C. cayetanensis was evaluated in fresh berries (blackberries, strawberries, blueberries and mixed berries) and in frozen mixed berries.The protocol included seeding with C. cayetanensis oocysts, produce washing, DNA extraction and a dual TaqMan assay.Mixed berries were seeded and frozen for up to seven weeks.
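As a worked illustration of the seeding arithmetic described above (diluting a counted oocyst stock to 20 and 1 oocysts/μL in 0.85% NaCl and dispensing 200, 10 or 5 oocysts per 50 g sample), the sketch below assumes a hypothetical stock concentration; the numbers are ours, not the study's.

```python
# Hypothetical dilution and dispensing calculation for the seeding experiments.
stock_conc = 2.4e4   # oocysts/uL (invented mean of six hemocytometer counts)

def dilution(stock, target, final_vol_ul=1000.0):
    """Volumes of stock and 0.85% NaCl needed for a given working concentration."""
    v_stock = target * final_vol_ul / stock
    return v_stock, final_vol_ul - v_stock

for conc in (20, 1):
    v_stock, v_saline = dilution(stock_conc, conc)
    print(f"{conc} oocysts/uL: {v_stock:.2f} uL stock + {v_saline:.2f} uL saline")

# Volume of each working dilution dispensed dropwise per 50 g berry sample
for n_oocysts, conc in ((200, 20), (10, 1), (5, 1)):
    print(f"{n_oocysts} oocysts -> {n_oocysts / conc:.0f} uL of the {conc} oocysts/uL dilution")
```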
0.1% Alconox® detergent in the washing protocol was responsible for these consistent results across berry types.This washing solution, which contains the surfactant tetrasodium pyrophosphate, improved Cyclospora oocysts recovery rates in raspberries and it was shown to achieve recovery rates of 78–100% when used to wash basil leaves.In another study, the irregular surfaces of raspberries, strawberries, and blackberries were more effectively washed using an elution solution containing sodium pyrophosphate, while glycine buffer was more effective for blueberries.In addition, in the present washing protocol, two sequential washes with Alconox® are performed since the second wash was shown to increase C. cayetanensis oocysts recovery when compared to only one wash by Shields et al.The washing step is critical for detection of C. cayetanensis in fresh produce, particularly in fragile matrices such as berries which require a particularly gentle washing procedure to avoid damaging.Tissue breakdown during processing can produce debris that physically interferes with the recovery of oocysts and excess debris can also interfere with microscopical detection.Stomaching may be suitable for processing leafy greens for the detection of C. cayetanensis oocysts, but it is generally not appropriate for use in fragile matrices particularly in combination with molecular assays.The careful washing protocol performed on berries probably also accounted for the lack of inhibition observed in the qPCR.Gentle washing of berries likely reduces the release of polyphenol inhibitors commonly found in berries including flavonoids, stilbenes, tannins, and phenolic acids.If inhibition occurs, DNA extraction should be followed by specific clean-up treatments to remove these inhibitors prior to molecular detection.The method was also robust and sensitive in frozen berries.In fact, no significant differences were observed for C. cayetanensis detection in fresh and frozen mixed berries.To our knowledge, this is the first study of C. cayetanensis quantitative detection in frozen berries.Previously, the cyclosporiasis outbreak in Philadelphia was the only study to date in which frozen portions of a wedding cake that had raspberry filling were analyzed by nested PCR.As by any molecular method for detection based on DNA, viability of oocysts, in fresh and in frozen samples, cannot be verified.Molecular methods are useful to determine levels of contamination in produce, but they detect live and infectious, live and non-infectious, and dead oocysts.Therefore, this procedure will give the maximum occurrence and levels of contamination for a given matrix.On the other hand, molecular detection methods are extremely useful for genotyping/subtyping and source tracking investigations.There are no proper viability assays for C. cayetanensis other than microscopic examination of sporulated oocysts, which is not feasible in fresh produce.The raspberries used for the filling of the wedding cake associated with an outbreak in Pennsylvania had been frozen for 1–4 days and then thawed to prepare the filling.Therefore, it seems that the C. cayetanensis oocysts that contaminated the cake remained viable even after this freeze-thaw cycle.Because of that it is important to have methods that can detect C. cayetanensis on frozen foods, since it is possible that such a scenario will happen again in the future.Only freezing for long periods of time seems able to inactivate other coccidian parasites: T. 
gondii oocysts kept in water at −21 °C for 28 days were not killed and Cryptosporidium parvum oocysts remained infectious for mice after storage at −10 °C for one week, but were not infectious for mice after storage at −15 °C for one week.In the only study performed with C. cayetanensis oocysts, no sporulation of oocysts was observed when oocysts previously kept in water, in dairy products or on basil test samples were placed at −70 °C for 1 h or when kept in water and basil at −20 °C for four days.Another limitation of the study is that in spiking experiments there is invariably some inconsistency in the exact number of oocysts seeded per sample, due to pipetting variability, particularly with low oocyst counts.Additionally, minor variations in efficiency at each step of the procedure for each sample replicate are likely to contribute to small variations in the outcomes.This variation does limit accuracy of quantification, and further studies would be required to identify and resolve issues related to quantification.However, even with variations, the comparison of gene target copy numbers by real time PCR was useful for identifying the lack of significant differences among matrices, which will be valuable information for risk assessment when the method is applied to surveys or traceback investigations in these matrices.In conclusion, the FDA BAM Chapter 19B method for detection of Cyclospora showed consistent and high detection rates in all types of the berries analyzed, including mixed frozen berries.The method was robust, reliable and highly sensitive in fresh and frozen berries, with as few as five C. cayetanensis oocysts detected in all the berry types analyzed.Evaluation of the FDA BAM method in berries will provide reliable laboratory support for surveillance programs and for disease outbreak investigations.
No significant differences were observed in C. cayetanensis CT values between fresh and frozen mixed berries at any seeding level.In conclusion, the FDA BAM Chapter 19B method for the detection of Cyclospora was robust, consistent, and showed high sensitivity in all types of berries analyzed.Evaluation of the FDA detection method in berries will provide reliable laboratory support for surveillance programs and for outbreak investigations.
Retention of high performing employees is important and is an essential component for success in an increasingly competitive and demanding environment.Today, organizations are becoming more concerned with employee retention but despite their efforts, employees still leave and this becomes worrisome.Hence, the importance of retaining and maintaining committed employees is especially critical for ICT and Accounting firms in Nigeria.The data can be used by managers to properly make decisions that in the long-run would lead to goal attainment in the organization.The data can be used to enlighten managers on the importance of retention attributes and how it can be beneficial to the overall wellbeing of the organization.The data provides ample knowledge on how different organisational retention attributes can interact effectively by building healthy relationship and sustaining greater commitment.Generally, data acquired from this study would be significant for organizational goal achievement, proper building of corporate image which would in turn lead to organizational success.The data described in this article is made widely accessible to facilitate critical or extended analysis.The study is quantitative in nature and data were retrieved from staff and management of the sampled firms.The decision to elicit information from the employees and the management group was based on the fact that while employees were often in the best position to describe their real employment relationships and knowledge of retention practices as presented in Fig. 1.The study also adopted the approach recommended by Anderson and Gerbing to evaluate: measurement model and structural model.To demonstrate the measurement model, we used Confirmatory Factor Analysis and the three conditions for CFA loadings indicate firstly, that all scale and measurement items are significant when it exceeds the minimum value criterion of 0.70; second, each construct composite reliability exceeds 0.80 and thirdly, each construct average variance extracted estimate exceeds 0.50, as presented in Table 1 and Fig. 
2 respectively.The results of CFA analysis suggest that the factor loadings for all major variables range between 0.820 and 0.981.The three conditions used to assess convergent validity as suggested and recommended by Fornell and Larcker and Bagozzi and Yi were met.Details of the results are available in Table 2, which exhibit that the coefficient correlation is highly correlated and are all significant.Based on the results of the test, it has been proven that the data are good in terms of convergent validity, construct reliability, and discriminant validity.Having run the test, the SEM was obtained, and results of fit indices is shown in Table 3.Results in Table 3 dictate that the value of χ2 is within the acceptable range of 1 and 3 as suggested by Brown and Cudeck and Hu Bentler.On top of that, the incremental fit, NFI, TLI, CFI, and GFI were above 0.90.Meanwhile, results for standardised regression weights for each variable are stated in Table 4.All the basic assumptions were acceptable and prove that the data met the conditions of basic assumption in regression analysis.Of the 418 copies of questionnaire distributed, 376 responses were received, resulting in a response rate of 89.9%.Members of selected five ICT and five Accounting firms were represented in this study.Data were gathered from directors, managers, assistant managers, scientists, field agents, and other categories of employees across the various ICT and Accounting firms with the aid of a researcher- made questionnaire based on the works of .The demographic data presented information based on gender, age, education and experience as well as questions related to organisational retention attributes and staff commitment.There was a meaningful relationship between organisational retention attributes and the commitment of staff in the selected firms.The collected data were coded and analysed using SPSS version 22.Data was analysed applying descriptive and inferential statistical tests.Importantly, the study participants were selected based on the following inclusion criteria:Participants were employees of the sampled ICT and Accounting firms.Participants were literate, able to read and write English.Participants signed the consent form provided and have worked with the firm for a minimum period of 3 years.Participants were accessible as at the time of the survey and interviews.As regards retention, items used included: the main reasons for participants agreeing to work within the firm; whether a detailed job description was given on appointment with the organization, and if the job description tallied with the real job done; the existence of a clearly specified daily job description; retention strategies adopted; relevance of regularly conducted trainings/workshops; and the existence of the desire to change jobs.The section on commitment was adapted from a previously validated questionnaire – the Organizational Commitment Questionnaire, OCQ.The researchers ensured that respondents were well informed about the background and the purpose of this research and they were kept abreast with the participation process.Respondents were offered the opportunity to stay anonymous and their responses were treated confidentially.
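A minimal sketch of the convergent-validity criteria cited above (standardized loadings above 0.70, composite reliability above 0.80, average variance extracted above 0.50), using the standard Fornell–Larcker formulas; the construct names and loadings below are invented and purely illustrative.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized CFA loadings, checked against the cut-offs named in the text.
loadings = {
    "retention_attributes": [0.82, 0.88, 0.91, 0.85],   # hypothetical loadings
    "staff_commitment": [0.84, 0.93, 0.90],             # hypothetical loadings
}

def composite_reliability(lams):
    num = sum(lams) ** 2
    return num / (num + sum(1 - l ** 2 for l in lams))

def average_variance_extracted(lams):
    return sum(l ** 2 for l in lams) / len(lams)

for construct, lams in loadings.items():
    cr = composite_reliability(lams)
    ave = average_variance_extracted(lams)
    ok = min(lams) > 0.70 and cr > 0.80 and ave > 0.50
    print(f"{construct}: CR={cr:.3f}, AVE={ave:.3f}, meets criteria={ok}")
```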
The article presented an integrated data on organisational retention strategies and commitment of selected ICT and Accounting firms in Nigeria.The study adopted a quantitative approach with a survey research design to establish the major determinants of employee retention strategies.The population of this study included staff and management of the selected firms.Data was analysed with the use of structural equation modelling and the field data set is made widely accessible to enable critical or a more comprehensive investigation.The findings identified critical attraction factors for the retention of sampled firms.It was recommended that ICT firms will need to adopt consistent range of strategies to attract and retain people with the right ICT skills, in the right place and at the right time.
replacement of conventional domestic electric ovens by highly-efficient models would be beneficial from both the environmental and economic perspectives.However, to promote the uptake of HEO, the use of fiscal instruments may have to be considered.A number of options have been suggested by the European Committee of Domestic Equipment Manufacturers as effective for promoting the uptake of energy efficient household appliances including tax credits granted directly to consumers, consumer purchase rebate or cash-back schemes and tax credits to consumers coupled with tax credits to manufacturers.However, any such initiatives would have associated cost implications which should be considered carefully to avoid unintended consequences, often associated with fiscal instruments.Financial incentives should also be accompanied by a wide-ranging awareness raising among consumers as most are not aware that cooking, and particularly ovens, are significant energy consumers and that they could save money by switching to more efficient appliances.An example of a successful awareness raising campaign in the EU, accompanied by financial incentives as well as legislation, are light bulbs.To phase out energy-inefficient types, a concerted campaign involving awareness raising and free low-energy bulbs was rolled out across Europe, leading to a much faster uptake than would have happened otherwise.In addition, some types have been banned, notably 100 W incandescent bulbs.As it is going to be much more difficult to convince the consumer to replace household appliances than bulbs, similar ‘choice editing’ may be needed to help phase out energy-inefficient models more rapidly.This study has considered life cycle environmental and economic impacts of conventional and novel highly-efficient ovens.The GWP of the former ranges from 812–1478 kg CO2 eq. and of the latter between 576–738 kg CO2 eq. 
over the lifetime of 19 years.Therefore, HEO ovens have a potential to save up to 30% of energy and between 9% and 61% of the GWP, depending on the assumptions for the cleaning options for the conventional oven as well as on the amount of electricity used per cycle by HEO.Most of the GWP for both oven types is generated during the use stage, with the electricity contributing 53%–97% to the total.The raw materials contribute around 1%–2%, while the manufacture of the oven cavity accounts for less than 1% of the total impact.The other environmental impacts are reduced by 24%–62%.The LCC of HEO are also lower than for the conventional oven, ranging between €194–247 per oven over its lifetime, compared to €320–479 for the conventional oven.In the best case, the consumer could save 41%–61% over the lifetime of the oven, depending on the cleaning option assumed for the conventional oven.Even for the worst HEO option, 25%–50% of the lifetime costs would still be saved by the consumer.At the EU28 level, the results suggest that replacement of conventional domestic electric ovens by highly-efficient models would lead to significant environmental and cost savings ranging from 0.5–5.2 Mt CO2 eq./yr and €0.5–1.96 bn/yr, respectively.Most of the latter would be direct consumer savings because of lower energy consumption.Assuming an uptake rate of 5% per annum, it would take 20 years to achieve these benefits at the EU28 level.At 10% annual uptake per year, these savings would be realised in half the time while at 3% it would take 33 years.Therefore, policy makers should consider measures to encourage the uptake of energy efficient ovens, including financial incentives and ‘choice editing’ through legislation.
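The uptake arithmetic above can be reproduced with a short calculation. The assumption of a constant annual replacement rate, with savings scaling linearly with the replaced share of the stock, is ours; the full-replacement saving ranges are the figures quoted in the text.

```python
# Worked version of the uptake scenarios: years to full stock turnover and the
# EU28 savings accrued after 10 years under a linear-uptake assumption.
co2_full = (0.5, 5.2)        # Mt CO2 eq. per year, EU28, at full replacement
cost_full = (0.5, 1.96)      # billion EUR per year, EU28, at full replacement

for rate in (0.03, 0.05, 0.10):
    years_to_full = 1 / rate
    share_10y = min(1.0, 10 * rate)          # share of stock replaced after 10 years
    print(f"uptake {rate:.0%}: full benefits after ~{years_to_full:.0f} years; "
          f"after 10 years: {share_10y:.0%} replaced, "
          f"{share_10y * co2_full[0]:.2f}-{share_10y * co2_full[1]:.2f} Mt CO2 eq./yr, "
          f"EUR {share_10y * cost_full[0]:.2f}-{share_10y * cost_full[1]:.2f} bn/yr saved")
```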
Electric ovens are among the least energy efficient appliances, with the efficiency of only 10%-12%.With new policy instruments in Europe requiring energy reduction, manufactures are seeking to develop more efficient domestic appliances.The aim of this paper is to aid sustainable manufacturing of an innovative, highly-efficient oven (HEO) by evaluating its life cycle environmental impacts and costs in comparison to conventional ovens.The results suggest that the HEO has 9%-62% lower environmental impacts than conventional ovens with the equivalent savings in the life cycle costs ranging from 25% to 61%.Replacement of conventional ovens by HEO in Europe (EU28) would save 0.5-5.2 Mt of CO2 eq.and the life cycle costs would be lower by €0.5-1.96 billion (109) per year.At the household level, energy consumption would be reduced by up to 30% and consumer costs by 25%-50%.These results suggest that policy measures should be put in place to encourage the uptake of energy efficient ovens by consumers.
not detected by FT-ICR-MS in our experiments, and therefore not targeted for quanification, this may not be the case for others using our approaches.Applications for the ability to detect phytohormones in plant growth media are far reaching, including developing understanding of interactions between soil microbes and plants, and understanding the potential uses of plant growth-promoting bacteria, for example in agriculture.Despite the obvious potential value in understanding these relationships in terms of advancing land management practices to maximise crop growth, little research has been previously conducted in this area.Applying the developed analytical methods, experiments were carried out aimed at investigating the impact of the presence of earthworms with and without plants on the concentrations of phytohormones within the growth media.Despite no measurable differences in plant biomass in the presence of earthworms, a significant increase in the presence of ABA was detected when earthworms were present in hydroponic solution together with plants.This suggests that there could be interactions between the earthworms and plants that cause ABA to be produced.The possibility that the presence of earthworms alters the regulation pathways of certain phytohormone-related genes was tested for by molecular biology methods.A search of the earthworm genome for genes related to ABA production revealed no matches, indicating that earthworms are unlikely to be able to directly produce ABA.We hypothesise instead that the increase in ABA we observed in our earthworm-present experiments in hydroponic solution was caused by indirect influence.Further research would need to be carried out in order to fully assess the mechanisms by which earthworms may be involved in ABA regulation in plants.An increase in ABA production in the presence of earthworms could be attributed to a range of indirect factors including increased competition for nutrients, or the chemical modification of the solution by earthworms.As ABA is frequently associated with abiotic or biotic stress, this seems the most obvious explanation for its increased presence.Analysis of the pH of the solutions did not reveal significant differences, although this is only a very broad measure of the degree to which the earthworms may have altered the environment.It is also possible that the presence of earthworms induced changes in the expression of genes known to be involved in plant stress responses.For example, in addition to affecting plant roots through burrowing, and physiological activities such as excretions, there is some evidence that earthworms also feed on living plant root material.A small-scale study was therefore conducted to see if genes known to be involved in stress responses, or in the biosynthesis of ABA, were upregulated in either the plant roots or plant shoots grown in hydroponic solution in the presence of earthworms.However, only a few genes were tested and in only one case was a significant difference seen between the presence/absence of earthworms.Whilst this may be related to the observed differences in ABA concentrations, the metabolic pathway of ABA production is complex and as such this requires further investigation.In particular, transcriptomic-based studies could be employed to assess the effect of earthworms on global plant metabolic pathways.Substantial evidence exists that earthworms benefit crop yields.However, the observations from our hydroponic experiment suggest that under some circumstances, for example 
in an already stressed system, earthworms may in fact cause greater levels of stress to plants, resulting in higher levels of ABA in the soil.To our knowledge, no study has effectively dismantled the effects of earthworms in systems with limited nutrient availability.Whilst the increase in ABA was not replicated in soil-based experiments, we did observe an increase in biological activity when both earthworms and plants were present.This could indicate a potential synergistic relationship between plants, soil microbes and earthworms, which could be further investigated using the developed methods.Differences between the results of the hydroponic and soil experiments may in part be due to the use of different earthworm species.E. fetida are litter feeders and L. terrestris are an anecic species, and consequently they will interact with the soil differently.Differences between the two species in terms of e.g. sensitivity to toxicants and biochemistry in addition to behavioural differences are well established in the literature.The additional complexity of a soil matrix compared to hydroponic solutions will inevitably increase associated difficulties in the extraction.It is also possible that increased biological activity in soils compared to hydroponic experiments leads to degradation or conversion of phytohormones during extraction.There is therefore scope to improve the extraction method to achieve better recovery, allowing the observation of more subtle changes in phytohormone concentrations within soils.
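A treatment comparison of the kind discussed above could be sketched as below; the ABA concentrations, units and group sizes are entirely hypothetical, and the Welch t-test is our choice of illustration rather than necessarily the test used in the study.

```python
# Illustrative comparison of ABA in hydroponic solution with and without earthworms.
from scipy import stats

aba_no_worms = [0.8, 1.1, 0.9, 1.0, 0.7]     # ng/mL, plants only (hypothetical)
aba_with_worms = [1.6, 1.9, 1.4, 2.1, 1.7]   # ng/mL, plants + earthworms (hypothetical)

t, p = stats.ttest_ind(aba_with_worms, aba_no_worms, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```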
Phytohormones such as cytokinins, abscisic acid (ABA) and auxins play a vital role in plant development and regulatory processes.Despite these advances, external factors influencing the production of phytohormones are less well studied.The ability to detect phytohormones in matrices other than plant tissue presents the opportunity to study further the influence of factors such as below ground organisms and soil bacteria on phytohormone production.This novel approach was therefore applied to the plant growth media from a series of experiments comparing plant growth in the presence and absence of earthworms.A small but significant increase in ABA concentration was observed in the presence of earthworms, increasing even further when plants were also present.This finding suggests that earthworms could stimulate plant ABA production.This experiment and its outcomes demonstrate the value of studying phytohormones outside plant tissue, and the potential value of further research in this area.
on.Intriguingly, most induced polyploids that have been used in aquaculture are produced from natural polyploid fishes of the Cyprinidae and Salmonidae.In hexaploid gibel carp, several allopolyploid clones with 212 chromosomes or 208 chromosomes had been synthetized from different clones by incorporating an alien sperm genome, and a novel allopolyploid variety had been selected from clone D and used in aquaculture practice because of its superior growth performance.Through interspecific hybridization between natural tetraploid crucian carp with 100 chromosomes and natural tetraploid common carp with 100 chromosomes, fertile male allopolyploid hybrids with 200 chromosomes were synthesized and infertile allopolyploid hybrids with 150 chromosomes, named the “XiangYun” crucian carp, were massively propagated by crossing with natural tetraploid white crucian carp with 100 chromosomes.In addition, some fertile allopolyploid hybrids with 148 chromosomes were obtained in interspecific hybridization between red crucian carp with 100 chromosomes and bluntnose black bream with 48 chromosomes.Since the 1980s, European farmers preferred to culture all-female XX diploids of rainbow trout to solve the problems resulting from early sexual maturation of males at one year old.However, female diploids mature sexually when they are 14–16 months old and weigh about 450 g, which cannot satisfy the increasing consumer demands for fresh fillet and smoked fillet, since 1.2 kg or 2.5–3.0 kg fish are required, respectively.Subsequently, through thermal shock, hydrostatic pressure treatment, or interploid crossing, farmers and breeders successfully produced triploid trout with delayed gonadal development to produce bigger commercial fish before maturation.However, gonadal development in male triploid rainbow trout does not change much and growth is not improved.For these reasons, most European farmers prefer all-female triploid rainbow trout using a combination of pressure treatment or thermal shock with sex reversal and all-female allotriploid hybrids between rainbow trout and amago salmon, rainbow trout and Japanese charr, rainbow trout and masu salmon, as well as chum salmon and Japanese char since 1990s.However, the domestic trout production in the USA is still largely diploid because most trout are harvested prior to the onset of sexual maturation and so triploids do not provide any great advantage.Since natural tetraploid salmon and trout have diploidized and are commonly thought of as diploids, the artificially induced autopolyploids or allopolyploids are generally called triploids or triploid hybrids.Owing to their easier mass production from natural polyploid species than from common diploid species, and due to their better growth, survival and flesh quality than their original counterparts, these induced autopolyploids or allopolyploids in Cyprinidae and Salmonidae are commercially exploited.Over the past 3 decades, many natural polyploid aquatic animals and some artificially induced autopolyploids or allopolyploids have been utilized in aquaculture.To further explore and utilize natural and induced polyploids in aquaculture, we think that several basic and applied aspects should be emphasized in the future: Along with genome duplication or alien genome incorporation, many important biological issues including the cooperation between the incorporated genome and original genomes, the fate and role of duplicated genes in the offspring must be further studied in these polyploid aquatic animals. 
Complete genome sequencing and analysis of genome architecture should be carried out in these aquaculture polyploids despite its complexity. As an important genetic breeding biotechnology, it is also essential to develop more efficient and predictable techniques for producing artificially induced polyploids or synthetic polyploids in aquaculture animals. The difference in biological characteristics and aquaculture performances should be further analyzed between artificial polyploids and their diploid counterparts.
For this reason, numerous species of natural polyploid fishes, such as common carp, gibel carp, crucian carp, salmon, and sturgeon, were chosen as important target species for aquaculture.Many artificial polyploids have been commercially utilized for aquaculture and most of them were created from natural polyploid fishes of the Cyprinidae and Salmonidae.Thanks to the easy mass production and better economic traits in growth and flesh quality, the synthetized autopolyploids or allopolyploids from natural polyploid species in cyprinid fishes have been extensively applied to aquaculture throughout China.This review outlines polyploidy advantages and innovative opportunities, lists natural polyploid species used in aquaculture, and summarizes artificial polyploids that have been induced or synthetized, and used in aquaculture.Moreover, some main research trends on polyploid utilization and ploidy manipulation of aquaculture animals are also introduced and discussed in the review.
for the benefits of emergent role distribution, an interesting question for future research is how emergent and assigned role distributions relate to each other.For instance, in the present task role differentiation was not strictly required, whereas Richardson and colleagues’ task necessitated role distribution.It is possible that a strong need for role differentiation eliminates or reduces any negative effects of mutual feedback.This could be tested by investigating how followers’ behavior changes as a consequence of their belief about leaders’ ability to adapt to them, or their knowledge about instructions given to leaders’.Finally, it is important to note that role assignment in the context of reciprocal information flow specifically affected incongruent trials that involved a trade-off between spatial and temporal coordination.As the congruent task demands were much easier to deal with, synchronization on congruent trials was high across all three experiments, confirming that performance in these trials is not predictive of coordination when temporal and spatial dimensions of co-actors’ movements are incongruent.This reveals that coordination tasks involving trade-offs between spatial and temporal aspects are especially important for understanding how social context affects performance limits in joint action.Indeed, many of the joint actions we engage in cannot be performed without balancing different coordination demands.The present study reveals that trade-offs between spatial and temporal aspects of coordination can be managed both by mutually predicting and adjusting to each other’s actions, and by following a clear task distribution in terms of Leader-Follower.Questions for future research include whether some forms of coordination can only be achieved through mutual prediction and adaptation, and how interaction partners deal with other trade-offs, such as achieving high speed versus high accuracy.
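One way to make the spatial–temporal trade-off concrete is to compute a temporal asynchrony measure alongside a spatial (velocity-profile) fidelity measure. The sketch below uses invented data and illustrative metrics; it is not the authors' analysis pipeline.

```python
# Illustrative coordination metrics for two task partners tracing different shapes.
import numpy as np

rng = np.random.default_rng(1)
n_cycles = 20
partner_a_ends = np.cumsum(rng.normal(1.0, 0.05, n_cycles))        # s, hypothetical
partner_b_ends = partner_a_ends + rng.normal(0.0, 0.03, n_cycles)  # s, hypothetical

asynchrony = np.abs(partner_b_ends - partner_a_ends)               # temporal coordination
print(f"mean |asynchrony| = {asynchrony.mean() * 1000:.0f} ms")

# Spatial fidelity: correlation between the velocity profile implied by a partner's
# own shape and the profile actually produced (both hypothetical here).
target_velocity = np.sin(np.linspace(0, np.pi, 100))
produced_velocity = target_velocity + rng.normal(0, 0.1, 100)
fidelity = np.corrcoef(target_velocity, produced_velocity)[0, 1]
print(f"velocity-profile fidelity r = {fidelity:.2f}")
```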
Many joint actions require task partners to temporally coordinate actions that follow different spatial patterns.This creates the need to find trade-offs between temporal coordination and spatial alignment.To study coordination under incongruent spatial and temporal demands, we devised a novel coordination task that required task partners to synchronize their actions while tracing different shapes that implied conflicting velocity profiles.In three experiments, we investigated whether coordination under incongruent demands is best achieved through mutually coupled predictions or through a clear role distribution with only one task partner adjusting to the other.Participants solved the task of trading off spatial and temporal coordination demands equally well when mutually perceiving each other's actions without any role distribution, and when acting in a leader-follower configuration where the leader was unable to see the follower's actions.Coordination was significantly worse when task partners who had been assigned roles could see each other's actions.These findings make three contributions to our understanding of coordination mechanisms in joint action.First, they show that mutual prediction facilitates coordination under incongruent demands, demonstrating the importance of coupled predictive models in a wide range of coordination contexts.Second, they show that mutual alignment of velocity profiles in the absence of a leader-follower dynamic is more wide-spread than previously thought.Finally, they show that role distribution can result in equally effective coordination as mutual prediction without role assignment, provided that the role distribution is not arbitrarily imposed but determined by (lack of) perceptual access to a partner's actions.
lines.Taken together with the results of the current study, that has been unable to detect any evidence of off-target effects, we are confident that, although the chances of off target events are still present, the dangers of off-target site cutting by the CAS9 enzyme when co-injected as mRNA with gRNA are greatly exaggerated and do not constitute any more of a risk than that encountered using ES cell targeting approaches to deleting genome sequences.From these and other studies it is clear that CRISPR/CAS9 technology for deleting specific sequences from the genome will revolutionise our understanding of the genome and its role in health and disease thanks to the availability of the whole genome sequence of hundreds of vertebrate genomes and the ease and speed of CAS9/CRISPR genome editing.Although CRISPR/CA9 mediated sequence deletion is rapid and highly efficient, the introduction or alteration of sequences within the mouse genome using 1-cell embryos still remains a challenge.This is because these approaches require the co-injection of a DNA “repair template” designed to trigger the homologous end joining repair pathway in the cell to introduce targeted insertions or mutations.There may be several reasons why this approach is less effective that the deletion method described in the current study.The first is that, because of its relative toxicity compared to RNA, DNA decreases the viability of 1-cell embryos following microinjection.Secondly, it is essential that the repair template be injected into the pronucleus of the 1-cell mouse embryo which is trickier than microinjecting into the cytoplasm and results in reduced embryo viability.Thirdly, because of the perceived problem of off-target effects, these repair templates are most often injected with the mRNA of the mutated “nickase” version of CAS9 that only cuts one strand of the DNA target site thus only inducing the homologous repair pathways within the cell.However, the nickase enzyme is nearly an order of magnitude less efficient than the wild type CAS9 protein.One of the most important challenges that must be addressed when attempting to produce subtle mutations using CRISPR technology is that the injection of repair template, that involves the introduction of a glass needle into the pronucleus of the embryo, does not compromise the integrity of the embryo genome and only produces the designed outcome.If this integrity could be assured and the efficiency of homology directed repair increased, then the future for biology and CRISPR genome editing will be extremely bright.Despite these problems the use of targeted genome deletions using CAS9/CRISPR technologies is tremendously exciting and promises to revolutionise our understanding of the role of tissue specific enhancers in the cell specific regulation of neuropeptides and their receptors.We have used comparative genomics to rapidly identify highly conserved tissue specific enhancers of genes encoding neuropeptides that often lie at considerable distances from the start sites of these genes.Being able to delete these enhancers from the mouse genome using CAS9/CRISPR technology allows us to span the huge gap between in-vitro analysis of these enhancers and allows us, for the first time, to understand the role of these enhancers in-vivo.This novel ability will revolutionise our understanding of the regulation of neuropeptides and will permit a greater understanding of the roles of genetic and epigenetic variation in altering neuropeptide gene regulation in health and disease.
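The kind of bioinformatic identification of putative off-target sites referred to here can be illustrated with a simple mismatch scan. The guide and genomic sequences below are invented, the three-mismatch cut-off is a common convention rather than the study's criterion, and a real analysis would use dedicated tools against the full genome before designing PCR primers around each candidate site.

```python
# Toy off-target screen: find NGG-PAM sites whose 20-nt protospacer differs from
# the guide by at most a chosen number of mismatches. Sequences are hypothetical.
import re

guide = "GACGTTACCGGATCAAGCTT"                     # hypothetical 20-nt gRNA spacer
genome = ("TTGACGTTACCGGATCAAGCTTAGGACCA"          # hypothetical genomic fragment
          "GACGTAACCGGATCTAGCTTCGGTTTAAA")

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

candidates = []
for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", genome):   # 20-mer followed by NGG PAM
    proto = m.group(1)
    mm = mismatches(guide, proto)
    if mm <= 3:                                              # typical off-target cut-off
        candidates.append((m.start(), proto, mm))

for pos, proto, mm in candidates:
    print(f"pos {pos}: {proto} ({mm} mismatches) -> design PCR primers around this site")
```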
We have successfully used comparative genomics to identify putative regulatory elements within the human genome that contribute to the tissue specific expression of neuropeptides such as galanin and receptors such as CB1.However, a previous inability to rapidly delete these elements from the mouse genome has prevented optimal assessment of their function in-vivo.This has been solved using CAS9/CRISPR genome editing technology which uses a bacterial endonuclease called CAS9 that, in combination with specifically designed guide RNA (gRNA) molecules, cuts specific regions of the mouse genome.However, reports of “off target” effects, whereby the CAS9 endonuclease is able to cut sites other than those targeted, limits the appeal of this technology.We used cytoplasmic microinjection of gRNA and CAS9 mRNA into 1-cell mouse embryos to rapidly generate enhancer knockout mouse lines.The current study describes our analysis of the genomes of these enhancer knockout lines to detect possible off-target effects.Bioinformatic analysis was used to identify the most likely putative off-target sites and to design PCR primers that would amplify these sequences from genomic DNA of founder enhancer deletion mouse lines.Using this approach we were unable to detect any evidence of off-target effects in the genomes of three founder lines using any of the four gRNAs used in the analysis.This study suggests that the problem of off-target effects in transgenic mice have been exaggerated and that CAS9/CRISPR represents a highly effective and accurate method of deleting putative neuropeptide gene enhancer sequences from the mouse genome.
move outside of a cut-off radius of 0.1 nm from the nearest lattice site.Fig. 8 represents these data, excluding the PKA energy 2 keV, producing a trend line with a gradient 16 keV−1, corresponding to Ed = 25 eV with κ = 0.8.The function is continuous at E = 2Ed, and so is its first differential.The dependence of damage on the square root of energy near the threshold appears to be physically reasonable, since this is a measure of the momentum available to create damage once the threshold is breached.The displacement efficiencies, κ2 and κ3 for SKAs and TKAs, respectively, are well below the spherical average of 0.37 implied by the approximate fit in Fig. 6, but this could be because the collision processes that give rise to these knock-on events filter out directions which are more favourable to defect production.This MD simulation has found the room temperature displacement threshold to be 25 eV, increasing to 30 eV at 900 K, reproducing reasonably well the threshold for vacancy production measured by a direct technique involving temperatures in the region of 900 K.At 60 eV and above divacancies are produced, including interlayer divacancies for which there is direct experimental evidence.These are species that bridge the graphite gap, inhibit interlayer shear, and potentially buckle the graphite layers.The data suggest a new, continuous damage function, where the threshold region depends on the square root of the PKA energy in excess of the threshold, evolving to a linear dependence on PKA energy.Near the threshold, the constraint on displacement appears to be from momentum considerations, but at higher energies this constraint operates only through the displacement efficiency.The new function can be viewed as being constructed from three individual displacement functions of the same nature from three generations of knock-on events.A detailed analysis of keV-cascades in graphite has already shown the long predicted absence of thermal spike effects, and the relevance of the binary collision approximation, which are both also confirmed here.No physically meaningful sub-threshold defects or processes are observed for 20 eV or below; only the sp3–sp3 link forms, which is an artefact of the interatomic potential used.Nevertheless, some displacements give rise to intimate FPs at the threshold and above.Their collapse could return to any of four states: graphite, D defects, atom interchange, or the unphysical sp3–sp3 link.
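One piecewise functional form consistent with the description above (square-root growth in the threshold region, NRT-like linear growth beyond 2Ed, with the function and its first derivative continuous at E = 2Ed) is sketched below. This is a reconstruction under those stated constraints, not necessarily the authors' exact expression; Nd denotes the number of displacements produced by a PKA of energy E. The quoted trend-line gradient follows from κ = 0.8 and Ed = 25 eV.

```latex
% Reconstructed damage function (requires amsmath). Both branches give N_d = \kappa
% at E = 2E_d, and both have slope \kappa/(2E_d) there, so the function and its
% first derivative are continuous at the crossover, as stated in the text.
\[
N_d(E) =
\begin{cases}
0, & E < E_d,\\[4pt]
\kappa\sqrt{\dfrac{E - E_d}{E_d}}, & E_d \le E \le 2E_d,\\[8pt]
\dfrac{\kappa E}{2E_d}, & E > 2E_d,
\end{cases}
\qquad
\left.\frac{\mathrm{d}N_d}{\mathrm{d}E}\right|_{E>2E_d}
= \frac{\kappa}{2E_d}
= \frac{0.8}{2\times 0.025\ \text{keV}}
= 16\ \text{keV}^{-1}.
\]
```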
The environment dependent interatomic potential (EDIP) including Ziegler-Biersack-Littermark (ZBL) interactions for close encounters is applied to cascades starting from a host atom and from an interstitial atom.We find the room temperature displacement threshold to be 25 eV, increasing to 30 eV at 900 K. The latter correlates well with the measured threshold for vacancy production.Additionally, divacancy production is found to occur, including interlayer divacancies from around 60 eV.The data suggest a new, continuous damage function applies, where the threshold region depends on the square root of the primary knock-on atom (PKA) energy in excess of the threshold, evolving to a linear dependence on PKA energy.
.Predicting bacterial terpene synthases is very challenging, but extensive HMM analysis of the Pfam database can be applied to identify new terpene synthases and test them in production systems .Once the monoterpene synthase of interest has been identified, it must be brought into genomic context by choosing the appropriate chassis, usually yeast or E. coli.Other host organisms engineered for the production of monoterpenoids include Corynebacterium glutamicum and Pseudomonas putida , which were developed for the production of pinene and geranic acid, respectively.In addition, the Gram-positive bacterium Bacillus subtilis, which is already widely used in biotechnological applications, has recently been promoted as a potential platform for the general production of terpenoids, although to date there are no published instances of mTS/C production in this species .The next step is the design of intrinsic regulation within the engineered biosynthetic gene cluster, where regulatory parts need to be selected carefully in order to reach the maximal efficiency of the selected parts .It has been demonstrated for limonene-producing E. coli strains that production is highly dependent on the number of plasmids per cell, which can be modulated by changing the selective pressure using different antibiotics concentrations .In yeast, inserting pathways on the chromosome has been shown to increase diterpenoid production up to threefold, and similar effects would be expected for monoterpenes/monoterpenoids .In addition, genomic insertion would help in reducing biological variation, making the whole system more productive, which was demonstrated also in E. coli, where a threefold increase of production levels was observed for the tetraterpene, β-carotene .With the emergence of the CRISPR-Cas9 technology, genome editing on a large scale has become more timely and affordable .This technology allows biosystems engineers to insert de novo synthesized genes of up to 8 kbp and produce knock-outs of up to 18 kbp on the E. coli chromosome .Other production chassis, such as S. cerevisiae , C. glutamicum and Streptomyces sp. can be CRISPR-Cas9 genome edited in a similar fashion.Additionally, various conventional methods of genome editing can be employed in Pseudomonas putida and many other potential microbial production hosts .The new opportunities created by the CRISPR/Cas technology have been strikingly demonstrated by engineering yeast for the production of farnesol, a sesquiterpene, which could not be produced if the pathway was encoded on a plasmid .For E. 
coli it has been demonstrated that limonene is converted spontaneously to its toxic hydroperoxide form, causing severe growth retardation .A natural point mutation in the gene for alkyl-hydroperoxidase decreased the formation of limonene hydroperoxide, resulting in improved limonene tolerance.Targeted genome editing will play a considerable role in engineering tolerant strains for improved production.Another strategy to overcome general cytotoxic effects of chemicals produced in a production host is the compartmentalization of the pathway, thus reducing the active concentration and intrinsic toxicity of the produced chemical or the pathway intermediates.Suitable compartments that are being explored for this purpose include peroxisomes in yeast and proteinaceous micro-compartments in bacteria .The synthetic biology of monoterpene/monoterpenoid production has already made substantial progress in recent years, promising sustainable and economically viable new routes to industrial-scale production of these valuable chemicals.However, this is only the beginning: in the near future, we expect to see new computational tools identifying even more genes to add to the monoterpene/monoterpenoid diversification toolbox; advances in metabolomics and proteomics that will more rapidly identify bottlenecks in engineered biosynthetic pathways; progress in directed protein evolution that will increase product purity and chemical diversity; and ever faster and more robust genome editing techniques that will facilitate the rapid and automated introduction and combinatorial assembly of biosynthetic pathway variants into tailor-made high-performance industrial chassis strains.Together, these tools will enable a profound transformation in the bio-industrial production of an increasingly diverse range of monoterpenes and their derivatives.Papers of particular interest, published within the period of review, have been highlighted as:• of special interest,•• of outstanding interest
Synthetic biology is opening up new opportunities for the sustainable and efficient production of valuable chemicals in engineered microbial factories.Here we review the application of synthetic biology approaches to the engineering of monoterpene/monoterpenoid production, highlighting the discovery of novel catalytic building blocks, their accelerated assembly into functional pathways, general strategies for product diversification, and new methods for the optimization of productivity to economically viable levels.Together, these emerging tools allow the rapid creation of microbial production systems for a wide range of monoterpenes and their derivatives for a diversity of industrial applications.
The proposed dataset is composed of seven tables in csv format.Table “activity” lists the data related to activities.In addition to the field dedicated to a unique identifier, the following two fields express the cost of activities: a fixed cost and a variable cost.PV is offered every year, but some costly activities can be assigned to a particular child only once.The number of children per activity is also limited and a minimal number of children is required to open an activity.Minage and maxage express the minimal required age and the maximal age for each activity respectively.The similarity field is used to group activities together in order to restrict the number of similar activities assigned to the same child.Activities can be organized several times during the entire PV period.Each occurrence of an activity is described in the “occurrence” table.In addition to its identifier, its associated activity may be retrieved by the idactivity field.The beginning and end of each occurrence are given.Some activities are organized over several days: the values of next and previous are respectively the identifier of the next and the previous day.For activities that require only one occurrence, their value is the identifier of the occurrence itself.The inactive field is for cancelled occurrences."Information specific to children is grouped in the child's table.Each date of birth has been changed to the first day of the month and identifiers have been set from 1 to the number of children, for anonymization purposes.So the dataset can be considered as fully anonymized.Some children want to participate in the PV with friends.For such cases, the “knome” table lists all the pairs participating together, referenced by their id.The preferences expressed by children for activities can be found in table “preference”.For each day of the PV period, a child must specify a set of up to four activities, each of which is evaluated with a priority value between 1 and 4.The occurrence concerned will therefore receive one of the numbers from 1 to 4.The reference to the child is idchild and a unique identifier for the expressed preference is idpreference.Some “lifelong” activities are prohibited for children who have already obtained them during previous PVs.Therefore, all past assignments of such activities during the previous PV are kept in the “lifetime” table which is composed of the identification of a child and the identification of an activity.Table “period” limits the number of activities assigned over a given period for each child.It is composed of an identifier, a start and end time and a maximum number of assignments per child.The PV organizers construct the activity and occurrence tables to reflect all available activities.The lifetime table come from previous PVs, which lists the lifelong activities formerly assigned to children.The period table is defined by the organizers to specify the age category.Tables “child”, “preference” and “knome” are provided using an online registration form, which is open for nearly 4 months before the event .This dataset contains 1121 activities, 634 children, which leads to 16621 ranked preferences.A linear programming model using this dataset is available in .
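A toy version of the assignment model implied by these tables can be written as a small integer program. The sketch below (PuLP, with invented children, occurrences, capacities and an assumed priority-to-score mapping) is only a stand-in for the full linear programming model referenced above, and it replaces the period and lifetime constraints with a single per-child cap for brevity.

```python
# Toy assignment model: binary x[child, occurrence], reward higher-priority
# preferences, respect occurrence capacity and a simple per-child limit.
import pulp

children = ["c1", "c2", "c3"]
occurrences = {"o1": 2, "o2": 1}                   # occurrence -> capacity (hypothetical)
# Expressed preferences: priority 1 (best) .. 4 (worst); missing = not requested.
priority = {("c1", "o1"): 1, ("c2", "o1"): 2, ("c2", "o2"): 1, ("c3", "o2"): 1}
score = {1: 4, 2: 3, 3: 2, 4: 1}                   # assumed priority-to-reward mapping

prob = pulp.LpProblem("passeport_vacances_toy", pulp.LpMaximize)
x = {(c, o): pulp.LpVariable(f"x_{c}_{o}", cat="Binary") for (c, o) in priority}

prob += pulp.lpSum(score[p] * x[c, o] for (c, o), p in priority.items())

for o, cap in occurrences.items():                 # group-size limit per occurrence
    prob += pulp.lpSum(x[c, oo] for (c, oo) in priority if oo == o) <= cap
for c in children:                                 # toy stand-in for per-period caps
    prob += pulp.lpSum(x[cc, o] for (cc, o) in priority if cc == c) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(c, o) for (c, o), var in x.items() if var.value() == 1])
```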
“Passeport Vacances”, abbreviated PV, is a set of leisure activities proposed to children to discover and enjoy during school holidays.During PV, activities are proposed several times, each one being an occurrence.This data set contains real data, collected by online registration during the summer of 2017.Children express their preferences for each available time slot.Organizers should assign activities to children by maximizing their expressed preferences, subject to several types of constraints: age limit, group size limit for each occurrence of an activity, diversification of the type of activities for each child, restrictions on costly activities, restrictions on the number of activities per period, and cost balancing.The CSV files in this data set represent the preferences of 634 children for 1121 activities over a two-week period.These data were used to develop the Morges 2017 Vacation Passport model, which is associated with the research article entitled ““Passeport Vacances”: an assignment problem with cost balancing” Beffa and Varone, 2018.
1H NMR, 13C NMR, DEPT, HSQC, 1H-1H COSY, HMBC, NOESY, HRESIMS, and IR spectra of Ganodermanol A–H, together with the Mo2(AcO)4-induced CD spectrum of Ganodermanol A and the CD spectra of Ganodermanol D–E, were presented in Figs. S1–109; the cytotoxicities and anti-HIV-1 activity of isolated compounds were presented in Tables S1 and S2.Optical rotations were measured on a Perkin-Elmer Model-343 digital polarimeter.The CD spectra and ORD spectra were recorded on a JASCO J-815 spectropolarimeter.IR spectra were acquired on a Nicolet 5700 FT-IR microscope spectrometer.1D and 2D NMR spectra were obtained on Bruker AVIIID 400/500/600 spectrometers.Chemical shifts are given in ppm, and coupling constants are given in hertz.HRESIMS data were measured using an ESI-FTICR-MS.Silica gel and Sephadex LH-20 gel and MCI gel were used for column chromatography.Semi-preparative reversed phase and normal phase HPLC were performed on a Shimadzu HPLC instrument equipped with a Shimadzu RID-10A detector and a Shiseido Capcell Pak C18 column by eluting with mixtures of methanol and H2O at 4.0 mL/min, or a YMC silica column by eluting with mixtures of n-hexane and EtOAc or n-hexane and isopropyl alcohol at 4.0 mL/min, respectively.Analytical TLC was carried out on pre-coated silica gel GF254 plates, and spots were visualized under UV light or by spraying with 5% H2SO4 in EtOH followed by heating at 120 °C.The cytotoxicity of the compounds against the human cancer cell lines was measured using the MTT assay.Briefly, the cells were maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum, 100 units/mL penicillin, and 100 μg/mL streptomycin.Cultures were incubated at 37 °C in a humidified atmosphere of 5% CO2.Tumor cells were seeded in 96-well microtiter plates at 1200 cells/well.After 24 h, compounds were added to the wells.After incubation for 96 h, cell viability was determined by measuring the metabolic conversion of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) into purple formazan crystals by viable cells.The MTT assay results were read using an ELISA reader at 570 nm.All compounds were tested at five concentrations in 100% DMSO with a final concentration of DMSO of 0.1% in each well.Paclitaxel was used as a positive control.Each concentration of the compounds was tested in three parallel experiments.IC50 values were calculated using Microsoft Excel software.293T cells were co-transfected with 0.6 μg of pNL–Luv-E−–Vpu− and 0.4 μg of pHIT/G.After 48 h, the VSV-G pseudotyped viral supernatant was harvested by filtration through a 0.45 μm filter and the concentration of viral capsid protein was determined by p24 antigen capture ELISA.SupT1 cells were exposed to VSV-G pseudotyped HIV-1 at 37 °C for 48 h in the absence or presence of test compounds.The inhibition rate was determined by using a firefly luciferase assay system.
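The IC50 estimation step (performed in Excel in the study) can be sketched with a four-parameter logistic fit; the concentrations and viability values below are invented and serve only to illustrate the calculation, not to reproduce the reported results.

```python
# Four-parameter logistic (4PL) fit of a dose-response curve to estimate IC50.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 1, 10, 100, 1000])          # uM, five test concentrations (hypothetical)
viability = np.array([98, 92, 70, 35, 12])        # % of control, MTT read-out (hypothetical)

def four_pl(c, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1 + (c / ic50) ** hill)

params, _cov = curve_fit(four_pl, conc, viability, p0=[0, 100, 20, 1], maxfev=10000)
bottom, top, ic50, hill = params
print(f"IC50 = {ic50:.1f} uM (Hill slope {hill:.2f})")
```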
The data included in this paper are associated with the research article entitled “Sesquiterpenoids from the cultured mycelia of Ganoderma capense” [1].1H NMR, 13C NMR, DEPT, HSQC, 1H–1H COSY, HMBC, NOESY, HRESIMS, and IR spectra of Ganodermanol A–H (1–11), together with Mo2(AcO)4-induced CD spectrum of Ganodermanol A, CD spectra of Ganodermanol D–E were included in the Data in Brief article.In addition, the cytotoxicities and anti-HIV-1 activity of isolated compounds were also included in the Data in Brief article.
autopolyubiquitination of TRAF6 itself, which enhances autophosphorylation of TAK1 and subsequently activates NF-κB.Unlike Lys48-linked polyubiquitination, which is operational in ubiquitin–proteasome-dependent protein degradation, Lys63-linked polyubiquitination serves to modulate a signaling activity of the polyubiquitination-bearing molecule.We found that lansoprazole increased the NF-κB-responsive luciferase reporter activity.Introduction of a dominant-negative TRAF6 inhibited the effect of lansoprazole by 60.8/1.308) ± 13.9%.As these experiments pointed to the notion that TRAF6 is likely to be a direct target of lansoprazole, we next performed a ubiquitination assay of TRAF6.Upon overexpression of TRAF6 and BMP-2 induction, lansoprazole enhanced TRAF6-anchored polyubiquitination in HEK293 cells and primary osteoblasts.Furthermore, lansoprazole induced TRAF6-anchored polyubiquitination in a serum-free medium and without BMP-2 induction, which suggested that TRAF6 was likely to be a direct target molecule activated by lansoprazole.We also found that lansoprazole-mediated TRAF6-anchored polyubiquitination was indeed linked to Lys63 by immunoblotting.TRAF6 can bind to BMP type II receptor in the absence of BMPRI.On the other hand, TRAF6 can bind to BMPRI only when a BMPRI-II complex is formed by BMPs.As lansoprazole enhanced polyubiquitination of TRAF6 even in the absence of BMP ligands, we examined if lansoprazole is able to activate TRAF6 tethered on BMPRII.To this end, we added a TRAF6-inhibitory peptide, which functions as a TRAF6 decoy by binding to the TRAF6-binding motif on BMPRI.As expected, we found that the TRAF6-inhibitory peptide did not compromise lansoprazole-mediated activation of RUNX2 P1 promoter activity.Lansoprazole is thus likely to operate on BMPRII-engaged TRAF6 in the absence of BMP ligands.TRAF6 can bind to BMP type II receptor in the absence of BMPRI.On the other hand, TRAF6 can bind to BMPRI only when a BMPRI-II complex is formed by BMPs.As lansoprazole enhanced polyubiquitination of TRAF6 even in the absence of BMP ligands, we examined if lansoprazole is able to activate TRAF6 tethered on BMPRII.To this end, we added a TRAF6-inhibitory peptide, which functions as a TRAF6 decoy by binding to the TRAF6-binding motif on BMPRI.As expected, we found that the TRAF6-inhibitory peptide did not compromise lansoprazole-mediated activation of RUNX2 P1 promoter activity.Lansoprazole is thus likely to operate on BMPRII-engaged TRAF6 in the absence of BMP ligands.To prove that the TRAF6 autopolyubiquitination is indeed a target of lansoprazole, we examined the effect of lansoprazole by an in vitro ubiquitination assay.Unexpectedly, however, lansoprazole failed to enhance the synthesis of TRAF6-anchored polyubiquitination.As binding of a small compound to a specific domain of a target molecule mostly inhibits the target molecule rather than activating it, we searched for a molecule that antagonizes TRAF6 polyubiquitination.We found that a deubiquitination enzyme, CYLD, specifically cleaves Lys63-linked polyubiquitin chains, and downregulates TRAF6-mediated signal transduction, which has been characterized in NF-κB activation.As has been previously reported, CYLD was able to cleave unanchored polyubiquitin chains but not TRAF6-anchored ones in vitro.We then examined the effect of lansoprazole on CYLD using unanchored polyubiquitin chains, and found that the cleavage was inhibited by pretreatment of lansoprazole in a dose-dependent manner.To prove that the TRAF6 
We next asked whether lansoprazole is able to bind to CYLD. An in silico search for ligand-binding sites of CYLD disclosed a unique pocket. The pocket was located across the region where the C-terminal tail of ubiquitin was predicted to lie, according to structural alignment of CYLD with the homologous HAUSP/USP7, which was crystallized with ubiquitin. Optimization of the docking structures predicted that lansoprazole is linked to CYLD by a hydrogen bond, σ–π conjugation, and π–π interactions. The docked structure model suggests that lansoprazole suppresses the deubiquitination activity of CYLD by inhibiting the binding of the C-terminal tail of ubiquitin to CYLD; the active site of CYLD is predicted to be located where the C-terminal tail of ubiquitin ends. To prove that lansoprazole binds to the predicted pocket of CYLD and inhibits its deubiquitination activity, we next employed the Ubc13–Uev1a complex as an E2 enzyme, which specifically catalyzes the formation of unanchored Lys63-linked polyubiquitin chains. We first confirmed that lansoprazole attenuates wild-type CYLD-mediated cleavage of the polyubiquitin chains. As simulation of serial alanine substitutions in the identified CYLD pocket predicted that R758 and F766 were essential residues for the binding of lansoprazole, we engineered an R758A single-mutant CYLD and an R758A/F766A double-mutant CYLD. Each mutant CYLD retained a dose-dependent deubiquitination activity in vitro, whereas the single mutant partly and the double mutant completely abolished lansoprazole-mediated suppression of deubiquitination activity. Thus, the mutations exclusively affected responsiveness to lansoprazole, not the deubiquitination activity of CYLD in the absence of lansoprazole. These results indicated that specific binding of lansoprazole to the CYLD pocket prevents the C-terminal tail of ubiquitin from reaching the active site, which in turn facilitates TRAF6-mediated polyubiquitination.
We found by in cellulo ubiquitination studies that lansoprazole enhances polyubiquitination of the TNF receptor-associated factor 6 (TRAF6) and by in vitro ubiquitination studies that the enhanced polyubiquitination of TRAF6 is attributed to the blocking of a deubiquitination enzyme, cylindromatosis (CYLD).Structural modeling and site-directed mutagenesis of CYLD demonstrated that lansoprazole tightly fits in a pocket of CYLD where the C-terminal tail of ubiquitin lies.
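Since the dominant-negative TRAF6 experiment above is summarised as a percentage inhibition of the lansoprazole-induced reporter response, a brief worked sketch of that calculation may help. The luciferase readings below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch (hypothetical values): percent inhibition of the lansoprazole-induced
# NF-kB reporter response by dominant-negative TRAF6, computed per replicate.
import statistics

def percent_inhibition(vehicle, lanso, lanso_dn):
    """Inhibition of the drug-induced increment, not of the raw signal."""
    induced = lanso - vehicle        # lansoprazole-induced increment
    residual = lanso_dn - vehicle    # increment remaining with DN-TRAF6
    return 100.0 * (1.0 - residual / induced)

# Hypothetical luciferase readings (arbitrary units): (vehicle, lansoprazole, lansoprazole + DN-TRAF6)
replicates = [
    (1.00, 2.60, 1.65),
    (1.00, 2.40, 1.50),
    (1.00, 2.80, 1.70),
]

values = [percent_inhibition(*r) for r in replicates]
print(f"inhibition = {statistics.mean(values):.1f} +/- {statistics.stdev(values):.1f} %")
```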
of medicines and food. In a conflict zone such as Turtuk, the interdependence of the Indian Army and the local population is deeply embedded for survival. Most small towns and villages are cut off from the main supply routes for up to six months in a year. Even during the regular winter season, the Army provides all emergency care to the local population, and it is the same Army which shows up first in any disaster incident. The Army is well equipped and prepared for handling emergencies in a way that the civilian authorities cannot match, either financially or logistically. Whether it is jobs, medical care, food supplies, school education for young children or medical emergencies, it is the Indian Army which remains at the forefront. The Indian Government's policy on compensation for disasters needs reviewing, as people are dissatisfied with current levels. Local people must be trained in rescue and relief work. Religious leaders can also be instrumental because most of the people in Turtuk are religious. They perform many rituals for protection from environmental disasters, and people have faith in them. People can also use their indigenous knowledge and traditional methods along with modern equipment to enhance DRR. The need of the hour is to prepare a risk-sensitive land use plan for Turtuk village to control spontaneous growth and tackle the impacts of tourism and global climate change. Future research should cover other mountain communities on the other side of the LOC, building on the work of Azhar-Hewitt and Hewitt. Findings from such research would be useful for the development of an integrated disaster and conflict resilient master plan for the HKH region and could contribute to achieving the UN Sustainable Development Goals and the Sendai Framework for Disaster Risk Reduction. Indigenous mountain communities in the Hindu-Kush Himalaya region suffer frequent disasters and economic/political marginalisation, especially in border conflict zones. This paper aimed to understand community perception of risk and vulnerability to environmental hazards in a remote border conflict zone in Ladakh. Turtuk lies beside the Shyok River and near the LOC between India and Pakistan. The case study area is frequently affected by flooding, rock falls, landslides and border conflict, and has high earthquake risk. India took control of Turtuk from Pakistan in 1971. Restricted movement across the LOC further isolates the community. This work relies entirely on primary data. Using participatory rural appraisal (PRA) tools, community stakeholder groups of local men, women and girls, administrative officials and political/religious leaders drew maps. Hazard maps depicted the location of settlements, fields/orchards, amenities, rivers/streams and mountains, and indicated flood, rock-fall and landslide hazard areas. Dream maps depicted groups' aspirations to decrease vulnerability to hazards and improve their lives. Meanwhile, specialists prepared a geological hazard map by integrating field-based investigations with remote sensing data. The specialist- and community-produced hazard maps matched in the location of high-risk areas for flood, landslide and rock-fall and in the potential for damage to settlements, infrastructure and agriculture, though community awareness of risk from earthquakes and poor-quality construction was low. Community members were aware that the positioning of essential government facilities on the flood-prone river plain, poorly constructed bridges close to the water, and climate change increased their
vulnerability. However, belief in the effectiveness of proposed mitigation strategies, such as concrete levees for flood control and fences to prevent rock-fall damage, probably surpassed the actual capacity of these measures to prevent damage. Lack of availability of flat, lower-risk land for building, resource limitations and lack of awareness of locally appropriate low-cost options prevent the implementation of disaster risk mitigation. A disaster preparedness plan is needed which should cover: monitoring hazards and climate change, shifting emergency infrastructure to lower-risk areas, and training engineers and masons in, and implementing, appropriate regulations on disaster-resilient building. It should also include awareness-raising on appropriate low-cost community DRR strategies, sustainable tourism development, search and rescue training, road building and improved phone/internet connections, the initiation of peace talks to resolve border conflicts, and the building of levees, slope stabilisation and protective barriers against rock-fall where appropriate and feasible. At the local scale, this research demonstrates how a particular community deals with extreme hazards and conflicts in a mountainous environment. At the national scale, it promotes awareness of the value of risk perception studies by incorporating participatory maps into the gazetted land-use master plans, and traditional cultural knowledge into DRR initiatives. At the regional and global scales, this work provides an understanding of the root causes of disaster vulnerability and the characteristics required by a community to tackle them. Scrutinising the various components of environmental disasters by applying the proposed method represents an advancement and an original contribution to the existing body of knowledge in the DRR field. Explicitly, this paper fills gaps linked to risk communication between indigenous mountain people and decision makers, to the interplay of cultures and disasters, and to tackling catastrophe in fragile and conflict-affected contexts.
This study aims to understand community risk perception of environmental hazards in a border conflict zone in high-mountain areas. The villagers were able to identify various environmental hazards and associated risk zones through participatory timeline diagrams and hazard and dream mapping exercises. They apply indigenous knowledge to deal with the adverse climate and calamities. The technique of analysing community vulnerability in the context of conflict and disasters by applying qualitative PRA tools and validating the mapping results, as piloted in this study, is novel and replicable in any disaster setting.
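The comparison between community-drawn and specialist hazard maps described above could, in principle, be quantified once both maps are digitised. The sketch below is one possible approach, not the authors' workflow; it assumes polygon shapefiles in a shared coordinate system and a hypothetical "hazard" attribute column.

```python
# A sketch of quantifying spatial agreement between community and specialist hazard
# maps, assuming both have been digitised to polygons in the same CRS.
# File names and the "hazard" column are hypothetical.
import geopandas as gpd

community = gpd.read_file("community_hazard_zones.shp")   # from PRA mapping
specialist = gpd.read_file("geological_hazard_map.shp")   # field + remote sensing

for hazard in ["flood", "rockfall", "landslide"]:
    c = community[community["hazard"] == hazard].unary_union
    s = specialist[specialist["hazard"] == hazard].unary_union
    if c.is_empty or s.is_empty:
        continue
    iou = c.intersection(s).area / c.union(s).area   # intersection over union
    print(f"{hazard}: spatial agreement (IoU) = {iou:.2f}")
```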
load to cause first cracking. For the ultimate load that the masonry wall panel can carry, the contributions of the inelastic joint interface parameters are ranked in order of importance as: joint cohesion, joint friction angle and joint tension. The cohesive strength and friction angle of the interface together exhibit a significant interaction capable of influencing the mechanical response of the wall panel at the near-failure condition. With the application of the external load, hinges are formed as bricks slide and rotate against each other. Hinge development influences the ductility and the ultimate load carrying capacity of the low bond strength masonry wall panel. In the current parametric study, the assessment of the load at first crack and the ultimate load carrying capacity is based on a low bond strength masonry wall panel of specific geometry and material properties. In addition, the masonry wall panel studied has been subjected only to vertical in-plane load. Therefore, the conclusions obtained are not directly applicable to wall panels of different geometry or opening configuration. Extrapolating the results to different geometries and different load configurations would require additional numerical and experimental investigations. The next phase of the research will focus on the experimental behaviour of plain and reinforced masonry wall panels of different geometries subjected to various types of loading.
A study of the influence of the brick-mortar interface on the pre- and post-cracking behaviour of low bond strength masonry wall panels subjected to vertical in-plane load is presented. Using software based on the Distinct Element Method (DEM), a series of computational models have been developed to represent low bond strength masonry wall panels containing an opening. Bricks were represented as an assemblage of distinct blocks separated by zero-thickness interfaces at each mortar joint. A series of sensitivity studies, supported by regression analysis, were performed to investigate the significance of the brick-mortar interface properties (normal and shear stiffnesses, tensile strength, cohesive strength and frictional resistance) for the load at first cracking and the ultimate load that the panel can carry. Computational results were also compared against full-scale experimental tests carried out in the laboratory. From the sensitivity analyses it was found that the joint tensile strength is the predominant factor influencing the occurrence of first cracking in the panel, while the cohesive strength and friction angle of the interface influence the behaviour of the panel from the onset of cracking up to collapse.
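The sensitivity-study-plus-regression approach summarised above can be illustrated with a minimal sketch. Here run_dem_model() is a hypothetical stand-in for the DEM panel analysis, and the standardised regression coefficients simply mimic the kind of parameter ranking reported; none of the numbers come from the study.

```python
# A minimal sketch of sampling joint interface parameters and ranking their influence
# on a response (e.g., ultimate load) with standardised regression coefficients.
import numpy as np

rng = np.random.default_rng(0)
names = ["kn", "ks", "f_t", "c", "phi"]     # normal/shear stiffness, tension, cohesion, friction
X = rng.uniform(0.0, 1.0, size=(100, 5))    # parameter samples scaled to [0, 1]

def run_dem_model(x):
    # placeholder response: ultimate load dominated by cohesion and friction
    return 1.0 + 3.0 * x[3] + 2.0 * x[4] + 0.3 * x[2] + rng.normal(0, 0.1)

y = np.array([run_dem_model(x) for x in X])

# standardised regression coefficients as a simple importance ranking
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for n, b in sorted(zip(names, beta), key=lambda t: -abs(t[1])):
    print(f"{n}: standardised coefficient = {b:+.2f}")
```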
increased after treatment with MLN4924 as measured by an HRE luciferase reporter assay, and the HIF-1α target genes BNIP3L and enolase-1 were significantly upregulated 1 and 2 h after treatment with MLN4924.The knockdown of DEN-1 in T84 cells, an intestinal epithelial cell line, via lentivirus shRNA revealed decreased Cul-2 neddylation at baseline, increased barrier formation over time and an increased rate of resolution after a calcium switch assay, compared to cells infected with a non-targeting control lentivirus shRNA.The protective role of HIF and the contribution of the neddylation pathway were further confirmed in a DSS-colitis murine model.Mice treated with DSS that received a pre-treatment of MLN4924 displayed decreased percent body weight loss, decreased colon shortening and decreased histologic injury scores, compared to mice receiving DSS and vehicle treatment.These results suggest that modulation of the neddylation pathway via MLN4924 under inflammatory conditions can be protective when HIF stabilization is promoted.Interestingly, HIF hydroxylases can also regulate NF-κB, and NF-κB is thought to be protective in the intestinal epithelium via the inhibition of enterocyte apoptosis .Therefore, the ability of MLN4924 to promote HIF signaling and inhibit NF-κB signaling should be carefully balanced for the neddylation pathway to represent a novel therapeutic target for inflammatory bowel diseases.Studies to address these concepts are currently ongoing.The authors declare no financial interests in any of the work submitted here.
There is intense interest in understanding how the purine nucleoside adenosine functions in health and during disease. In this review, we outline some of the evidence that implicates adenosine signaling as an important metabolic signature promoting inflammatory resolution. Studies derived from cultured cell systems, animal models and human patients have revealed that nucleotide metabolism is a significant component of the overall inflammatory microenvironment. These studies have revealed a prominent role for the transcription factors NF-κB and hypoxia-inducible factor (HIF), and have shown that these molecules are post-translationally regulated through similar components, namely the neddylation of cullins within E3 ligase complexes, which is controlled through adenosine receptor signaling. Studies defining differences and similarities between these responses have taught us a number of important lessons about the complexity of the inflammatory response. A clearer definition of these pathways has provided new insight into disease pathogenesis and, importantly, into the potential for new therapeutic targets.
differences from and similarities with previous MRI and nuclear imaging studies. Our study contradicts a recent morphometric study, which showed no significant associations between GM changes and UPDRS III using MRI surface displacement information. Surface displacement captures disease-related changes in the shape of the subcortical structures. This could be because the surface displacements were obtained from an image registration method that is sensitive to scale-related changes. Our results also differ from other MRI studies, which found either an absence of correlations or correlations only between left caudate volume and motor severity. This could be due to methodological differences or reflect true differences in patient populations. Interestingly, we observed a left-sided predominance in the association pattern with axial symptoms, specifically affecting the left caudate, which is partially in line with Zarei et al. Our results are in good agreement with SPECT/PET studies with 18F-fluorodopa and dopamine transporter (DAT) tracers. F-dopa and DAT studies have traditionally been used to evaluate the disease severity of PD by assessing the integrity of dopaminergic terminals. PET/SPECT studies demonstrated a negative correlation between the MDS-UPDRS motor score and F-dopa and DAT concentrations in caudate and putamen regions. The available data in the PPMI repository did not allow a voxel-based analysis to be undertaken in this cohort of patients, which may well explain the failure to observe correlations between motor severity and regionally averaged SBR.
Classical motor symptoms of Parkinson's disease (PD) such as tremor, rigidity, bradykinesia, and axial symptoms are graded in the Movement Disorders Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) III.It is yet to be ascertained whether parkinsonian motor symptoms are associated with different anatomical patterns of neurodegeneration as reflected by brain grey matter (GM) alteration.This study aimed to investigate associations between motor subscores and brain GM at voxel level.High resolution structural MRI T1 scans from the Parkinson's Progression Markers Initiative (PPMI) repository were employed to estimate brain GM intensity of PD subjects.Correlations between GM intensity and total MDS-UPDRS III and its four subscores were computed.The total MDS-UPDRS III score was significantly negatively correlated bilaterally with putamen and caudate GM density.Lower anterior striatal GM intensity was significantly associated with higher rigidity subscores, whereas left-sided anterior striatal and precentral cortical GM reduction were correlated with severity of axial symptoms.No significant morphometric associations were demonstrated for tremor subscores.In conclusion, we provide evidence for neuroanatomical patterns underpinning motor symptoms in early PD.
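A voxel-wise correlation between GM maps and MDS-UPDRS III scores, as described above, can be sketched as follows. The file names and scores are hypothetical, and a real pipeline would include covariates (age, sex, total intracranial volume) and multiple-comparison correction, which this illustration omits.

```python
# Illustrative sketch (not the authors' pipeline) of a voxel-wise Pearson correlation
# between modulated GM maps and MDS-UPDRS III scores.
import numpy as np
import nibabel as nib

gm_paths = [f"smwc1_subject{i:03d}.nii.gz" for i in range(1, 101)]  # hypothetical files
updrs3 = np.loadtxt("mds_updrs_iii.txt")                            # one score per subject

data = np.stack([nib.load(p).get_fdata() for p in gm_paths])        # (n_subj, x, y, z)
mask = data.mean(axis=0) > 0.2                                      # crude GM mask

gm = data[:, mask]                                                  # (n_subj, n_voxels)
gm_z = (gm - gm.mean(0)) / (gm.std(0) + 1e-12)
sc_z = (updrs3 - updrs3.mean()) / updrs3.std()
r = gm_z.T @ sc_z / len(sc_z)                                       # Pearson r per voxel

r_map = np.zeros(mask.shape)
r_map[mask] = r
nib.save(nib.Nifti1Image(r_map, nib.load(gm_paths[0]).affine), "r_updrs3_gm.nii.gz")
```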
Genetic modification of conventional crops enables safe and sustainable insect pest control in crop production systems. One of the more common approaches in insect-protected GM crops is expression of insecticidal proteins derived from the commonly occurring soil bacterium Bacillus thuringiensis (Bt) to provide intrinsic insect protection. Insecticidal activity of Bt proteins has been known for almost 100 years, and more than 700 genes coding for insecticidal Bt crystal (Cry) proteins have been identified. Commercial Bt-derived products have been used worldwide for over 50 years, and hundreds of Bt biopesticide formulation registrations are in place globally, including in the United States, European Union and China, for the control of a wide variety of insect pests. These products can be safely applied in both conventional and organic agricultural production. Proteins from Bt microbes, which include the Cry proteins as well as the cytolytic proteins, vegetative insecticidal proteins, and others, provide control against insects from the lepidopteran, coleopteran, hemipteran and dipteran orders, as well as against nematodes. The selective toxicity of Cry proteins to insects is facilitated through specific receptor-mediated interactions. The extensive use of Bt-derived biopesticide products worldwide is due in part to their specificity against target insect pest species and lack of mammalian toxicity, assuring limited potential for impacts on beneficial insect species and non-target organisms, including other invertebrates, birds, aquatic organisms and mammals. Due to their efficacy, extensive history of safe use and favorable environmental degradation profile, Bt Cry proteins have been expressed in GM crops to confer insect protection. Effective insect resistance management in agriculture and the ability to achieve the desired insect activity spectrum can be accomplished through the addition of other proteins outside of the classical three-domain α-pore-forming Cry protein structural class. Pore-forming proteins (PFPs) are nearly ubiquitous in nature, produced by bacteria, fungi and plants, as well as fish and amphibians. Proteins in this class confer insect control by inserting into cell membranes within the insect intestinal tract, leading to formation of pores that impact the integrity of the insect midgut and enable colonization by Bt spores, culminating in insect death. Specificity of insecticidal Bt proteins is mediated in part by their activation by proteases and their binding to specific receptors along the brush border membrane within the insect midgut. These specific Bt-toxin receptors are not present in humans or other mammals, nor in non-target insects, eliminating potential hazards related to exposure in humans, animals and the majority of non-target insects. PFPs can be classified into two large groups, α-PFPs and β-PFPs, based on the structures utilized to form a membrane-integrated pore, and these mechanisms can be classified with a high rate of accuracy through the use of bioinformatics. β-PFPs potentially provide a diverse source of new insecticidal proteins for commercial pest control in a wide variety of crops. Safety of β-PFPs in crops used for food and feed is demonstrated by the safe use of a binary toxin complex that includes Cry35Ab1, a Toxin_10 family β-PFP. Cry35Ab1 works together with a non-β-PFP partner protein, Cry34Ab1, to control coleopteran pest species such as corn rootworms, yet this binary protein complex is safe for humans and animals as evidenced by its narrow activity spectrum,
digestibility, empirical mammalian safety data, and safe use on millions of acres in the U.S. annually since 2006. β-PFP safety is also supported by the presence of these proteins in safely consumed foods including spinach, cucumber, wheat, and fish such as cod and tilapia. A variety of insects can cause significant damage to cotton crops. Currently, commercial cotton varieties expressing Bt proteins provide excellent control of traditional cotton pests such as lepidopteran insects, but have limited efficacy for control of hemipteran insects that are emerging as economically important pests of cotton in the United States. The Bt protein Cry51Aa2 was shown to have insecticidal activity against two hemipteran pests of cotton: Lygus lineolaris and Lygus hesperus. Cry51Aa2 is a member of the expansive ETX_MTX2 β-PFP protein family and accordingly shares a similar structure and general functional similarity with a number of other insecticidal Cry proteins and ETX_MTX2 members. Millions of pounds of biopesticides that contain the β-PFPs Cry60Aa1 and Cry60Ba1 from Bt israelensis and the β-PFPs BinB, MTX2, and MTX3 from Lysinibacillus sphaericus have been widely used to control mosquitoes in the U.S. and have been extensively used in potable water to control disease-vector mosquitoes and blackflies in Africa. This demonstrates their history of safe use and strongly supports the safety of this structural class of insecticidal proteins for use in GM crops. Although Cry51Aa2 is a member of the ETX_MTX2 β-pore-forming protein family, this protein has significant sequence divergence from other ETX_MTX2 family members, enabling its specificity and limiting its activity spectrum, thereby limiting its potential to impact non-target organisms. Enhancing control of Lygus pests to a commercial level of efficacy was achieved through selected amino acid modifications to the Cry51Aa2 protein, which substantially increased the insecticidal activity of the resulting variant, Cry51Aa2.834_16, relative to the wild-type Cry51Aa2 protein. As described by Gowda and colleagues, these modifications to Cry51Aa2 consisted of eight amino acid substitutions and the deletion of a single HYS motif from the HYS repeat in residues 196–201. These iterative amino acid substitutions and deletions were made through targeted DNA sequence changes to the Cry51Aa2 coding sequence, followed by bioassay screening of each protein variant. These sequence changes also ensured effective control of targeted hemipteran and thysanopteran insect pests. When mapped to the three-dimensional crystal structure of the protein, these modifications were primarily, but not exclusively, localized to the receptor binding region, or "head region", of the protein.
Many insect-protected crops express insecticidal crystal (Cry) proteins derived from the soil bacterium Bacillus thuringiensis (Bt), including both naturally-occurring Cry proteins and chimeric Cry proteins created through biotechnology.The Cry51Aa2 protein is a naturally-occurring Cry protein that was modified to increase its potency and expand its insect activity spectrum through amino acid sequence changes.
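Because classification of pore-forming mechanisms is described above as a largely bioinformatic exercise, the following sketch shows one way such a domain-based screen could be run, assuming HMMER3 (hmmscan) and a local copy of the Pfam-A profile database are available. The FASTA file name is hypothetical; a reported hit to the ETX_MTX2 family profile would flag a putative aerolysin-type β-PFP.

```python
# A sketch of domain-based classification with hmmscan against Pfam-A profiles.
# Paths are hypothetical; HMMER3 and the Pfam database must be installed locally.
import subprocess

def pfam_domains(fasta_path, pfam_db="Pfam-A.hmm", tbl="hits.tbl"):
    subprocess.run(
        ["hmmscan", "--cut_ga", "--tblout", tbl, pfam_db, fasta_path],
        check=True, capture_output=True,
    )
    hits = []
    with open(tbl) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.split()
            hits.append((fields[0], float(fields[4])))   # (domain name, full-sequence E-value)
    return hits

for name, evalue in pfam_domains("cry51aa2_834_16.fasta"):
    print(f"{name}\tE-value={evalue:.2e}")
```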
μg Cry51Aa2.834_16/mL diet, respectively. However, when heated to temperatures of 55, 75 and 95 °C for 15 min, the LC50 values were >60.0 μg Cry51Aa2.834_16/mL diet, a reduction in activity of >95% relative to the unheated control Cry51Aa2.834_16 protein. For Cry51Aa2.834_16 and the other Cry51Aa2 variants, the Tier 1 assessment indicates that dietary exposure to these proteins will not pose a hazard to humans or animals. Nevertheless, acute oral toxicity studies in mice were conducted on the three Cry51Aa2 variants for further assurance of the mammalian safety of these members of the β-PFP/ETX_MTX2 family of proteins. Mice in the test groups were dosed with either Cry51Aa2.834, Cry51Aa2.834_2, or Cry51Aa2.834_16 at doses of 1332, 1332 and 5000 mg/kg, respectively. The three test proteins were active against L. hesperus, and Cry51Aa2.834_16 was also active against L. lineolaris. Although not required by acute toxicity testing guidelines, the homogeneity, stability, concentration and insecticidal activity of the dosing solutions were analytically confirmed for the GLP toxicology study with Cry51Aa2.834_16, and all results met the respective acceptability criteria. The test, control and vehicle dosing solutions were administered by oral gavage on study day 0 to the appropriate groups of mice, and the animals were observed for 14 additional days. No mortality occurred during these studies. There were no test substance-related differences in body weight or body weight gain observed in either of these studies, and no statistically significant differences in these parameters were observed. There were no test substance-related differences in food consumption in the study with Cry51Aa2.834_16. Although food consumption was higher in the Cry51Aa2.834_16 test group males relative to the vehicle control group during the interval from study days 0 to 7, there was no significant difference relative to the BSA protein control; this difference was observed in males only, during a single time interval, and was thus not considered to be treatment related or adverse. No test substance-related clinical or gross necropsy findings were observed. Therefore, neither Cry51Aa2.834, Cry51Aa2.834_2 nor Cry51Aa2.834_16 exhibited any toxicity at 1332, 1332 and 5000 mg/kg, respectively, the highest dose levels tested. Insect pests can present significant challenges to crop production, and continued innovation of biotechnology solutions is needed to ensure that multiple crop-protection options for insect control are available to growers. Some of these biotechnology solutions may include GM crops expressing beta-pore-forming proteins that contain structural motifs that are ubiquitous in nature and therefore have a history of safe use but are relatively new to agricultural biotechnology. These proteins share functional similarity with existing three-domain Cry proteins from Bt in that they bind to specific receptors in the insect gut and form pores that enable insect control. As proteins with a familiar function and biological activity, the safety of β-PFPs can be evaluated within the existing weight-of-evidence protein safety assessment framework as described by Delaney and colleagues. The data presented in this report represent a comprehensive safety assessment of Cry51Aa2.834_16 using a multistep, tiered, weight-of-evidence approach. Cry51Aa2.834_16 is expressed in a
new insect-protected cotton product, MON 88702, for the control of piercing-sucking insect pests. The efficacy of this protein was improved through the substitution of eight amino acids and a targeted deletion of three amino acids, resulting in 96% amino acid sequence identity of Cry51Aa2.834_16 to the wild-type Cry51Aa2 protein. As presented in this report, the improvements made to the efficacy of this protein were achieved without any impact on protein safety. Bioinformatic analyses were conducted to determine whether Cry51Aa2.834_16 had any sequence or structural similarity to proteins having allergenic potential or the potential to elicit a toxic response. These analyses demonstrated that Cry51Aa2.834_16 and the other Cry51Aa2 variants are members of the β-pore-forming family of proteins, which includes ETX_MTX2 and aerolysin variants. This is further illustrated by comparing the structures of the Cry51Aa2 variants to other proteins in this family. There is limited sequence identity between these three proteins and the conserved tail region of known mammalian toxins; however, the tail region defines the protein family but not each protein's specificity. Cry51Aa2.834_16 and its companion developmental variants have significant sequence and structural diversity from other ETX_MTX2 family members in the receptor binding head domain that confers species specificity and consequently defines the insecticidal activity spectrum. The relevance of these data to specificity is empirically demonstrated by the limited activity spectrum of this protein and by the mammalian safety data presented herein for the three Cry51Aa2 variants. Therefore, the low-level alignment of Cry51Aa2.834_16 to pore-forming and oligomerization regions of GenBank entry GI-1102943401 is consistent with the known domain architecture of Cry51Aa2 protein variants and other β-PFP family proteins, and, when analyzed from a domain-based perspective, the sequence diversity in the head domain illustrates the rationale for the safety of these Cry51Aa2 variants. Taken together, this analysis demonstrates that integrating known domain-based architecture with amino acid sequence alignments provides additional information to enable the evaluation of protein safety. Jerga et al. demonstrated how the Cry51Aa2 protein variant Cry51Aa2.834_16 exerts its insecticidal effects in Lygus species and characterized the mode of action and specificity of Cry51Aa2.834_16 by elucidating how this protein binds to a specific receptor in brush border membranes of the gastrointestinal tract in L. lineolaris and L. hesperus. The importance of the receptor binding domains in contributing to the specificity of β-PFPs has also been illustrated by a series of in vitro studies with another β-PFP, Cry41Aa, in which aromatic amino acid substitutions in the receptor binding region eliminated cytotoxicity to the cancer cell line HepG2.
The improved Cry51Aa2 variant, Cry51Aa2.834_16, and other developmental variants belong to the ETX_MTX2 family of proteins but share a low level of sequence similarity to other members of this family.This similarity is largely localized to the pore-forming and oligomerization protein domains, while sequence divergence is observed within the head domain that confers receptor binding specificity.
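The bioinformatic screening for similarity to allergens and toxins mentioned above is commonly operationalised with criteria such as >35% identity over an 80-amino-acid window or a shared contiguous 8-mer. The sketch below illustrates those checks in a deliberately simplified, ungapped form; real assessments use alignment tools (e.g., FASTA/BLAST) against curated allergen databases, so this is only an illustration of the criteria, not the assessment itself.

```python
# Simplified, ungapped sketch of two common allergen-screening criteria:
# (1) >35% identity over any 80-amino-acid window, (2) any shared 8-mer.
def eighty_mer_identity(query, allergen, window=80, threshold=0.35):
    """Return (flag, best identity) over all ungapped 80-aa window pairs."""
    best = 0.0
    for i in range(len(query) - window + 1):
        for j in range(len(allergen) - window + 1):
            ident = sum(a == b for a, b in zip(query[i:i + window],
                                               allergen[j:j + window])) / window
            best = max(best, ident)
    return best >= threshold, best

def shares_8mer(query, allergen, k=8):
    """True if any contiguous 8-mer is shared between the two sequences."""
    kmers = {query[i:i + k] for i in range(len(query) - k + 1)}
    return any(allergen[j:j + k] in kmers for j in range(len(allergen) - k + 1))
```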
In contrast, unpublished dissertation research indicates that Cry51 is not toxic to the HepG2 cell line despite some structural similarity to Cry41Aa. The results of these in vitro studies confirm the importance of the receptor binding head domain for the specificity of β-PFPs, lending further support to the concept of leveraging domain-based and structural analyses to complement primary sequence analysis as part of a comprehensive safety assessment. Although the history of safe use, bioinformatic analyses, mode of action and specificity data strongly support the safety of Cry51Aa2.834_16, additional tests were conducted to determine the fate of the protein in the presence of heat or digestive enzymes. Protein structure can be lost in food processing due to changes in temperature, pH and other physical disruptions. As protein structure is a key determinant of potential toxicity, understanding the fate of a protein after exposure to heat and digestive enzymes provides additional information about the potential for exposure or toxicity following processing, cooking, and consumption of food and feed. Information regarding the susceptibility of a protein to degradation by digestive enzymes can be leveraged to address the potential for exposure to intact proteins when consumed as part of a diet. Most proteins consumed in the diet are completely degraded as part of the digestive process through exposure to acidic conditions and digestive enzymes such as the various pepsins in the stomach, or through sequential exposure to acid/pepsin in the stomach followed by pancreatic proteases secreted into the small intestine. The resulting small peptides and amino acids are absorbed in the small intestine and are ultimately utilized as an energy source and as building blocks for the synthesis of new proteins. However, some proteins exhibit resistance to proteolytic degradation, a property that has been suggested to correlate with allergenic potential. The results presented here indicate that the Cry51Aa2.834_16 protein is rapidly degraded upon exposure to pepsin under physiological conditions. Thus, it is highly unlikely that the Cry51Aa2.834_16 protein will pose any safety concern to human health upon exposure to the intact, full-length protein. The Cry51Aa2.834_16 protein will be expressed in insect-protected GM cotton MON 88702. In addition to the ready degradation of Cry51Aa2.834_16, dietary exposure to protein from cotton products derived from MON 88702 in commerce will be negligible because the processed fractions of cotton consumed by humans are limited to refined cottonseed oil and linters, which contain negligible amounts of protein. The processing conditions for producing cottonseed oil are far harsher than the temperature conditions of 55 °C or greater that lead to rapid loss of Cry51Aa2.834_16 insecticidal activity. These data indicate that any exposure to Cry51Aa2.834_16 protein in food or feed derived from MON 88702 would be to a denatured/inactive form of the protein. The weight-of-evidence from the first-tier safety assessment and the negligible potential for exposure strongly support the conclusion that dietary exposure to the Cry51Aa2.834_16 protein will not adversely affect the health of humans or animals. This is consistent with the extensive testing of Cry proteins expressed in GM crops showing no evidence of toxicity towards humans or animals. Nevertheless, a toxicity study was conducted with Cry51Aa2.834_16 to confirm the safety of this protein and to support international regulatory requirements for the
approval of GM cotton. Most known protein toxins exert their toxicity after acute dietary exposure; therefore, an acute mouse oral gavage toxicity study was considered appropriate to assess the toxicity of Cry51Aa2.834_16. Whereas the process of modifying Cry51Aa2 to contain the amino acid modifications present in Cry51Aa2.834_16 utilized selected amino acid sequence changes to significantly increase insecticidal activity against targeted insect pests, no evidence of toxicity was observed when the Cry51Aa2 variants Cry51Aa2.834 and Cry51Aa2.834_2 were tested in mice at doses in excess of 1000 mg/kg body weight or when Cry51Aa2.834_16 was administered orally to mice at a dose of 5000 mg/kg body weight. In the case of Cry51Aa2.834_16, the 5000 mg/kg dose level exceeds the standard limit dose guidance of 2000 mg/kg for acute toxicity studies. These dose levels are exceedingly high relative to the low levels of anticipated human exposure due to limited human consumption of protein-containing fractions from cotton, and the tested dose levels therefore represent robust assessments of the evaluated proteins. The data presented herein provide direct experimental evidence that, despite changes introduced into the Cry51Aa2 protein amino acid sequence resulting in significant enhancement of insecticidal activity, the resulting variant proteins retain the safety profile of the wild-type progenitor, presenting no additional hazard to humans or livestock. The amino acid modifications made to Cry51Aa2.834_16 thus do not turn this non-toxic protein into a toxic protein from a mammalian safety perspective, nor do these changes unexpectedly expand the insecticidal activity spectrum broadly beyond that of the wild-type progenitor protein. The lack of toxicity of these Cry51Aa2 proteins to mammals is consistent with the similarity to the ETX_MTX2 family of proteins being largely localized to the pore-forming and oligomerization domains, and with the sequence diversity found within the specificity-conferring head region of these β-PFPs. Evidence from both the Tier I evaluation and the Tier II toxicity studies provides a case study validating the use of a domain-based approach to complement the existing weight-of-evidence safety assessment for pore-forming protein toxins expressed in GM crops. As validated for Cry51Aa2.834_16 by the data presented in this report, a weight-of-evidence approach that leverages structural and mechanistic knowledge enables and promotes hypothesis-based toxicological evaluations of new proteins for use in the agricultural sector and beyond. Abbreviations: bw, body weight; PFP, pore-forming protein; GM, genetically modified; Bt, Bacillus thuringiensis; Cry, crystal.
The intact Cry51Aa2.834_16 protein was heat labile at temperatures ≥55 °C, and was rapidly degraded after exposure to the gastrointestinal protease pepsin.The weight-of-evidence therefore supports the conclusion of safety for Cry51Aa2.834_16 and demonstrates that amino acid sequence modifications can be used to substantially increase insecticidal activity of a protein without an increased hazard to mammals.
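The diet-incorporation bioassays behind the LC50 values and the heat-inactivation comparison reported earlier can be illustrated with a simple dose-response fit. The concentrations and mortalities below are hypothetical, and the two-parameter log-logistic model is one common choice rather than the authors' exact method.

```python
# Illustrative LC50 estimation from diet-incorporation bioassay data
# (hypothetical concentrations and mortalities), two-parameter log-logistic model.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # ug protein / mL diet
mortality = np.array([0.05, 0.12, 0.35, 0.68, 0.90, 0.98])  # proportion dead

def log_logistic(c, lc50, slope):
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality, p0=[1.0, 1.0])
print(f"estimated LC50 = {lc50:.2f} ug/mL diet, slope = {slope:.2f}")
```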
this pre-retrieval effect and reduced source memory accuracy. Due to the temporal and topographic differences between oscillatory and ERP preparatory memory effects, it seems unlikely that they index exactly the same process, but it is evident that both measures of electrophysiological activity indicate an important role for preparatory processes linked to neurocognitive states during retrieval. They also echo electrophysiological studies of memory encoding, which show that pre-stimulus neural activity at study predicts subsequent memory accuracy at test. Taken in combination, data from the small number of pre-retrieval EEG studies conducted thus far indicate that preparatory memory effects are not equivalent across different retrieval tasks, but that they instead reflect variations in task-specific retrieval orientations, with pre-retrieval ERPs predicting memory accuracy in a content-specific manner. While theoretical accounts of pre-retrieval processing propose that prefrontal cortex is involved in setting retrieval goals and initiating memory searches, Polyn et al. reported fMRI evidence for the precise cortical implementation of task-specific memory searches during pre-retrieval. Participants studied three classes of words, and a multivoxel pattern classification algorithm identified distinct patterns of neural activity associated with each class of item during encoding. The classifier was then applied during a free recall task, and activations of category-specific patterns of neural activity were observed in the seconds before items from that category were recalled. These activations were observed in ventral temporal cortex, medial temporal cortex and prefrontal cortex. Similarly, Sederberg et al. found that intracranial EEG gamma oscillations that predicted memory during encoding also reactivated in the 500 ms prior to recollection in a free recall task, differentiating correct recall from memory errors. These oscillations were observed in regions corresponding with those identified by Polyn et al., including left hippocampal, left temporal and left prefrontal regions. Although these experiments examined self-initiated free recall as opposed to the criterial source memory tasks used here, they provide strong evidence that content-specific pre-retrieval processes guide memory retrieval by initiating memory states that correspond with those active during encoding. In conclusion, we have demonstrated that pre-retrieval ERP correlates of orientation were evident prior to memory probes eliciting correct source judgments but not prior to test items eliciting memory errors. The fact that this effect predicted memory success on switch trials suggests that the initiation of appropriate retrieval orientations influences the successful recovery of criterial contextual information. Furthermore, the present findings, in conjunction with those from other studies, demonstrate that pre-stimulus ERPs not only predict whether source information will be recollected, but do so differentially depending on the memory contents that are to be recovered. Preparatory correlates of retrieval orientation on stay trials were i) of reversed polarity to those observed on switch trials, and ii) insensitive to subsequent memory accuracy. We propose that this effect reflects the maintenance of retrieval orientations across subsequent stimuli, and illustrates, for the first time, the transition from initiation to maintenance of orientations within the same experimental context. The frontal scalp distributions of these effects are consistent with
the view that regions within prefrontal cortex implement task-specific memory searches during pre-retrieval, although further studies combining high density electrophysiological recordings with source localisation analyses are required to confirm this link.
Neural activity preceding memory probes differs according to retrieval goals. These divergences have been linked to retrieval orientations, which are content-specific memory states that bias retrieval towards specific contents. On the first trial of each memory task ('switch' trials), preparatory ERPs preceding correct source memory judgments differed according to retrieval goal, but this effect was absent preceding memory errors. Initiating appropriate retrieval orientations therefore predicted criterial recollection. Preparatory ERPs on the second trial of each memory task (i.e. 'stay' trials) also differed according to retrieval goal, but the polarity of this effect was reversed from that observed on switch trials and the effect did not predict memory accuracy. This was interpreted as a correlate of retrieval orientation maintenance, with initiation and maintenance forming dissociable components of these goal-directed memory states. More generally, these findings highlight the importance of pre-retrieval processes in episodic memory.
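Pre-stimulus (preparatory) ERPs of the kind analysed here are typically extracted by epoching the interval immediately preceding each memory probe. The sketch below shows how this could be done with MNE-Python; the file name, event codes and the -1000 to 0 ms window are illustrative choices, not the parameters of the study.

```python
# A sketch of extracting preparatory (pre-stimulus) ERPs with MNE-Python.
# Recording, event codes and epoch window are hypothetical.
import mne

raw = mne.io.read_raw_fif("retrieval_task_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"switch/taskA": 11, "switch/taskB": 12, "stay/taskA": 21, "stay/taskB": 22}

# epoch the second preceding each memory probe, without post-stimulus baseline
epochs = mne.Epochs(raw, events, event_id, tmin=-1.0, tmax=0.0,
                    baseline=None, preload=True)

# contrast preparatory activity by retrieval goal on switch trials
prep_contrast = mne.combine_evoked(
    [epochs["switch/taskA"].average(), epochs["switch/taskB"].average()],
    weights=[1, -1],
)
prep_contrast.plot_joint()
```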
significant pattern emerged. First, colon biopsies from patients with ulcerative colitis confirmed the global increase in expression of CHRFAM7A, but the analyses now revealed that there was a concomitant and significant decrease in CHRNA7 expression in ulcerative colitis. These changes in ulcerative colitis were also highly significant when comparing CHRNA7 expression to that of CHRFAM7A. In Crohn's disease, there was a small but significant increase in CHRFAM7A gene expression but no significant change in CHRNA7 expression. As earlier, normalization of CHRFAM7A with CHRNA7 expression increased the significance of the difference. Ulcerative colitis affects the colon and not the small intestine, but Crohn's disease can affect any portion of the gastrointestinal tract. In analyzing the source of biopsy, we observed a significant up-regulation in CHRFAM7A gene expression in colon from patients with Crohn's disease, but there was also a concomitant and significant down-regulation in CHRNA7, underscored by the significant change of CHRFAM7A when normalized with CHRNA7. In small intestine biopsies of Crohn's disease, the change in CHRFAM7A, CHRNA7 or the ratio of CHRFAM7A to CHRNA7 was not significant. We used two approaches to establish the specificity of differential expression in diseased colon. First, we evaluated the expression of CHRFAM7A, CHRNA7, and the CHRFAM7A-to-CHRNA7 ratio in colon cancer biopsies. No differences were detected when all colon cancer biopsies were evaluated together or when analyzed according to the stage of disease. Second, we evaluated the expression of a second human-specific gene called TBC1D3, which is associated with macropinocytosis and epidermal growth factor signaling. There were no differences in IBD, and no differences in either ulcerative colitis or Crohn's disease when examined separately. There were also no differences in gene expression of TBC1D3 in biopsies from colon cancer. This concordance of CHRFAM7A and CHRNA7 expression in colon cancer is also evident in curated public databases like the Cancer Genome Atlas, which enables mining gene expression patterns in different epithelial cancers. Correlations between CHRFAM7A and CHRNA7 in these databases are > 0.87 and > 0.77 in uterine, stomach, and colorectal cancers. Unfortunately, no analogous public databases with RNAseq data exist for inflammatory bowel disease, Crohn's disease, or ulcerative colitis, although several studies have evaluated whole genome gene expression and found changes in, and effects of, traditional inflammatory products like TNF and HMGB1. In 2011, Cooper and Kehrer-Sawatzki reported that new human genes are over-represented among genes tied to complex human disease, and they more recently described how newly evolved human genes can drive gene interaction networks associated with critical phenotypes. It is in this vein that the results presented here suggest that the up-regulation of pro-inflammatory CHRFAM7A in humans could exacerbate the down-regulation of anti-inflammatory α7-nAChR in IBD. If so, it is interesting to speculate that this pro-inflammatory effect of CHRFAM7A expression is an "off-target" contributor to human IBD that arose as a function of adaptation. In this paradigm, a human-specific gene like CHRFAM7A could have originally arisen as an evolutionary pro-inflammatory and adaptive response to newly emerging human behaviors like bipedal walking or the harnessing of fire, but was retained for CNS activities regulating neurotransmitter activity. Interestingly, human-specific responses in gene
expression after trauma, burn, and infection have been previously described, although they remain controversial, largely because of their implications for animal modeling of human injury. Ironically, the putative adaptive pro-inflammatory origin of a hominid gene like CHRFAM7A may ultimately be ancillary to its physiological significance to human speciation, because CHRFAM7A in the brain is tied to regulating α7-nAChR, a ligand-gated neurotransmitter channel that itself regulates human cognition, attention, memory, and mental health. In this model, the up-regulation of CHRFAM7A in peripheral tissues of modern humans could reflect vestigial pro-inflammatory activity. Such a paradigm underscores the importance of understanding the role of human evolution in the etiology of human disease, the role of HSGs, and ultimately, their function when modeling human disease. On a final note, the differential expression of the CHRFAM7A human-specific gene in a prototypic human disease like IBD underscores the importance of better understanding the contribution of this class of genes to the onset, development, and progression of human disease when diseases are modeled in experimental animals. With newly emerged human-specific genes like ARHGAP11B promoting neocortex expansion in vivo, c20orf203 eliciting differential gene function, human-specific defensins conferring differential resistance to infection, and CD33 providing cognitive protection, it is critical to understand the possible contributions of newly evolved gene interaction networks to human disease when they differ in humans from all other species and create unique phenotypes. Author contributions: conceived of the experiments and wrote the first drafts of the manuscript; designed PCR and validated qPCR; assisted in the interpretation of data; and assisted in the preparation of the brief report. The authors have declared that no conflict of interest exists.
But in humans there exists a second gene, called CHRFAM7A, that encodes a dominant-negative inhibitor of the α7-nAChR. Here, we investigated whether their expression was altered in inflammatory bowel disease (IBD) and colon cancer. Methods: Quantitative RT-PCR measured gene expression of the human α7-nAChR gene (CHRNA7), CHRFAM7A, TBC1D3, and actin in biopsies of normal large and small intestine, and compared their expression to that in biopsies of ulcerative colitis, Crohn's disease, and colon cancer. Results: qRT-PCR showed that CHRFAM7A and CHRNA7 gene expression was significantly (p < .02) altered in IBD (N = 64). Gene expression was unchanged in colon cancer. Further analyses revealed that there were differences between ulcerative colitis and Crohn's disease. Colon biopsies of ulcerative colitis (N = 33) confirmed increased expression of CHRFAM7A and decreased expression of CHRNA7 (p < 0.001). Biopsies of Crohn's disease (N = 31), however, showed only small changes in CHRFAM7A expression (p < 0.04) and no change in CHRNA7. When segregated by tissue source, both CHRFAM7A up-regulation (p < 0.02) and CHRNA7 down-regulation (p < 0.001) were measured in colon, but not in small intestine. Conclusion: The human-specific CHRFAM7A gene is up-regulated, and its target, CHRNA7, down-regulated, in IBD. Differences between ulcerative colitis and Crohn's disease relate to the location of disease. Significance: The appearance of IBD in modern humans may be consequent to the emergence of CHRFAM7A, a human-specific α7-nAChR antagonist. CHRFAM7A could present a new, unrecognized target for the development of IBD therapeutics.
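The normalisation steps referred to above (each target relative to actin, and CHRFAM7A additionally relative to CHRNA7 within the same biopsy) amount to simple 2^-ΔCt arithmetic. The Ct values in the sketch below are hypothetical placeholders.

```python
# A worked sketch of relative qRT-PCR expression: 2^-dCt against actin, plus the
# within-biopsy CHRFAM7A/CHRNA7 ratio. Ct values are hypothetical.
def rel_expr(ct_target, ct_actin):
    return 2.0 ** -(ct_target - ct_actin)

biopsy = {"CHRFAM7A": 27.1, "CHRNA7": 29.4, "ACTB": 18.0}   # hypothetical Ct values

chrfam7a = rel_expr(biopsy["CHRFAM7A"], biopsy["ACTB"])
chrna7 = rel_expr(biopsy["CHRNA7"], biopsy["ACTB"])
print(f"CHRFAM7A (vs actin): {chrfam7a:.4f}")
print(f"CHRNA7   (vs actin): {chrna7:.4f}")
print(f"CHRFAM7A / CHRNA7 ratio: {chrfam7a / chrna7:.2f}")
```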
remodelling in the hRI group. Further definition of this group may allow us to study, firstly, those with a very high likelihood of developing PE and, secondly, those in whom the initial impairment in remodelling is not sufficient to lead to the clinical condition. In this second group we could postulate that this is because remodelling was merely delayed and there are compensatory mechanisms which overcome this. Additionally, there are likely to be inherent differences in how different mothers respond to factors produced by an intermittently perfused and stressed placenta, perhaps reflecting differences at a cardiovascular level. This group is particularly interesting as it may help us to define how some women are more able to tolerate or compensate for early problems. It is only when we can grasp this that we can consider how we might therapeutically make one of the groups resemble the other in outcome. This is summarised in Fig. 2. The complexities of modelling cellular interactions in a more 3-dimensional environment also provide challenges and potential for development. Immunohistochemical studies can provide much information about early human pregnancy; however, in vitro culture systems are needed to look at dynamic multi-cellular interactions. Some of the general processes involved in spiral artery remodelling have been reported by a number of groups, including our own, using monolayer co-cultures and explant cultures, and these studies have revealed roles for EVT-dependent apoptotic vascular cell loss, VSMC de-differentiation and disruption of cellular interactions through proteases. Growing vascular cells in a 3D spheroidal model that recapitulates the EC/VSMC interactions seen in vivo allowed us to determine which genes were differentially expressed following stimulation by trophoblast and to show VSMC de-differentiation. The future application of some of the newer 3D technologies, many of which stem from developments in the cancer field, will allow even more accurate placental and decidual modelling. These developments include sophisticated synthetic matrices and scaffolds and approaches such as Real Architecture for 3D Tissues technology. Advances in isolation and culture methods for primary cells mean that multiple cell types can be isolated from first trimester tissue from an individual pregnancy. We routinely isolate stromal cells, dNK cells and macrophages as well as endothelial cells from the decidual tissue of uterine artery Doppler-screened women. From the placenta we can isolate trophoblast, stromal, endothelial and macrophage cells. In the future, the ability to look at multiple cell types across an individual pregnancy provides an opportunity for complex profiling, integrating information about genes, microRNAs, the proteome and the secretome with readouts from functional and biochemical cell-based assays. This will allow us to start to get an overview of the maternal-fetal interface in an individual pregnancy. Determining whether this complex information can be integrated to model both normal and PE pregnancies will require strong collaborations with the bioinformatics and mathematical modelling fields. We will then be able to interrogate these models, asking questions such as the importance of particular cellular or molecular interactions to successful placentation. The ultimate aim of developing robust models of the maternal-fetal interface would be to help in the identification of novel targets and the safe design of therapies. There are no conflicts of interest.
The pathologies of the pregnancy complications pre-eclampsia (PE) and fetal growth restriction (FGR) are established in the first trimester of human pregnancy.In a normal pregnancy, decidual spiral arteries are transformed into wide diameter, non-vasoactive vessels capable of meeting the increased demands of the developing fetus for nutrients and oxygen.Disruption of this transformation is associated with PE and FGR.Very little is known of how these first trimester changes are regulated normally and even less is known about how they are compromised in complicated pregnancies.Interactions between maternal and placental cells are essential for pregnancy to progress and this review will summarise the challenges in investigating this area.We will discuss how first trimester studies of pregnancies with an increased risk of developing PE/FGR have started to provide valuable information about pregnancy at this most dynamic and crucial time.We will discuss where there is scope to progress these studies further by refining the ability to identify compromised pregnancies at an early stage, by integrating information from many cell types from the same pregnancy, and by improving our methods for modelling the maternal-fetal interface in vitro.
Mindfulness is a translation of the Pali term sati, which in the meditation context refers to remembering to keep awareness of one's practice. Mindfulness practice normally proceeds in stages, starting from mindfulness of bodily sensations and moving to awareness of feelings and thoughts, ultimately aimed at developing a present-centered awareness without an explicit focus. These stages are apparent in most schools of Buddhism, as well as in Mindfulness-Based Interventions (MBIs) such as Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy (MBCT). Mindfulness practice as incorporated in MBIs is often contrasted with more effortful concentration-based practices such as those taught in Theravada Buddhism. The traditions of Buddhism most closely aligned with mindfulness as taught in MBSR and MBCT are Dzogchen and Mahamudra, which take a gentle approach to practice by letting go of any striving to achieve a particular mental state and simply resting in a present-centered awareness free of emotional reactivity and conceptual elaboration. MBSR has been shown to reduce stress, depression, and anxiety, and to improve general wellbeing in a number of physical and psychological conditions, as well as in healthy populations. MBCT has been reported to prevent depression relapse and to be at least as effective as anti-depressants. Mindfulness as a trait also inversely correlates with anxiety and depression in healthy individuals. Despite this transdiagnostic efficacy of MBIs, the relationship between mindfulness and psychosis is currently unclear. There are persistent concerns that mindfulness might induce psychosis in vulnerable individuals, and even in people with no previous history or known vulnerability to psychosis, based on a number of single-case studies that appear to suggest that meditation can induce acute psychotic episodes in individuals with a history of schizophrenia, as well as in people without a history of psychiatric illness. However, as discussed in more detail by Shonin et al., in all these cases the individuals were involved in intensive meditation retreats, and it is unclear to what extent the meditation practices that the described cases were engaged in are in line with the approach employed in MBIs. A number of trials of MBIs for psychosis conducted to date, although mostly preliminary, suggest that mindfulness practice of short duration can actually alleviate the distress associated with psychotic symptoms, such as hearing voices, and reduce depression and anxiety. With a clinical prevalence of about 7 per 1000 in the adult population, psychosis is more common among the general population than previously assumed and is expressed along a continuum. Schizotypy is a psychological construct encompassing a range of personality traits and cognitions that are similar to psychosis but less severe in nature. According to Raine et al., schizotypy is characterized by nine dimensions: ideas of reference, excessive social anxiety, magical thinking, unusual perceptual experiences, eccentric behavior or appearance, no close friends or confidants, odd speech, constricted affect and suspiciousness. Schizotypy thus clearly encompasses both psychosis-like symptoms and symptoms related to anxiety and depression. The main aim of the present study therefore was to examine the relationship between regular long-term practice of mindfulness and the dimensions of schizotypy in two independent studies. Based on the reviewed evidence for the positive effects of mindfulness on anxiety and depression, it was hypothesized that experienced
meditators will score lower on the excessive social anxiety and constricted affect subscales compared to meditation-naïve individuals. Given the lack of any direct data on this topic, no specific predictions were made in relation to other schizotypy dimensions. It was, however, anticipated that any associations present in both studies, even if with a small effect size, would represent true effects. Study 2, in addition to aiming to replicate the findings of Study 1, explored the relationship between the dimensions of schizotypy and the facets of trait mindfulness indexed by the Five Facet Mindfulness Questionnaire (FFMQ). This investigation included two independent studies. Study 1 included 24 experienced lay meditators and 24 meditation-naïve individuals. The meditators were recruited from Buddhist centers across the UK via posters and advertisements. Meditators had to have been consistent in their practice for over 2 years, practicing at least 6 days a week for a minimum of 45 min a day, and were drawn from the Dzogchen and Mahamudra traditions of Tibetan Buddhism. Meditation-naïve individuals had to have no experience of mindfulness-related practices, including meditation, yoga, tai chi, chi gong, or martial arts, and were recruited from a database of healthy volunteers as well as via emails and circulars sent to the students and staff of King's College London. Study 2 included 28 experienced male meditators, mainly from the Zen, Theravada, Vajrayana and Triratna traditions of Buddhism, and 28 meditation-naïve male individuals, recruited in the same way as in Study 1 using the same criteria. Additional inclusion criteria for both studies were IQ > 80, as assessed by the Wechsler Abbreviated Scale of Intelligence, age between 18 and 60 years, non-smoking, and not drinking more than 28 units of alcohol per week. Participants with a current or past diagnosis of a neuropsychiatric disorder, substance abuse and/or regular prescription medication, as assessed by the screening interview, were excluded. The study procedures were approved by the King's College London research ethics committee. Participants provided written informed consent to their participation and were compensated for their time. All participants completed the Schizotypal Personality Questionnaire (SPQ), which contains 9 subscales: ideas of reference, excessive social anxiety, magical thinking, unusual perceptual experiences, odd/eccentric behavior, no close friends, odd speech, constricted affect and suspiciousness. This 74-item assessment of DSM-III-R schizotypal personality disorder provides an overall score of individual differences in schizotypal personality in addition to the scores of the above-mentioned subscales. With high internal reliability, test–retest reliability, convergent validity and discriminant and criterion validity, it is considered a well-validated measure of schizotypy. All participants of Study 2
Despite growing evidence for demonstrated efficacy of mindfulness in various disorders, there is a continuous concern about the relationship between mindfulness practice and psychosis. As schizotypy is part of the psychosis spectrum, we examined the relationship between long-term mindfulness practice and schizotypy in two independent studies. Study 1 included 24 experienced mindfulness practitioners (19 males) from the Buddhist tradition (meditators) and 24 meditation-naïve individuals (all males). Study 2 consisted of 28 meditators and 28 meditation-naïve individuals (all males). All participants completed the Schizotypal Personality Questionnaire (Raine, 1991), a self-report scale containing 9 subscales (ideas of reference, excessive social anxiety, magical thinking, unusual perceptual experiences, odd/eccentric behavior, no close friends, odd speech, constricted affect, suspiciousness).
also completed the FFMQ to investigate the relationship between trait mindfulness and schizotypy. The FFMQ was derived from a factor analysis performed on five of the most commonly used mindfulness measures. The five facets are observing, describing, acting with awareness, non-judging of inner experience, and non-reactivity to inner experience, assessed using 39 items rated on a Likert scale. The FFMQ has high internal consistency, ranging from 0.75 to 0.91. Group differences in age, IQ, FFMQ and SPQ scores were examined using independent-sample t-tests, run separately for the two studies. Given the significant difference in age and IQ between the meditator and meditation-naïve groups in Study 1, we examined the relationship of SPQ scores with age and IQ, and then re-evaluated the group difference in one of the SPQ subscales that showed a positive association with IQ, using analysis of covariance co-varying for IQ. In Study 2, we examined the correlations between trait mindfulness and SPQ scores across both samples, and then separately in the meditator and meditation-naïve groups. Given the limited range of scores on some SPQ subscales, we report Spearman correlations. All data analysis was conducted using the IBM Statistical Package for Social Sciences. The alpha level of significance was set at p = 0.05 in all analyses unless specified otherwise. Demographic characteristics of the meditator and meditation-naïve groups, along with the descriptive statistics and group differences in SPQ and FFMQ scores, are presented in Table 1. Meditators were older and had higher IQ than meditation-naïve individuals. Meditators scored significantly higher on ‘magical thinking’ and significantly lower on the ‘suspiciousness’, ‘constricted affect’ and ‘no close friends’ subscales of the SPQ relative to meditation-naïve individuals. The two groups did not differ in total schizotypy scores. Unexpectedly, there was a significant negative correlation between IQ and ‘no close friends’ subscale scores, and the significant difference between the meditator and meditation-naïve groups in ‘no close friends’ scores was abolished when we controlled for IQ. IQ and age were not correlated with ‘excessive social anxiety’, ‘magical thinking’ or ‘suspiciousness’ scores. In Study 2, the meditator and meditation-naïve groups were comparable on age and IQ. Replicating the observations of Study 1, meditators scored significantly higher on ‘magical thinking’ and lower on ‘suspiciousness’ relative to meditation-naïve individuals. They also scored lower, at trend level, on ‘excessive social anxiety’. As in Study 1, the total SPQ profile did not significantly differ between the two groups. Age and IQ did not correlate with SPQ scores. Meditators scored significantly higher on the Observe, Non-judgment and Non-reactivity mindfulness facets of the FFMQ compared to meditation-naïve individuals. Across all participants, there were negative correlations of ‘excessive social anxiety’ with Awareness and Non-judgment; ‘odd speech’ with Describe and Awareness; ‘constricted affect’ with Awareness and Non-judgment; and ‘suspiciousness’ with Awareness, Non-judgment and Non-reactivity. Both the meditator and meditation-naïve groups contributed to all these relationships, except for the negative correlation between suspiciousness and Non-reactivity, which was present mainly in the meditator group. In line with our a priori hypothesis, meditators, compared to the meditation-naïve individuals, scored significantly lower on ‘constricted affect’ in Study 1, and showed trend-level lower scores on
‘excessive social anxiety’ in both studies. In addition, meditators scored significantly higher on ‘magical thinking’, and significantly lower on ‘suspiciousness’, in both studies. In relation to the association between trait mindfulness and schizotypy dimensions, lower ‘excessive social anxiety’ and ‘constricted affect’ scores were associated with higher Awareness and Non-judgment; lower ‘odd speech’ with higher Awareness and Describe; and lower ‘suspiciousness’ with higher Awareness, Non-judgment and Non-reactivity scores. ‘Constricted affect’, relating to a form of emotional blunting, appears to be positively affected only by the mindfulness practice styles of Dzogchen and Mahamudra, which are most similar to the MBSR/MBCT approach, as this schizotypy dimension did not significantly differentiate the long-term meditators drawn from the Zen, Vipassana, Theravada, Vajrayana and Triratna traditions of Buddhism from the meditation-naïve participants in Study 2. The ‘excessive social anxiety’ subscale relates to overt physiological changes along with a high degree of nervousness and anxiety. The finding of lower scores in meditators on this subscale, albeit non-significant, is in line with the notion that mindfulness training reduces anxiety. Significant inverse correlations of lower ‘excessive social anxiety’ and ‘constricted affect’ scores with higher Awareness and Non-judgment scores suggest that trait mindfulness alleviates the so-called negative symptoms of schizotypy via non-judgemental present-centered awareness; this effect could be strengthened by mindfulness practice, as suggested by significantly higher scores on the Non-judgment facet in long-term meditators compared to meditation-naïve individuals. This is in line with preliminary evidence showing ameliorating effects of mindfulness training on symptoms of anxiety and depression in people with psychosis by reducing self-critical attitudes and developing non-judgmental present-centered awareness, as well as self-acceptance and self-compassion. One of our novel findings is that meditators scored significantly lower on ‘suspiciousness’ in both samples. Although not specifically hypothesized, this finding is highly relevant to the clinical applications of mindfulness for the prevention and treatment of psychosis. From the time of Kraepelin, suspiciousness and paranoia have been considered to be among the main symptoms of psychosis. These symptoms may stem from the avoidance of personal exposure and negative self-image, distorting reality in the process so as to strengthen impaired self-esteem. This avoidant nature is in contrast to mindfulness, which promotes direct engagement with reality and attention to all aspects of the present-moment experience, non-judgmentally and non-reactively. Similarly, the distorted view of oneself and the characteristics of suspiciousness and paranoia are in contrast to the greater empathy, compassion, and prosocial behavior associated with mindfulness. Given that a) paranoid schizophrenia is the most common type of psychosis experienced, b) suspiciousness/paranoia carries a high predictive power for conversion to psychosis in high
Participants of Study 2 also completed the Five-Facet Mindfulness Questionnaire, which assesses the observing (Observe), describing (Describe), acting with awareness (Awareness), non-judging of inner experience (Non-judgment) and non-reactivity to inner experience (Non-reactivity) facets of trait mindfulness. In both studies, meditators scored significantly lower on suspiciousness and higher on magical thinking compared to meditation-naïve individuals, and showed a trend towards lower scores on excessive social anxiety. Excessive social anxiety correlated negatively with Awareness and Non-judgment, and suspiciousness with the Awareness, Non-judgment and Non-reactivity facets, across both groups. The two groups did not differ in their total schizotypy score.
risk individuals, alongside genetic risk and unusual thought content, and c) we found an inverse correlation between ‘suspiciousness’ and Non-judgement in meditators only, MBIs might hold promise in preventing psychosis in high-risk individuals. Another novel finding of our investigation is the higher score on ‘magical thinking’ in meditators in both studies. Given that ‘magical thinking’ was not associated with any of the FFMQ facets that were significantly higher in meditators compared to meditation-naïve individuals in Study 2 and, as such, does not appear to develop due to mindfulness practice per se, the most likely explanation for this finding is that our mindfulness meditators were mainly practicing within the Buddhist tradition. The ‘magical thinking’ subscale measures beliefs in such supernatural experiences as telepathy, clairvoyance, astrology, and sixth sense, which are incorporated into Buddhist psychology and metaphysics, particularly in the Tibetan Buddhist tradition. The higher scores on ‘magical thinking’ in the face of low scores on other schizotypy dimensions are in line with research showing that having a context for unusual experiences and/or beliefs makes a difference in terms of whether they lead to diagnosable mental health difficulties or whether they become integrated into one's life without causing a functional disruption. It is also possible that people attracted to meditation practice within the context of Buddhist beliefs and metaphysics are higher on magical thinking to begin with, or that the higher score on ‘magical thinking’ simply reflects greater openness to experience in meditators, rather than actual beliefs in these ‘supernatural’ constructs. The latter possibility is more likely given that trait mindfulness has been shown to be associated with greater openness to experience, and there is an association between schizotypy and openness to experience. Mindfulness meditators thus may simply have greater open-mindedness towards what constitutes ‘magical thinking’ in the SPQ than the average population. Whether higher ‘magical thinking’ is an ‘artifact’ of the Buddhist belief system or whether it indexes greater open-mindedness of mindfulness practitioners could be addressed in further research by recruiting long-term meditators who practice mindfulness within a secular setting. Particularly relevant to psychosis is our finding that higher ‘magical thinking’ in meditators was not accompanied by higher ‘ideas of reference’. The ‘ideas of reference’ subscale measures the tendency to self-reference experience, i.e. to
over-attribute personal relevance and meaning to inner experiences and external events. Mindfulness practice, on the other hand, attenuates self-referential tendencies and the associated brain dynamics; the same brain networks are found to be hyperactive in people with schizophrenia. The combination of high magical thinking and low ideas of reference is in alignment with frameworks of psychosis which suggest that it is not unusual beliefs and/or experiences per se that constitute a risk for psychosis, but rather their interpretation and hyper self-referencing. Given that unusual beliefs and thought content constitute a risk for psychosis conversion, the reduction of self-referencing might be another rationale for mindfulness-based psychosis prevention. The observed pattern of inverse associations between the dimensions of schizotypy and the Awareness, Non-judgment and Non-reactivity facets of mindfulness suggests that trait mindfulness reduces negative dimensions of schizotypy, whereas mindfulness practice might have further ameliorating effects on ‘excessive social anxiety’ and ‘suspiciousness’, as these were lower in meditators compared to meditation-naïve participants. These findings may have important therapeutic implications, suggesting that a) future MBIs with a strong emphasis on the Awareness, Non-judgment and Non-reactivity aspects of mindfulness may be particularly effective in reducing anxiety-related symptoms, depression, and suspiciousness in psychosis; and b) mindfulness could be used as a therapeutic tool for psychosis prevention by addressing suspiciousness and paranoia in high-risk populations. Our investigation has a number of limitations. First, it examined the relationship between schizotypy dimensions and mindfulness in a cross-sectional correlational design, without any knowledge of the meditators' schizotypy scores prior to them starting mindfulness practice. Future research could examine the effects of shorter-duration MBIs on the relationship between these traits. Second, this investigation was opportunistic, using two existing data sets consisting of mostly or only men. Our findings thus cannot be generalized to women. Third, both schizotypy and mindfulness were assessed using self-report methods. While self-reports have their strengths, such as providing in-depth, detailed data gathered directly from the participant while limiting experimenter bias, they also have limitations, such as socially desirable responding resulting in underestimation or overestimation of actual traits. Given that people's perceptions of themselves are known to be poor predictors of their behavior, future studies, wherever possible, should incorporate experimental analogues of relevant phenomena. In conclusion, to our knowledge, this is the first investigation to have focused on the schizotypy profiles of experienced mindfulness practitioners. The findings demonstrated lower ‘excessive social anxiety’, as well as significantly lower ‘suspiciousness’ and higher ‘magical thinking’, in meditators relative to meditation-naïve individuals.
"These differences, taken together with the pattern of correlational observations, suggest that mindfulness training with emphasis on developing the facets of Awareness, Non-judgment and Non-reactivity may help to reduce social anxiety and suspiciousness in psychosis and related populations.The sponsors had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication.Elena Antonova and Veena Kumari conceptualized the study.Elena Antonova and Bernice Wright assisted with participant recruitment and data collection.Veena Kumari and Elena Antonova undertook the statistical analysis and prepared the first draft.All authors contributed to the final version.The authors report no biomedical financial interests or potential conflicts of interest.
We conclude that mindfulness practice is not associated with an overall increase in schizotypal traits. Instead, the pattern suggests that mindfulness meditation, particularly with an emphasis on the Awareness, Non-judgment and Non-reactivity aspects, may help to reduce suspiciousness and excessive social anxiety.
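A minimal sketch of the group-comparison and correlational analyses reported for the two studies above (independent-samples t-tests and Spearman correlations between FFMQ facets and SPQ subscales) is given below. It is an illustration only: the data file and column names are hypothetical placeholders, and the snippet is not the authors' analysis code.

```python
# Minimal sketch of the analyses described above (hypothetical data file and
# column names; not the authors' code).
import pandas as pd
from scipy import stats

df = pd.read_csv("schizotypy_mindfulness.csv")  # hypothetical dataset

meditators = df[df["group"] == "meditator"]
naive = df[df["group"] == "meditation_naive"]

# Independent-samples t-test for a group difference on an SPQ subscale
t, p = stats.ttest_ind(meditators["spq_suspiciousness"], naive["spq_suspiciousness"])
print(f"Suspiciousness: t = {t:.2f}, p = {p:.3f}")

# Spearman correlation between a mindfulness facet and a schizotypy subscale,
# chosen because some SPQ subscales have a limited range of scores
rho, p = stats.spearmanr(df["ffmq_nonjudgment"], df["spq_suspiciousness"])
print(f"Non-judgment vs. suspiciousness: rho = {rho:.2f}, p = {p:.3f}")
```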
Porcine epidemic diarrhea virus is the causative agent of porcine epidemic diarrhea, an enteric disease affecting pigs of all ages. The disease is characterized by acute watery diarrhea, dehydration and vomiting, with high mortality in neonatal piglets. Devastating outbreaks of PED in East Asia and in North America have revitalized the research into this porcine coronavirus, which was first identified in 1978. PEDV primarily replicates in the villous enterocytes of the small intestine. Its entry into host cells is mediated by the spike glycoprotein that is exposed on the virion surface. This key entry factor is considered the main determinant of viral host and tissue tropism. Moreover, the S protein is highly immunogenic and the main target for neutralizing antibodies. Understanding this protein's function will thus aid the design of strategies against this enteric swine coronavirus and is fundamental to our understanding of its epidemiology and pathogenesis. In this review, following a brief and general introduction on PEDV, we will describe the structure and function of the spike glycoprotein. In particular, we will report the generation of a recombinant PEDV harboring a large deletion in the S protein's N-terminal region, used in studies to assess the role of the sialic acid binding activity of PEDV S in infection. Finally, we will discuss the mechanism by which the S protein is proteolytically activated for membrane fusion. PEDV is a member of the Coronaviridae family. This family of viruses comprises a large group of enveloped viruses with a positive-sense RNA genome of up to 32 kilobases. Coronaviruses infect a broad range of mammalian and avian hosts and can cause respiratory, enteric, hepatic and neurological disease. Pathogenic coronaviruses are found in farm animals as well as in humans and have demonstrated the potential to cross the host-species barrier. Two zoonotic coronaviruses – the severe acute respiratory syndrome coronavirus and the Middle East respiratory syndrome coronavirus – have emerged over the last two decades, both causing severe and often fatal respiratory disease in humans. Coronaviruses have recently been subdivided into four genera: Alphacoronavirus, Betacoronavirus, Gammacoronavirus and Deltacoronavirus. Pathogenic viruses in each genus include transmissible gastroenteritis virus, human coronavirus 229E and HCoV-NL63, mouse hepatitis virus, SARS-CoV, MERS-CoV, avian infectious bronchitis virus and porcine deltacoronavirus. In swine, five coronaviruses have been identified, representing three of the four genera. PEDV, TGEV and the natural TGEV deletion mutant porcine respiratory coronavirus (PRCoV) belong to the Alphacoronavirus genus. TGEV mainly infects epithelial cells of the small intestine and causes enteritis and fatal diarrhea in piglets; it is clinically indistinguishable from PEDV. Unlike TGEV, PRCoV mostly infects epithelial cells of the respiratory tract and alveolar macrophages, causing a mild or often subclinical respiratory disease. The porcine hemagglutinating encephalomyelitis virus belongs to the Betacoronavirus genus; it targets respiratory and neuronal tissues and causes vomiting, wasting disease and neurological disorders in seronegative piglets. The recently identified PDCoV of the Deltacoronavirus genus has enteric tropism, causing mild to moderate disease in young piglets. PED was not detected in swine until the 1970s. The first PED outbreak in swine was recognized in England in 1971. Seven years later the etiological agent was identified as a coronavirus and officially named
PEDV. PED was prevalent throughout Europe, causing sporadic, localized outbreaks in the 1980s, 1990s and subsequent years. PED was first reported in Asia in 1982 and since then it has had an increasingly severe economic impact on the Asian swine industry. Particularly since 2010, devastating outbreaks have been reported in China and other Asian countries, causing up to 100% mortality in suckling piglets. PEDV entered the United States for the first time in April 2013, and this virulent strain rapidly spread across the US to 36 states, as well as to other countries in North and South America, including Canada, Mexico, the Dominican Republic, Colombia and Peru. A less virulent PEDV strain, characterized by small genomic insertions and deletions in the viral spike glycoprotein, has also been detected in the US. Since 2014, PEDV has reemerged in Europe, including Germany, Italy, Austria, The Netherlands, Belgium, Portugal, France and Ukraine. PEDV mainly infects and replicates in villous enterocytes of the small intestine. Infection results in destruction of the intestinal epithelium with subsequent villus shortening, causing watery diarrhea that lasts for about a week. Other clinical symptoms include vomiting, anorexia and fever. Pigs of all ages are susceptible, but symptoms are most severe in suckling piglets less than one week old, with mortality rates often approaching 100%. Fatality rates in weaned pigs are much lower, while mortality has not been observed among fattening pigs. Many studies indicate that PEDV does not replicate outside the intestinal tract, though PEDV was detected in a recent study by RT-PCR and IHC in other organs of experimentally infected piglets, including lung, liver, kidney and spleen. The PEDV S protein is the key protein responsible for virus entry into the target cell. It mediates the essential functions of receptor binding and subsequent fusion of the viral and cellular membranes during cell entry, thereby releasing the viral nucleocapsid into the cytoplasm. The PEDV S protein is a glycoprotein of approximately 1383 residues and 180–200 kilodaltons in size. Trimers of these S proteins form the club-shaped, approximately 20 nm long projections on the virion surface that give the coronavirus its typical crown-like appearance on electron micrographs. Like other CoV spike proteins, PEDV S is a type I membrane glycoprotein with an N-terminal signal peptide, a large extracellular region, a single transmembrane domain and a short cytoplasmic tail. The ectodomain of coronavirus spike proteins can be divided into two domains with distinct functions: the N-terminal S1 subunit responsible for receptor binding and the C-terminal membrane-anchored S2 domain responsible for membrane fusion. The border between
Porcine epidemic diarrhea virus (PEDV), a coronavirus discovered more than 40 years ago, regained notoriety recently by its devastating outbreaks in East Asia and the Americas, causing substantial economic losses to swine husbandry. The virus replicates extensively and almost exclusively in the epithelial cells of the small intestine, resulting in villus atrophy, malabsorption and severe diarrhea. Cellular entry of this enveloped virus is mediated by the large spike (S) glycoprotein, trimers of which mediate virus attachment to the target cell and subsequent membrane fusion.
in vitro but also in vivo. The GDU spike protein has high homology to the spike protein of the original highly virulent US strains, whereas the UU spike protein is of the less virulent S INDEL type. Most of the variation in the spike proteins of the original virulent US strains and the S INDEL strains maps to the N-terminal region of the S protein. All amino acid insertions and deletions that characterize the S INDEL strains occur within this region. The coronavirus spike protein is highly immunogenic and the main target for neutralizing antibodies. Differences in neutralizing titers of antisera raised against S proteins of different PEDV subtypes correlated with variation in these spike proteins. Antigenic variation in the N-domain is consistent with a functional relevance of this domain of the S protein in vivo and may have provided the virus an evolutionary advantage in the evasion of adaptive immune responses. The inter-strain variation in sialic acid dependence observed for PEDV has also been seen for viruses of other virus families, including enterovirus 68, human norovirus and T3 reovirus, though the significance of this polymorphism is unknown. Clearly, further studies are needed in vitro and in vivo to functionally assess differences in the sialic acid binding activity of S proteins and their consequences for virus infection and pathogenesis. As first demonstrated by Hofmann and Wyler, propagation of PEDV in cultured cells strictly requires the supplementation of trypsin to the culture medium. Yet, cell culture adaptation of PEDV may result in a trypsin-independent propagation phenotype, as illustrated by the cell-passaged DR13 strain. Analysis of PEDV recombinants carrying spike proteins of trypsin-dependent and trypsin-independent viruses in an isogenic background demonstrated that the differences in trypsin dependency were determined by the spike protein. Moreover, inclusion of trypsin after the inoculation stage appeared to be required for cell–cell fusion and syncytia formation. These observations suggest a role of trypsin in the activation of the spike protein's membrane fusion potential. In addition to cell entry, proteolysis is also required for release of progeny virus from the infected cell. Fusion of the coronavirus envelope membrane with a host cell membrane is driven by conformational changes in the spike protein. These conformational changes are irreversible and hence tightly regulated in time and space in order to prevent premature activation of the fusion protein. Conformational changes in the spike protein can be initiated by receptor binding as well as by acidic pH.
Similar to other class I viral fusion proteins, the coronavirus spike protein requires proteolytic processing to activate its fusion potential. The spikes of a number of coronaviruses, particularly within the beta- and gammacoronavirus genera, are cleaved during biogenesis in the infected cell at the S1/S2 junction by the subtilisin-like proprotein convertase furin. However, the spike protein of PEDV and many other alphacoronaviruses is presented on the virion surface in an uncleaved form. Recently, a second, more universal cleavage site has been proposed within the S2 subunit, located just upstream of the fusion peptide, for some beta- and gammacoronavirus spike proteins including those of MERS-CoV, SARS-CoV, MHV and IBV. This cleavage is thought to occur at the cell surface or in the endosomal compartment during virus-cell entry. PEDV is an excellent model virus to study proteolytic activation of coronavirus spike proteins, given its unique requirement for supplemental trypsin proteases for infection in cell culture. We and others have shown that this proteolytic activation by trypsin proteases only occurred after the receptor binding stage. Treatment of virions or cells with trypsin prior to receptor binding did not rescue infectivity. This indicates that the trypsin cleavage site within the PEDV S protein is inaccessible on the virion, in contrast to other class I viral fusion proteins, including influenza virus hemagglutinin, which can be proteolytically primed at any stage after folding. The dependency on virus-cell interaction for exposure of the S protein cleavage site might prevent premature triggering of the S protein fusion machinery in the protease-rich intestine. Introduction of a furin cleavage site in the PEDV S protein by a single valine-to-arginine substitution at a position N-terminal of the predicted fusion peptide yielded a mutant virus exhibiting trypsin-independent membrane fusion. This observation further supports the hypothesis that cleavage just upstream of the fusion peptide is a general and essential requirement for activation of the CoV spike protein's membrane fusion function. Replication of PEDV seems to be restricted to the enterocytes of the intestinal epithelium. Multiple factors may determine virus tropism, including in particular the availability of functional receptors and fusion-activating proteases. Alterations in cleavage requirements can have a profound effect on the tissue tropism and pathogenicity of viruses. The highly pathogenic phenotype of avian influenza viruses is largely determined by the acquisition of a multibasic cleavage site in the HA protein, which switches processing of the hemagglutinin protein from tissue-resident trypsin-like proteases to the ubiquitously expressed furin-like proteases. The strict requirement of PEDV for supplemental trypsin proteases during its cell entry and release is likely met in vivo by intestine-resident proteases. Gastric and pancreatic proteases, or proteases locally expressed by intestinal epithelial cells, may facilitate these processes essential for PEDV infection in the animal host and hence limit its tropism to the enteric tract.
PEDV propagation in vitro requires the presence of trypsin(-like) proteases in the culture medium, which enables the fusion function of the S protein. Moreover, we summarize recent progress on the proteolytic activation of PEDV S proteins and discuss factors that may determine the tissue tropism of PEDV in vivo.
Conventional deep learning classifiers are static in the sense that they are trained on a predefined set of classes and learning to classify a novel class typically requires re-training. In this work, we address the problem of Low-shot network-expansion learning. We introduce a learning framework which enables expanding a pre-trained deep network to classify novel classes when the number of examples for the novel classes is particularly small. We present a simple yet powerful distillation method where the base network is augmented with additional weights to classify the novel classes, while keeping the weights of the base network unchanged. We term this learning hard distillation, since we preserve the response of the network on the old classes to be equal in both the base and the expanded network. We show that since only a small number of weights needs to be trained, the hard distillation excels for low-shot training scenarios. Furthermore, hard distillation avoids detriment to classification performance on the base classes. Finally, we show that low-shot network expansion can be done with a very small memory footprint by using a compact generative model of the base classes training data with only a negligible degradation relative to learning with the full training set.
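As a rough illustration of the hard-distillation idea summarised above, the sketch below freezes a pre-trained base network and adds a small linear head for the novel classes, so that only the new weights are trained and the responses on the base classes remain unchanged. This is a minimal PyTorch sketch under our own assumptions about the architecture (e.g. that the base network returns features and base-class logits); it is not the authors' implementation.

```python
# Sketch of low-shot network expansion via "hard distillation" (assumed design,
# not the authors' code): the base network is frozen and only a new output head
# for the novel classes is trained, so base-class responses stay unchanged.
import torch
import torch.nn as nn

class ExpandedClassifier(nn.Module):
    def __init__(self, base_net: nn.Module, feat_dim: int, n_novel: int):
        super().__init__()
        self.base_net = base_net                        # pre-trained feature extractor + base head
        self.novel_head = nn.Linear(feat_dim, n_novel)  # new weights for the novel classes
        for p in self.base_net.parameters():            # freeze the base network
            p.requires_grad_(False)

    def forward(self, x):
        feats, base_logits = self.base_net(x)           # assumed to return (features, base logits)
        novel_logits = self.novel_head(feats)
        return torch.cat([base_logits, novel_logits], dim=1)

# Only the novel head is optimized, which suits low-shot training, e.g.:
# model = ExpandedClassifier(pretrained_base, feat_dim=512, n_novel=5)
# optimizer = torch.optim.SGD(model.novel_head.parameters(), lr=0.01)
```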
In this paper, we address the problem of Low-shot network-expansion learning
current review are limited to a sample of otherwise healthy male and female adults. The outcomes, therefore, may not extrapolate to other potentially vulnerable groups, and so this should be explored. It would also be of value to see a more active exploration of gender differences in the study of psychological benefits of weight loss. Of the 36 studies included in the current review, approximately half were conducted in females only. Of those which included both males and females, more females than males took part, which led to unbalanced samples. Effects of gender on the outcomes measured were rarely formally assessed. Interestingly, one study reported changes in HRQoL to be gender-specific, in that males demonstrated improvement in the physical HRQoL domain whereas females demonstrated psychological and emotional improvements. It would be useful, therefore, for future studies to explore this in more detail. Finally, to enhance the effectiveness of the interventions used, it is of value to identify the key components that lead to success and, further, to develop a more comprehensive, inclusive definition of ‘success’ that includes improved psychological outcomes together with physiological changes. A review of 36 studies demonstrated consistent, significant improvements in psychological outcomes following participation in a behavioural and/or dietary weight loss intervention, both with and without exercise, post-intervention and at one-year follow-up. Specifically, improvements in self-esteem, depressive symptoms, body image and health-related quality of life were observed. Calculated effect sizes to determine the magnitude of change pre- to post-intervention demonstrated substantial variation across interventions and outcomes, showing more consistency and larger changes in body image and vitality. However, it was not possible to calculate effect sizes for all pre- to post-intervention comparisons of interest. Consequently, not all observed effects could be supported, and these should be treated with caution. Improvements generally increased in magnitude with greater weight loss but were also observed with no weight change. Greater weight loss was more strongly associated with greater improvements in HRQoL. The type of intervention may mediate this effect, in that diet/exercise-based interventions may be more dependent on weight loss for improved wellbeing, whereas behavioural interventions with a psychological focus may enhance autonomy and serve to change attitudes and promote positive psychological wellbeing. Greater weight loss and/or self-acceptance may mean that these effects can be maintained over longer periods of time. Despite a generally acceptable standard of quality, quality assessment scores varied and a number of methodological issues were identified. More research, therefore, is needed to improve the quality of intervention trials, to fully elucidate the effects of weight loss on psychological outcomes, to identify the effective elements of the interventions used and to incorporate a broader range of psychological domains, for example, self-efficacy and autonomy.
However, few behavioural and dietary interventions have investigated psychological benefit as the primary outcome. Hence, systematic review methodology was adopted to evaluate the psychological outcomes of weight loss following participation in a behavioural and/or dietary weight loss intervention in overweight/obese populations. Thirty-six studies were selected for inclusion and reviewed. Changes in self-esteem, depressive symptoms, body image and health-related quality of life (HRQoL) were evaluated and discussed. Where possible, effect sizes indicating the magnitude of change pre- to post-intervention were calculated using the Hedges' g standardised mean difference. The results demonstrated consistent improvements in psychological outcomes concurrent with, and sometimes without, weight loss. Improvements in body image and HRQoL (especially vitality) were closely related to changes in weight. Calculated effect sizes varied considerably and reflected the heterogeneous nature of the studies included in the review. Although the quality of the studies reviewed was generally acceptable, only 9 out of 36 studies included a suitable control/comparison group, and the content, duration of intervention and measures used to assess psychological outcomes varied considerably. Further research is required to improve the quality of studies assessing the benefits of weight loss in order to fully elucidate the relationship between weight loss and psychological outcomes.
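For reference, a standard two-sample form of the Hedges' g standardised mean difference used in the review above is sketched below; the exact variant applied to pre- to post-intervention changes (for example, the handling of correlated repeated measures) may differ from this form.

```latex
g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}
```

Here \bar{x}, s and n are the means, standard deviations and sample sizes of the two sets of scores, s_p is the pooled standard deviation, and J is the small-sample bias correction.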
being released. We did not detect a substantial effect of PBP1B on the rate of C15-PP degradation by the phosphatases. However, more experiments are needed to test the possibility that the degradation of C55-PP is faster when it is delivered directly from the polymerase than when it is freely diffusing in the membrane. The interaction between PgpB and PBP1B could improve the efficiency of PG polymerization in the cell membrane by preventing inhibition by locally high concentrations of C55-PP. Our results are consistent with several possible mechanisms of stimulation, which are not mutually exclusive. The association of both proteins may accelerate the release of C55-PP from the active site of the GTase, prevent the inhibition of the GTase by the free pool of C55-PP on the periplasmic side of the cell membrane, and/or cause an allosteric activation of PBP1B. While further experiments are needed to assess the contribution of each of these possibilities, it is possible that C55-PP accumulates locally at sites of PG synthesis and, hence, that its fast removal is achieved by the coupling of C55-PP phosphatases and PG synthases. Similar mechanisms might be used in other pathways involving the use of polyprenyl phosphates as lipid carriers for oligosaccharides across membranes. PgpB has been described to have a dual role in the cell. The protein is not only involved in the dephosphorylation of C55-PP, but it also dephosphorylates phosphatidylglycerol-phosphate, the precursor of the most abundant anionic phospholipid in E. coli, phosphatidylglycerol. In fact, PgpB on its own is able to sustain phosphatidylglycerol synthesis in the absence of the other PGP phosphatases. This second activity raises the possibility that PgpB triggers the synthesis of anionic phospholipids at sites of PBP1B localization, and that the phospholipid composition could be another factor that regulates PG growth. Interestingly, it has been reported that peptidoglycan synthesis requires ongoing phospholipid synthesis and that this is likely due to disruption of lipid II transport when phospholipid synthesis is blocked. In fact, it has recently been reported that lipid synthesis is a major determinant of bacterial cell size, independently of the stringent response by ppGpp. Further studies are needed to decipher the possible role of phospholipids in the regulation of PG synthesis. To our knowledge, we report here for the first time an interaction between a membrane-anchored oligosaccharide glycosyltransferase and a polyprenyl pyrophosphate phosphatase. The interaction between the PG synthase PBP1B and the membrane phosphatase PgpB stimulates PG synthesis in membrane systems, presumably due to the faster release of the carrier lipid C55-PP from the active site of the polymerase and the prevention of substrate inhibition by C55-PP. We have no conflict of interest.
Peptidoglycan (PG) is an essential component of the bacterial cell wall that maintains the shape and integrity of the cell. The PG precursor lipid II is assembled at the inner leaflet of the cytoplasmic membrane, translocated to the periplasmic side, and polymerized to glycan chains by membrane-anchored PG synthases, such as the class A Penicillin-binding proteins (PBPs). Polymerization of PG releases the diphosphate form of the carrier lipid, undecaprenyl pyrophosphate (C55-PP), which is converted to the monophosphate form by membrane-embedded pyrophosphatases, generating C55-P for a new round of PG precursor synthesis. Here we report that deletion of the C55-PP pyrophosphatase gene pgpB in E. coli increases the susceptibility to cefsulodin, a β-lactam specific for PBP1A, indicating that the cellular function of PBP1B is impaired in the absence of PgpB. Purified PBP1B interacted with PgpB and another C55-PP pyrophosphatase, BacA, and both PgpB and BacA stimulated the glycosyltransferase activity of PBP1B. C55-PP was found to be a potent inhibitor of PBP1B. Our data suggest that the stimulation of PBP1B by PgpB is due to the faster removal and processing of C55-PP, and that PBP1B interacts with C55-PP phosphatases during PG synthesis to couple PG polymerization with the recycling of the carrier lipid and prevent product inhibition by C55-PP.
Raw materials form the basis of Europe's economy to ensure jobs and competitiveness, and they are essential for maintaining and improving our quality of life. Securing reliable, sustainable, and undistorted access to raw materials and their circular use in the economy is, therefore, of growing concern within the EU and globally. Recent years have seen a tremendous increase in the amount of materials extracted and used, together with a significant growth in the number of materials used in single products. Global economic growth coupled with technological change will increase the demand for many raw materials in the future. “Criticality” combines a comparatively high economic importance with a comparatively high risk of supply disruption. In 2008 the U.S. National Research Council proposed a framework for evaluating material “criticality” based on a metal's supply risk and the impact of a supply restriction. Since that time, a number of organizations worldwide have built upon that framework in various ways. Even though all raw materials are important, some resources are obviously of more concern than others. The list of CRMs for the EU and the underlying criticality methodology are therefore key instruments in the context of the EU raw materials policy. Such a list is a precise commitment of the Raw Materials Initiative (RMI) and subsequent updates. The EU criticality methodology was developed between April 2009 and June 2010 with the support of the European Commission's Ad-Hoc Working Group on Defining Critical Raw Materials within the RMI, in close cooperation with EU Member States and stakeholders. The EC criticality methodology has already been used twice: to create a list of 14 CRMs for the EU in 2011 and an updated list of 20 CRMs in 2014. Given the intense and active dialogue with multiple stakeholders, the use of the best available data reflecting the current situation and recent past, and considering that fully transparent datasets and calculations were made available to a large group of experts, the EC criticality methodology is generally well accepted in the EU, as well as considered reliable and robust. After the two releases of the list, and considering several policy documents that make explicit reference to CRMs, it can certainly be stated that the EC criticality methodology is a well consolidated and reliable tool, which represents a cornerstone of the raw materials policy in the EU. In view of the next update of the CRMs list, the EC is considering applying the same methodology again. This choice of continuity gives priority to comparability with the previous two exercises, which is in turn correlated with the need to effectively monitor trends and maintain the highest possible policy relevance. Nevertheless, some targeted and incremental improvements of the existing EU criticality methodology are required, taking into account the most recent methodological developments in the international arena and evolving raw materials markets at the international scale, and considering explicit requests from the European industry and changing policy priorities and needs, e.g., on trade. Valuable support also came from recent projects funded by the EU under different schemes, which tackled specific aspects of criticality and/or contributed to generating European data on flows and stocks of CRMs. As the EC's in-house science service, the Directorate General Joint Research Centre (JRC) provided scientific advice to DG GROWTH in order to assess the current methodology and identify parameters that could be
adjusted to better address the needs and expectations toward the methodology of capturing issues of raw materials criticality in the EU. This work was conducted in close consultation with the ad hoc working group on CRMs, which participated in regular discussions with DG GROWTH and other EC services and provided informed expert feedback. The analysis and subsequent revisions started from the assumption that the methodology used for the 2011 and 2014 CRMs lists had proved to be reliable and robust and, therefore, the JRC mandate was focused on fine-tuning and/or targeted incremental methodological improvements. The goal of this paper is to present key new or modified elements of the EU criticality methodology, to highlight their novelties and/or potential outcomes, and to discuss them in the context of criticality assessment methodologies available internationally. A comprehensive presentation of the revised EC methodology is not a goal of the present paper, but will be given in a future EC publication or communication in view of the third revised list expected in 2017. CRMs are both of high economic importance to the EU and vulnerable to supply disruption. Vulnerable to supply disruption means that their supply is associated with a high risk of not being adequate to meet EU industry demand. High economic importance means that the raw material is of fundamental importance to industry sectors that create added value and jobs, which could be lost in case of inadequate supply and if adequate substitutes cannot be found. Bearing the above concepts in mind, criticality has two dimensions in the EC methodology: Supply Risk (SR) and Economic Importance (EI). A raw material is defined as being critical if both dimensions exceed a given threshold. The SR indicator in the EU criticality assessment is based on the concentration of primary supply from countries and their level of governance. Production of secondary raw materials and substitution are considered as risk-reducing filters. In the SR formula (sketched below), SR stands for supply risk; HHI is the Herfindahl-Hirschman Index; WGI is the scaled World Governance Index; EOLRIR is the End-of-Life Recycling Input Rate; and SI is the Substitution Index. The importance of a raw material to the economy of the Union is assessed by the indicator “Economic Importance”. This indicator relates to the potential consequences in the event of an inadequate
Raw materials form the basis of Europe's economy to ensure jobs and competitiveness, and they are essential for maintaining and improving quality of life. Although all raw materials are important, some of them are of more concern than others; thus the list of critical raw materials (CRMs) for the EU, and the underlying European Commission (EC) criticality assessment methodology, are key instruments in the context of the EU raw materials policy. For the next update of the CRMs list in 2017, the EC is considering applying the overall methodology already used in 2011 and 2014, but with some modifications. As the EC's in-house science service, the Directorate General Joint Research Centre (DG JRC) identified aspects of the EU criticality methodology that could be adapted to better address the needs and expectations of the resulting CRMs list to identify and monitor critical raw materials in the EU. The goal of this paper is to discuss the specific elements of the EC criticality methodology that were adapted by DG JRC, highlight their novelty and/or potential outcomes, and discuss them in the context of criticality assessment methodologies available internationally.
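The supply-risk formula referred to in the text above is not reproduced in this excerpt. Assuming it follows the published 2011/2014 EC formulation, it can be sketched from the indicator definitions given there as:

```latex
SR = \underbrace{\sum_{c} (S_c)^2 \, WGI_c}_{HHI_{WGI}} \cdot (1 - EOLRIR) \cdot SI
```

where S_c is the share of global primary supply of the raw material produced by country c and WGI_c is the scaled World Governance Index of that country; the governance-weighted Herfindahl-Hirschman Index is then reduced by the recycling and substitution filters. This reconstruction is an assumption based on the published EC methodology, not a quotation from this document.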
is the scaled World Governance Index; t is the trade adjustment; IR is the Import Reliance; EOLRIR is the End-of-Life Recycling Input Rate; and SI_SR is the Substitution Index related to supply risk. The importance of a raw material to the economy of the Union is assessed by the indicator Economic Importance. This indicator relates to the potential consequences in the event of an inadequate supply of the raw material. In previous criticality assessments, EI was evaluated by accounting for the fraction of each material associated with industrial megasectors at EU level and their gross value added (GVA). However, megasectors combine several 3- and 4-digit NACE sectors with each other and therefore represent GVA at a high level of aggregation. In order to link raw materials to the corresponding manufacturing sectors at higher levels of sectoral resolution, the JRC examined the classification of product groups, economic activities, and NACE sectors in which raw materials are generally used. The resulting revised approach allows for a more detailed allocation of raw material uses to the corresponding NACE sectors. The allocation of uses could, e.g., be done using the PRODCOM product groups and the 5-/6-digit CPA classes corresponding to each type of use. In the cases in which the identification of a CPA category is not possible, the shares could be allocated directly to the corresponding 4-, 3- or 2-digit NACE sectors. At the NACE 2-digit level, statistical identification of uses turned out to be easier. Allocation of the identified end uses to the NACE 2-digit level sectors is facilitated by Eurostat's statistical correspondence between the CPA, NACE 3-/4-digit and NACE 2-digit classifications. In the previous criticality assessments, substitution was only addressed as a filter to decrease the supply risk. Expert judgment was used to determine the substitution/substitutability indexes. However, substitution can also alter the potential consequences of a supply shortage to the European economy and should therefore also be considered in the economic importance component. Substitution of raw materials is addressed in the majority of criticality studies, in a qualitative or semi-quantitative manner. Expert elicitation is indispensable for such qualitative estimations. A slightly more detailed approach is adopted in the Yale methodology. In the revised EC methodology, substitution is considered to reduce the potential consequences in the case of a supply disturbance. Substitution is to be incorporated, therefore, into the economic importance dimension. Nevertheless, given that the availability of substitutes could also mitigate the risk of supply disruptions, as it might decrease demand for a given raw material, it was recommended to also consider substitution in the estimation of SR. In summary, two different substitution factors are used, one in the EI and one in the SR. Since the scope of the EC assessment focuses on the current situation, only proven substitutes that are readily available today and that could subsequently alter the consequences of a disruption are considered. As a result, only substitution, and not substitutability or potential future substitution, is considered in the revised methodology. A comprehensive presentation of the revised EC methodology with respect to the underlying calculation of the substitution indexes is not a goal of the present paper, but will be given in a future EC publication or communication. In summary, the two main alterations of the refined EI component include a more detailed and transparent
allocation of RM uses to their corresponding NACE sectors, and the introduction of a dedicated substitution index, SI_EI, which acts as a reduction factor for the EI. The EC criticality methodology is a well consolidated and reliable tool, which represents a cornerstone of the raw materials policy in the EU. However, due to changing policy priorities and needs, some targeted and incremental improvements were seen as necessary. As the European Commission's in-house science service, the Directorate General Joint Research Centre provided scientific advice to DG GROWTH in order to assess the current methodology and identify aspects that could be modified to better address the needs and expectations of the list of CRMs. In view of the next update of the CRMs list foreseen in 2017, the JRC mandate was focused on a fitness check of the current methodology and the introduction of some methodological improvements. This choice of continuity gives priority to comparability with the previous two exercises, which is in turn correlated with the need to effectively monitor trends and maintain the highest possible policy relevance. Original contributions with respect to specific elements of criticality assessment were proposed and tested by the JRC, which also highlighted their novelty and potential outcomes. A comprehensive and detailed presentation of the revised EC methodology is not given in the present paper, as possible fine-tuning might still take place during implementation. Under the supply risk dimension, the main novelties of the revised methodology, in response to the corresponding policy needs, include: incorporation of trade barriers and agreements; adoption of a more systematic supply chain bottleneck approach; inclusion of import dependency and a more accurate picture of the actual supply to the EU; and confirmation of the prominent role of recycling, with a substantial improvement in the quality and representativeness of data for the EU. The two main novelties of the refined economic importance component are a more detailed and transparent allocation of RM uses to their corresponding NACE sectors, and the introduction of a dedicated substitution index. The authors declare no competing financial interests.
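The revised formulas themselves are not reproduced in this excerpt. Assuming they follow the published 2017 EC formulation, the revised supply risk and economic importance indicators described above can be sketched as:

```latex
SR = \left[ (HHI_{WGI,t})_{GS} \cdot \frac{IR}{2} + (HHI_{WGI,t})_{EU} \cdot \left(1 - \frac{IR}{2}\right) \right] \cdot (1 - EOLRIR) \cdot SI_{SR}
\qquad
EI = \sum_{s} A_s \, Q_s \cdot SI_{EI}
```

where (HHI_WGI,t) is the governance- and trade-adjusted Herfindahl-Hirschman Index computed for global suppliers (GS) and for the actual suppliers to the EU (EU), IR is the import reliance, A_s is the share of the raw material's end uses allocated to NACE 2-digit sector s, and Q_s is that sector's value added (with scaling applied in the EC implementation). Both sketches are assumptions based on the published methodology rather than formulas quoted from this document.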
Keeping the same methodological approach is a deliberate choice in order to prioritise the comparability with the previous two exercises, effectively monitor trends, and maintain the highest possible policy relevance.
use of automated technology has less of an impact on eating behaviour than the knowledge that a researcher will be examining and weighing the food consumed. Alternatively, it may be that as the participants are regularly interrupted by the UEM software during the meal to make VAS ratings, this increases awareness of how much food is being consumed, and thereby reduces the impact of the explicit awareness manipulation. A comparison of the effects of automated versus experimenter-monitored intake under the same conditions is required to further explore these issues. In the present study we recruited only female participants. This was based on a previous study using a UEM in which some male participants engaged in “competitive eating” and consumed very large amounts of pasta. Therefore, it remains to be investigated whether men would behave similarly when aware or unaware of intake monitoring. In addition, it will be important to examine how individual characteristics such as BMI and dietary restraint interact with awareness, since dieters and obese participants may be more concerned about issues of self-presentation than lean non-dieters. Finally, the results of this study should be considered in relation to the specific eating situation investigated. While we found no effects of awareness of the UEM on total food intake, the limited effects we identified on meal microstructure measures are consistent with previous observations of a potentially important effect of awareness of monitoring when participants are offered energy-dense snack foods to eat. However, these effects require replication in a more representative sample. To date, there have been relatively few investigations of the influence of participant awareness of food intake measurement on eating behaviour, and it is clear that a better understanding of these effects will enable improved design and interpretation of results in future studies. A caveat is that these results were obtained with females eating two test foods from a UEM. Thus, the effects might not translate to other populations or food types, and this requires further investigation. Awareness of the presence of a UEM reduced the rate of consumption of a cookie snack, but had no effect on consumption of a pasta lunch. In addition, participants who were aware of the UEM reported lower levels of fullness while consuming pasta and higher levels of hunger when consuming the cookies. Hence, awareness of this type of monitoring of food intake had relatively limited effects, particularly on consumption of staple foods.
To date, there have been no studies that have explicitly examined the effect of awareness on the consumption of food from a Universal Eating Monitor (UEM; a hidden balance interfaced to a computer which covertly records eating behaviour). We tested whether awareness of a UEM affected consumption of a pasta lunch and a cookie snack. Thirty-nine female participants were randomly assigned to either an aware or an unaware condition. After being informed of the presence of the UEM (aware) or not being told about its presence (unaware), participants consumed ad libitum a pasta lunch from the UEM, followed by a cookie snack. Awareness of the UEM did not significantly affect the amount of pasta or cookies eaten. However, awareness significantly reduced the rate of cookie consumption. These results suggest that awareness of being monitored by the UEM has no effect on the consumption of a pasta meal, but does influence the consumption of a cookie snack in the absence of hunger. Hence, energy-dense snack foods consumed after a meal may be more susceptible to awareness of monitoring than staple food items.
Copper alloys, including bronzes, are currently employed in a wide range of engineering applications because of their high ductility, high corrosion resistance, non-magnetic properties, excellent machinability, and high hardness .Copper is used for electric wiring and in heat exchangers, pumps, tubing, and several other products, while aluminum bronze and high-strength brass are found in marine applications, for example in propellers and propeller shafts .Furthermore, shiny brass is widely employed for coins and for musical instruments.However, in spite of their excellent material characteristics, there is still scope for technical improvements to increase the strength and ductility of these alloys.To achieve improvements in mechanical strength, several copper alloys with high dislocation density and fine microstructure, containing solid solutions, have been proposed.The mechanical strength of ultrafine-grained or nanocrystalline Cu–Al alloys, prepared by equal-channel angular pressing, has been investigated, and the strength and uniform elongation of these alloys have been simultaneously improved by lowering the stacking fault energy .The hardness of even nanocrystalline copper with grain size as small as 10 nm still follows the Hall–Petch relation .A variety of methods have been used to make high-strength copper alloys.Maki et al. attempted to create a higher-strength Cu–Mg alloy through a solid-solution hardening effect, in which supersaturation with Mg increases the strength compared with that of a representative solid-solution Cu–Sn alloy .A high tensile strength of 600 MPa was reported by Sarma et al. , who produced a Cu–Al alloy with ultrafine-grained microstructure and very fine annealing twins by cryorolling and annealing at 523 K for 15 min.The higher strength of this Cu–Al alloy was interpreted in terms of the enhanced solid-solution strengthening effect of Al, which is about 1.7 times higher than the corresponding effect in Cu–Zn alloys .In recent years, Cu–Zn30–Al0.8 alloys exhibiting nanostructure have been fabricated by cryomilling of brass powders and subsequent spark plasma sintering .Such alloys have a high compressive yield strength of 950 MPa, which is much higher than the values of 200–400 MPa found in commercially available alloys.This increase in mechanical strength has been attributed to precipitation hardening and grain boundary strengthening .The effect of grain size on yield stress was examined in polycrystalline copper and Cu–Al alloys at 77 and 293 K, and the yield stress was found to satisfy the Hall–Petch relation in both materials .The influence of hydrogen on the mechanical properties of aluminum bronze was investigated, and it was found that neither tensile nor fatigue properties were affected .After low-temperature thermal treatment, strained Cu–Al alloys exhibited high mechanical strength, which is caused by increases both in the degree of order and in the electron-to-atom ratio .The effects of microstructural characteristics on the mechanical strength of Cu–Ni26–Zn17 alloy were investigated, and it was found that solid-solution strengthening of the alloy was affected by the interaction of Ni and Zn atoms with screw dislocations and by the effective interaction caused by the modulus mismatch .In order to understand the material properties of copper alloys, it is important to investigate their microstructural characteristics, including texture.The textures of copper alloys after rolling and recrystallization were analyzed by electron backscatter 
diffraction analysis. The evaluation of grain boundaries in copper bicrystals during one-pass ECAP was systematically investigated by several methods, including EBSD. The above literature survey shows that there are various approaches that can be adopted to improve the mechanical properties of copper alloys, including grain refinement, solid solutions, and high dislocation density. In many practical applications, it is desirable to reduce the weight of components and structures made from such alloys by enhancing their mechanical properties. Thus, in the present work, an attempt is made to create copper alloys with favorable tensile properties via microstructural modification using forging and casting processes under various conditions. To analyze the mechanical strength and ductility of these alloys, their microstructural characteristics are investigated by EBSD. Two commercial copper alloys, namely, an aluminum bronze and a brass, were studied, as well as a newly developed aluminum bronze. It should be pointed out that CADZ was developed on the basis of a Cu–Al10.5 alloy in Dozen-Kogyo Co. Ltd. The material characteristics of CADZ have been described in detail elsewhere. The test samples of the alloys were produced by casting and forging. In the casting process, two different cooling rates, and thus solidification speeds, were adopted. At the low cooling rate, the melts were solidified slowly in a furnace. In this case, the solidification process was carried out under an argon gas atmosphere to prevent oxidation. At the high cooling rate, the melts were solidified rapidly in a copper mold. The solidification speeds for both the rapid and slow cooling processes were measured directly using a thermocouple. In the rolling (forging) process, the alloys were deformed at different rolling rates, using a 10-ton twin-rolling machine with high-strength rollers made of hot-rolled steel. Samples of 10 mm thickness were severely deformed by rolling at different temperatures: 293 K, 493 K, and 1073 K. Tensile tests were conducted at room temperature using a hydraulic servo-controlled testing machine with 50 kN capacity. Rectangular dumbbell-shaped specimens were employed with dimensions 3 mm × 20 mm × 2 mm. The loading speed was set at 1 mm/min until final failure. The tensile properties were evaluated via tensile stress versus tensile strain curves, which were monitored by a data acquisition system in conjunction with a computer through a standard load cell and strain gauge. Hardness measurements were made using a micro-Vickers tester at 2.94 N for 15 s. In this test, a diamond indenter was loaded manually at about
With the aim of obtaining copper alloys with favorable mechanical properties (high strength and high ductility) for various engineering applications, the microstructural characteristics of two conventional copper alloys — an aluminum bronze (AlBC; Cu–Al9.3–Fe3.8–Ni2–Mn0.8) and a brass (HB: Cu–Al4–Zn25–Fe3–Mn3.8) — and a recently developed aluminum bronze (CADZ: Cu–Al10.5–Fe3.1–Ni3.5–Mn1.1–Sn3.7), were controlled by subjecting the alloys to two different processes (rolling and casting) under various conditions.Microstructural characteristics, as examined by electron backscatter diffraction analysis, were found to differ among the alloys.
shown in Fig. 5, the hardness value increases with increasing rolling rate and decreasing rolling temperature. These trends are presumably due to the differences in dislocation density, deformation twinning, and internal stress arising during the rolling process, as indicated by the distributions of MO angles seen in Fig. 3. In particular, a high hardness is obtained for the cold-rolling process, owing to dislocation tangling, despite the low rolling rate. On the other hand, the low hardness of the samples made by hot rolling is a consequence of their recrystallization and grain growth, as described previously. It should also be pointed out that the deformation characteristics of AlBC and HB can vary depending on the SFE, as mentioned above. In general, it appears that deformation twinning occurs for the alloys with lower SFE, namely, the AlBC samples. This deformation occurs when dislocation activity is dominant during the rolling process, i.e., when work hardening occurs. Fig. 6 shows representative tensile stress versus tensile strain curves for the three alloys made by rolling and by casting, while Fig. 7 summarizes their tensile properties in terms of ultimate tensile strength versus fracture strain. It should be pointed out that more than three specimens were employed here to obtain the tensile properties. From the stress–strain curves, it can be seen that high ductility is obtained for the cast samples, with the fracture strain for AlBC being higher than that for HB and CADZ. The reason for this is the presence of deformation twinning in AlBC, as mentioned above. Huang et al. reported that the deformation twins in coarse-grained Cu occurred mainly in shear bands and at their intersections, as a result of the very high local stress caused by severe plastic deformation. On the other hand, a high tensile strength is obtained overall for the rolled samples compared with the cast ones. In particular, higher tensile strengths σUTS are obtained overall for AlBC and HB made at a high rolling rate and a rapid cooling rate. The highest σUTS values are obtained for WF-AlBC, WF-HB, and RC-CADZ. On the other hand, low σUTS values are found for HF-HB and HF-AlBC, even when high rolling rates were applied. The data plots of tensile properties are relatively scattered for CADZ, which may be due to the low sample quality. Fig. 8 shows an SEM image of the HF-CADZ sample after rolling but before the tensile test. As can be seen, several microcracks have been generated along the grain boundaries, as indicated by the dashed lines. Such microcracks could lead to a deterioration in mechanical properties. It should be noted that no clear microcracks were detected in the other rolled alloys, because of their high ductility. For the cast samples in Fig.
7, higher tensile strengths are obtained for the alloys made at a high solidification rate.For cast CADZ, the highest σUTS value is obtained for RC-CADZ, and is higher than the value for the corresponding rolled alloy.This may be a consequence of the fine-grained structure as well as the high sample quality.The tensile strength of the cast samples decreases with decreasing solidification rate.Unlike the tensile strength of CADZ, high tensile strengths of both HB and AlBC result from cold and warm rolling at a high rolling rate.In addition, the rolled AlBC and HB alloys show a raised ductility εf of more than 15%, although this strain value is lower than those for the RC-AlBC and RC-HB samples.From this result, it can be considered that the cast and rolled samples are overall located on the right- and left-hand sides, respectively.On the other hand, no clear trend in tensile properties is seen for CADZ.This may be due to its low deformability and the microcracks generated by the rolling process, as mentioned above.The mechanical properties of copper alloys made by different processes have been investigated.The results can be summarized as follows:The mechanical properties of the alloys depend on the production process: rolling or casting.For the CADZ alloy, high mechanical strength was obtained for the rapidly cooled cast sample, although low ductility was found.High ductility was obtained for cast AlBC and HB alloys.High tensile strength with high ductility was obtained by warm rolling at a high rolling rate, especially for HB and AlBC.The high hardness of the CADZ alloy was attributed to severe lattice strains almost throughout the material.Vickers hardness was clearly related to grain size for all three alloys, with larger grains leading to lower hardness, i.e., the Hall–Petch relationship.The CADZ alloy could not be subjected to intense rolling owing to its brittleness, arising from its complicated microstructure.A large number of microcracks were created in rolled CADZ, resulting in reduced tensile strength.On the other hand, intense rolling was possible for the HB and AlBC alloys, allowing samples to be produced with high strength and high ductility.
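The grain-size dependence of hardness noted in the conclusions above is the Hall–Petch relation. Written out in its standard form (a sketch only; the friction stress and coefficient are left symbolic because no fitted values for these alloys are given here), it reads:

\sigma_y = \sigma_0 + k_y \, d^{-1/2}

where \sigma_y is the yield stress (hardness scales analogously), \sigma_0 is the lattice friction stress, k_y is the Hall–Petch coefficient of the alloy, and d is the mean grain diameter; halving the grain size therefore raises the strengthening term by a factor of \sqrt{2}.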
For the rolling process, the rolling rate and temperature were varied, whereas for the casting process, the solidification rate was varied.Complicated microstructures formed in CADZ led to high hardness and high tensile strength (σUTS), but low ductility (εf).For CADZ, casting at a high solidification rate allowed an increase in ductility to be obtained as a result of fine-grained structure and low internal stress.In contrast, high ductility (with a fracture strain of more than 30%) was found for both cast AlBC and cast HB; moreover, both of these alloys possessed high tensile strength when produced by warm rolling at 473 K. For CADZ, on the other hand, no clear effect of rolling on tensile strength could be found, owing to the many microcracks caused by its brittleness.The results of this study indicate that copper alloys with excellent mechanical properties can be produced.This is especially the case for the conventional alloys, with a high tensile strength σUTS = 900 MPa and a high fracture strain εf = 10% being obtained for warm-rolled brass.
Group II, it might be appropriate to ensure that strains from all three lineages are included. Similar issues also apply to Group I, and in particular the selection of suitable non-toxigenic strains for thermal sterilisation tests. A major future requirement is to increase understanding of the genomic diversity of C. botulinum Group I and Group II, the survival/proliferation of these bacteria in food, and the relationship between the two. This will provide new information on pathogen biology and transmission, and inform studies on pathogen evolution. Transcriptomic, proteomic and systems biology approaches will also use genomic data, and the findings will be equally important in future risk assessments, to extend understanding about mechanisms and control of phenotype, and may lead to the identification of novel intervention strategies.
The deadly botulinum neurotoxin formed by Clostridium botulinum is the causative agent of foodborne botulism.The increasing availability of C. botulinum genome sequences is starting to allow the genomic diversity of C. botulinum Groups I and II and their neurotoxins to be characterised.This information will impact on microbiological food safety through improved surveillance and tracing/tracking during outbreaks, and a better characterisation of C. botulinum Groups I and II, including the risk presented, and new insights into their biology, food chain transmission, and evolution.
In tourism and travel-related industries, most of the research on Revenue Management demand forecasting and prediction problems employs data from the aviation industry, in the format known as the Passenger Name Record. This is a format developed by the aviation industry. However, the remaining tourism and travel industries like hospitality, cruising, theme parks, etc., have different requirements and particularities that cannot be fully explored without industry-specific data. Hence, two hotel datasets with demand data are shared to help in overcoming this limitation. The datasets now made available were collected aiming at the development of prediction models to classify a hotel booking's likelihood of being canceled. Nevertheless, due to the characteristics of the variables included in these datasets, their use goes beyond this cancellation prediction problem. One of the most important properties in data for prediction models is not to promote leakage of future information. In order to prevent this from happening, the timestamp of the target variable must occur after the input variables' timestamp. Thus, instead of directly extracting variables from the bookings database table, when available, the variables' values were extracted from the bookings change log, with a timestamp relative to the day prior to the arrival date. Not all variables in these datasets come from the bookings or change log database tables. Some come from other tables, and some are engineered from different variables from different tables. A diagram presenting the PMS database tables from where variables were extracted is presented in Fig. 1. A detailed description of each variable is offered in the following section. Data were obtained directly from the hotels' PMS database servers by executing a T-SQL query in SQL Server Management Studio, the integrated environment tool for managing Microsoft SQL databases. This query first collected the value or ID of each variable in the BO table. The BL table was then checked for any alteration with respect to the day prior to the arrival. If an alteration was found, the value used was the one present in the BL table. For all the variables holding values in related tables, their related values were retrieved. A detailed description of the extracted variables, their origin, and the engineering procedures employed in their creation is shown in Table 1. The PMS assured no missing data exists in its database tables. However, in some categorical variables like Agent or Company, “NULL” is presented as one of the categories. This should not be considered a missing value, but rather as “not applicable”. For example, if a booking “Agent” is defined as “NULL” it means that the booking did not come from a travel agent. Summary statistics for both hotels' datasets are presented in Tables 2–7. These statistics were obtained using the ‘skimr’ R package. A word of caution is due for those not so familiar with hotel operations. In the hotel industry it is quite common for customers to change their booking's attributes, like the number of persons, staying duration, or room type preferences, either at the time of their check-in or during their stay. It is also common for hotels not to know the correct nationality of the customer until the moment of check-in. Therefore, even though the capture of data considered a timespan prior to the arrival date, it is understandable that the distributions of some variables differ between non-canceled and canceled bookings. Consequently, the use of these datasets may require this difference in
distribution to be taken into account. This difference can be seen in the table plots of Fig. 2 and Fig. 3. Table plots are a powerful visualization method and were produced with the tabplot R package, which allows for the exploration and analysis of large multivariate datasets. In table plots, each column represents a variable and each row a bin with a pre-defined number of observations. In these two figures, each bin contains 100 observations. The bars in each variable show the mean value for numeric variables or the frequency of each level for categorical variables. Analyzing these figures, it is possible to verify that, for both of the hotels, the distributions of variables like Adults, Children, StaysInWeekendNights, StaysInWeekNights, Meal, Country and AssignedRoomType are clearly different between non-canceled and canceled bookings.
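As a minimal sketch of the kind of comparison visualized in those table plots, the following Python lines contrast variable distributions between canceled and non-canceled bookings. The file name and the 'IsCanceled' column label are assumptions made for illustration; the variable names quoted above are used as-is, but the published data dictionary should be checked before reuse.

import pandas as pd

# Hypothetical file name; the article distributes two datasets (H1: resort hotel, H2: city hotel).
h1 = pd.read_csv("H1.csv")

# 'IsCanceled' is assumed here to be the binary target column (1 = canceled, 0 = not canceled).
grouped = h1.groupby("IsCanceled")

# Compare means of the numeric variables named in the text across the two outcomes.
print(grouped[["Adults", "Children", "StaysInWeekendNights", "StaysInWeekNights"]].mean())

# Compare the relative frequency of a categorical variable such as Meal across outcomes.
print(pd.crosstab(h1["Meal"], h1["IsCanceled"], normalize="columns"))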
This data article describes two datasets with hotel demand data. One of the hotels (H1) is a resort hotel and the other is a city hotel (H2). Both datasets share the same structure, with 31 variables describing the 40,060 observations of H1 and 79,330 observations of H2. Each observation represents a hotel booking. Both datasets comprise bookings due to arrive between the 1st of July 2015 and the 31st of August 2017, including bookings that effectively arrived and bookings that were canceled. Since these are real hotel data, all data elements pertaining to hotel or customer identification were deleted. Due to the scarcity of real business data for scientific and educational purposes, these datasets can have an important role for research and education in revenue management, machine learning, or data mining, as well as in other fields.
and scenarios of development futures. Solar energy facility developers use PVSyst or the System Advisor Model with long-term NSRDB data to estimate power output and assess specific cost and feasibility. A brief summary and description of the abovementioned models is provided in Table 1. The NSRDB has also been used in bioenergy to evaluate algal biomass productivity potential in a variety of climatic zones. In addition to the energy-related applications, the NSRDB has been employed in many other research areas. For example, the American Society of Heating, Refrigerating and Air-Conditioning Engineers uses the NSRDB for climate research. The NSRDB has also been used by the American Cancer Society to conduct cancer research because solar exposure is the primary vitamin D source that is associated with survival in multiple cancers. The residence-based ultraviolet radiation data from the NSRDB are used to examine their relationship to cancer outcomes and help understand the geographic disparities in cancer prognosis. The NSRDB is a widely used public solar resource dataset that has been developed and updated during more than 20 years to reflect advances in solar radiation measurement and modeling. The most recent version of the NSRDB uses 30-min satellite products at a 4 × 4 km resolution that cover the period 1998–2016. The NREL-developed PSM was the underlying model for developing this recent update, which used this two-step physical model and took advantage of the progressive computing capabilities and high-quality meteorological datasets from NOAA's GOES; NIC's IMS; and NASA's MODIS and MERRA-2 products. The percentage biases in the latest NSRDB are approximately 5% for GHI and approximately 10% for DNI when compared to the long-term solar radiation observed by the ARM, NREL, and SURFRAD stations across the United States. Future updates of the NSRDB are expected annually. Advanced information in the planned dataset will involve new satellite retrievals and improved AOD data. However, future advancements in the PSM—e.g., identifying low clouds and fog in coastal areas, improving the discrimination of clouds from snow, providing specular reflection on bright surfaces, and reducing uncertainties of parallax especially under high-resolution conditions—are desired to further increase the accuracy of the NSRDB. Further, the Lambert–Bouguer law is almost exclusively utilized by physics-based radiative transfer models, including FARMS, which assume that DNI consists of an infinitely narrow beam. This assumption is interpreted differently in surface-based observations by pyrheliometers, where direct solar radiation is defined as the “radiation received from a small solid angle centered on the sun's disc”. To reduce this disagreement in principle, we employed an empirical model, DISC, to decompose DNI from the GHI in cloudy situations. Further efforts are underway in developing a new DNI model to bridge the gap between model simulation and surface observation. Additionally, the launch of GOES-16 is also expected to provide improved cloud products; however, this requires better capabilities to process larger volumes of data. Finally, while the PSM has been applied to the GOES satellites, the methods and models are equally applicable to any other geostationary satellites. Therefore, future work will involve developing global capabilities in collaboration with various national and international partners.
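The validation figures quoted above (roughly 5% for GHI and 10% for DNI) are percentage biases against surface observations. The snippet below is a minimal sketch of such a comparison; the mean-bias formulation and the sample values are assumptions for illustration, since the exact bias definition used for the NSRDB validation is not given here.

import numpy as np

# Hypothetical co-located time series (W/m2): satellite-modeled versus ground-observed GHI.
ghi_model = np.array([410.0, 505.0, 620.0, 300.0, 710.0])
ghi_obs = np.array([400.0, 490.0, 600.0, 310.0, 700.0])

# Mean percentage bias: mean model-minus-observation difference relative to the mean observation.
pct_bias = 100.0 * np.mean(ghi_model - ghi_obs) / np.mean(ghi_obs)
print(f"GHI percentage bias: {pct_bias:.1f}%")

# The same calculation applied to DNI pairs would give the corresponding DNI bias.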
The National Solar Radiation Data Base (NSRDB), consisting of solar radiation and meteorological data over the United States and regions of the surrounding countries, is a publicly open dataset that has been created and disseminated during the last 23 years.This paper briefly reviews the complete package of surface observations, models, and satellite data used for the latest version of the NSRDB as well as improvements in the measurement and modeling technologies deployed in the NSRDB over the years.The current NSRDB provides solar irradiance at a 4-km horizontal resolution for each 30-min interval from 1998 to 2016 computed by the National Renewable Energy Laboratory's (NREL's) Physical Solar Model (PSM) and products from the National Oceanic and Atmospheric Administration's (NOAA's) Geostationary Operational Environmental Satellite (GOES), the National Ice Center's (NIC's) Interactive Multisensor Snow and Ice Mapping System (IMS), and the National Aeronautics and Space Administration's (NASA's) Moderate Resolution Imaging Spectroradiometer (MODIS) and Modern Era Retrospective analysis for Research and Applications, version 2 (MERRA-2).The NSRDB irradiance data have been validated and shown to agree with surface observations with mean percentage biases within 5% and 10% for global horizontal irradiance (GHI) and direct normal irradiance (DNI), respectively.During the last 23 years, the NSRDB has been widely used by an ever-growing group of researchers and industry both directly and through tools such as NREL's System Advisor Model.
Cardiovascular diseases as life-threatening diseases are the most common cause of death in Western European countries .Myocarditis and non-ischemic dilated cardiomyopathy are acute or chronic disorders of heart muscle which arises mainly from myocardial inflammation or infections by cardiotropic viruses .More than 12 million patients in Europe and 15 million patients in the United States are suffering from heart failure including four million with DCM, according to an estimation of the European Society of Cardiology .The traditional clinical diagnosis based on individual patient’s clinical symptoms, medical and family history, laboratory and imaging evaluations should be expanded by endomyocardial biopsy diagnostics to confirm myocardial disease for following treatment decisions .Improvements in human genetic studies and the continuously-expanding field of biomarker discovery revealed the potential of physiological biomarkers such as microRNAs or gene expression profiles for diagnosis of complex diseases such as cardiomyopathies and for applications in personalized medicine .miRNA profiling can serve as a new exciting tool in modern diagnostics, which is comparable to gene expression analysis but with less amount of analytes.In addition, approximately 2500 human mature miRNAs have been discovered so far, which seems to be relatively small in number compared to the enormous number of genes discovered .miRNAs are 20–22 nucleotides in length and highly-conserved non-coding RNAs.They have been demonstrated to play multiple roles in negative or positive regulation of gene expression including transcript degradation, translational suppression, or transcriptional and translational activation.miRNAs are present in a wide range of tissues .In body fluids such as serum, plasma or spinal fluid, miRNAs are protected from endogenous RNase activity by inclusion in exosomes or protein complexes .Due to their high biostability, circulating miRNAs can be used as reliable blood-based markers to identify cardiovascular or other human disorders .Up to now, about 800 expressed miRNAs have been experimentally detected in EMBs .As shown for DCM, hypertrophic and inflammatory cardiomyopathy, the expression of miRNAs is characteristically altered in heart tissue .Differential miRNA patterns allow the identification of different heart disorders or disease situations .The role of these human miRNAs in pathogenesis highlights their value as potential molecular biomarkers for complex diseases such as cardiomyopathies .The discriminating power of single miRNAs for diagnosis of complex diseases can be increased by its integration in a larger panel presenting a specific miRNA signature.The application of myocardial miRNA profiling allows the differentiation of distinct phases of viral infections and the prediction of the clinical course of virally-induced disease at the time point of primary diagnostic biopsy .In the same individual, miRNA signatures in tissue, serum, peripheral blood mononuclear cells, or other body fluids show specific features for the current condition.Therefore these disease-specific biomarkers are of increasing interest for personalized medicine .Non-expressed miRNAs in their entirety were ignored and corresponding data were rarely presented .Due to rather negative regulation of miRNAs in general, absent miRNAs would indicate genes which are not altered in terms of expression and therefore normally expressed in specific compartments.Occurrence of previously absent miRNAs could be an easy predictor 
for changes in functional activity in analyzed biological sample or in the disease situation under examination.Analyses of expression data by bioinformatic software are currently based on two strategies: presentation of published data of deregulated miRNAs and their association with affected pathways or diseases and prediction of involved miRNAs extrapolated from data of differentially expressed genes in corresponding disease situation as presented in the Kyoto Encyclopedia of Genes and Genomes schemata.Comprehensive expression data of indicated pathways or associated disorders are limited by availability of larger patient cohorts and comparability of analytical methods.In this article, we focused on the non-detectable miRNAs measured on different platforms in myocardial tissue, blood cells, and serum in a large cohort of cardiac patients suffering from different forms of inflammatory or virally-induced heart muscle diseases .The underlying disease was diagnosed by routine EMB .The bioinformatic analyses of generated data using two current freely-available prediction tools revealed no evidence for their involvement in heart-related pathways.Experimental findings for cardiac patients were confirmed by comparisons of absent miRNAs in large cohorts of patients with different diseases measured on the same analytical platforms.We performed miRNA expression studies with three analytical platforms, the Geniom Biochips and two TaqMan PCR-based high-throughput systems including low density array and OpenArray.Based on the analysis of deregulated miRNAs, we presented lists and pathways of non-detectable miRNAs in different tissues of primarily cardiac patients.All data were generated in the same laboratory to facilitate comparative data analysis.miRNA preparations were obtained for patients with inflammatory or virally-induced cardiomyopathies from EMBs, PBMCs, or serum including corresponding controls.miRNAs in EMBs and serum were measured using two different platforms, which cover different sets of miRNAs.Therefore, an additive list for EMBs and serum of absent miRNAs of each system was generated and used for all following calculations.A list of absent miRNAs was generated to indicate common or unique tissues in which miRNAs are not detectable.Furthermore, a Venn diagram analysis was performed to reveal overlapping absent miRNAs in EMBs, serum, and PBMCs and miRNAs exclusively absent in particular tissues.As shown in Figure 1, we detected 1107 miRNAs in total absent in 1–3 sample groups.179 miRNAs were found to be absent in all three sample sources from cardiac patients.The miRNA Enrichment Analysis and Annotation Tool analysis showed that these miRNAs are involved in 685 pathways, implying possibly unaltered genes in these pathways.7 out of 685 pathways were indicated to be heart-related.In addition, there are 2 pathways described for viral myocarditis and
MicroRNAs (miRNAs) can be found in a wide range of tissues and body fluids, and their specific signatures can be used to identify diseases or predict clinical courses. The miRNA profiles in biological samples (tissue, serum, peripheral blood mononuclear cells or other body fluids) differ significantly even in the same patient and therefore have their own specificity for the condition presented. Since miRNAs predominantly act as negative regulators of gene expression, absent miRNAs could indicate genes with unaltered expression that therefore are normally expressed in specific compartments or under specific disease situations. miRNA expression data were generated by microarray or TaqMan PCR-based platforms.
first time a panel of absent miRNAs in serum, PBMCs, EMBs, spinal fluid, urine, and ocular fluid of diseased patients including corresponding healthy controls.Implementing this spectrum in comparison to miRNA studies in different disorders, disease-specific miRNAs can be identified expeditiously.Further studies have to confirm especially which of these absent serum miRNAs in cardiomyopathies are not versatile.Circulating miRNAs will be the novel diagnostic biomarkers, also for heart muscle diseases .Some of these serum miRNAs are present in other disorders not corresponding to cardiomyopathies, which could be of scientific interest for understanding of specific pathomechanisms or finally as therapeutic targets for miRNA modulation to deal with discrete disease situations.There are some limitations in the current study.Three analytical platforms were used in generating data for overlapping sample sets to infer miRNAs absent alone or in different combinations.EMBs and PBMCs were measured with microarray-based technology for former sets of available miRNAs, whereas Taqman PCR-based analysis were performed later and used to measure miRNAs in serum, EMBs , spinal fluid, urine, and ocular fluid.In addition, only two freely-available software tools were used for pathway prediction.The bioinformatic and translational perspective of presented approach is manifold.This first preliminary study on non-detectable miRNAs should sensitize scientific community to present not only data of deregulated candidates, but also data of completely absent miRNAs as a valuable dataset for improvement of commonly used software tools.Non-detectable miRNAs should be excluded from further prediction of corresponding pathways.Otherwise the collection of these data for all tissues, cells, or body fluids would be an important reservoir for future research or also pharmaceutical studies, and thus should be propagated by bioinformatics.The unexpected finding of previously-described non-expressed miRNAs in an experiment or clinical study will facilitate the identification of newly involved pathways or functional dysregulations in an observed setup.EMB, PBMC, and serum samples were obtained from healthy controls and patients suffering from inflammatory or virally induced myocarditis as shown in Table 1 .The study was performed within the Transregional Collaborative Research Centre .The study protocol was approved by the local ethics committees of the participating clinical centers, as well as by the committees of the respective federal states.An informed written consent was obtained from each participant.Spinal fluid samples were received from healthy controls and patients suffering from Alzheimer’s disease, with the ethical statement described previously .Urine samples were acquired from healthy controls and patients harboring bladder cancer, with the ethical statement described previously .In addition, we analyzed pooled ocular fluid from random patients.miRNAs were obtained from patients, using mirVana™ miRNA Isolation Kit resp.mirVana™ PARIS™ RNA and Native Protein Purification Kit for low content samples such as serum, urine, ocular fluid, and spinal fluid according to manufacturer’s instructions.All presented expression studies were performed in the same laboratory.Total RNA including miRNA fraction was reversely transcribed to cDNA using Megaplex stem-loop RT primer for Human Pool A and B in combination with the TaqMan MicroRNA Reverse Transcription Kit.This allowed simultaneous cDNA synthesis of 377 unique miRNAs for 
Pool A and B each.Except for biopsy materials, a pre-amplification protocol was performed for all low content samples to increase the detection rate.The entire procedure for quantification using TaqMan® OpenArray® and TaqMan® LDA is described elsewhere.miRNAs which were not detectable or above cycle threshold 28 resp.32 were considered to be absent in the sample.The expression analysis of all 906 miRNA and miRNA∗ sequences as annotated in Sanger miRBase version 14.0 was performed with the Geniom Real Time Analyzer and the Geniom biochip MPEA hsapiens V14.Sample labeling with biotin was carried out by using the ULS labeling Kit from Kreatech.All essential steps such as hybridization, washing, as well as signal amplification and measurement, were done automatically by Geniom Real Time Analyzer.The resulting detection images were evaluated using the Geniom Wizard Software for background correction and normalization of generated data.miRNA expression analyses were carried out using the normalized and background-subtracted intensity values.miRNAs not detectable in all samples of corresponding biological material were regarded as absent for this material and disease.All following bioinformatics analyses by pathway prediction tools were based on the list of these candidates.Venn diagrams of intersecting sets of miRNAs between different tissues and platforms are generated using Venny v2.0.miEAA and DIANA miRPath v.2.0 were used for miRNA target prediction and pathway analysis.All given lists of miRNAs are translated and annotated according to miRBase v14 nomenclature.CS conducted the bioinformatic algorithms and miRNA target identification, and drafted the manuscript.CS and MR carried out miRNA expression studies.DL conceived the study, and participated in study design and coordination.UK, FE, and HPS had primary responsibility for patient characterization and management.All authors discussed the results, read, and approved the final manuscript.The authors declare no competing financial interests or relationships relevant to the content of this paper to disclose.
Complex profiles of deregulated miRNAs are of high interest, whereas the importance of non-expressed miRNAs was ignored.For the first time, non-detectable miRNAs in different tissues and body fluids from patients with different diseases (cardiomyopathies, Alzheimer's disease, bladder cancer, and ocular cancer) were analyzed and compared in this study.Lists of absent miRNAs of primarily cardiac patients (myocardium, blood cells, and serum) were clustered and analyzed for potentially involved pathways using two prediction platforms, i.e., miRNA enrichment analysis and annotation tool (miEAA) and DIANA miRPath.
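The tissue-wise comparison of absent miRNAs described above (the Venn-style analysis that yields, for example, 179 miRNAs absent in all three cardiac sample sources) reduces to simple set operations. The sketch below illustrates this with invented placeholder identifiers; it is not the study's actual candidate list or software.

# Hypothetical sets of miRNAs classified as absent in each sample type (placeholder names).
absent_emb = {"miR-A", "miR-B", "miR-C"}
absent_serum = {"miR-B", "miR-C", "miR-D"}
absent_pbmc = {"miR-C", "miR-D", "miR-E"}

# miRNAs absent in all three sample sources (the study reports 179 such candidates).
absent_in_all = absent_emb & absent_serum & absent_pbmc

# miRNAs absent exclusively in one sample type, e.g. only in endomyocardial biopsies.
absent_only_emb = absent_emb - (absent_serum | absent_pbmc)

print(sorted(absent_in_all), sorted(absent_only_emb))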
emerging post-Brexit landscape following the EU referendum in June 2016.Early signs are that the current Prime Minister Theresa May will continue to support and expand NCS.However, she is also grappling with a ‘new’ mapping of the Union along fragmented national mandates for ‘Leave’ or ‘Remain’, prompting the possibility of a second referendum on independence in Scotland.On the one hand, the English-centric geographies of NCS discussed here could be seen as adding to these tensions.We have already shown how, problematically, the model of NCS represents a wider retreat from the global scale in its framework.On the other hand, we have seen increased calls in the weeks and months following the EU referendum for the importance of political and civic education.Could, therefore, a re-fashioned or re-imagined NCS be needed more than ever?,Are there potential opportunities to re-align the scales of youth citizenship it currently hosts?, "It is too early to tell the full impact of the Brexit vote for young people living in the United Kingdom and young people's politics. "However, the place of National Citizen Service is firmly cemented in the Conservative Party's future plans and ambitions, and as such, is part of these wider narratives and dilemmas.This paper has engaged with, and pushed forward, key debates on the scaling of youth citizenship, making two key contributions to disciplinary work on being/becoming citizens, being/becoming political, and being/becoming adults.First, the paper has offered the concept of ‘brands’ of youth citizenship to understand how the state promotes youth citizenship formations.Using the example of NCS and its institutional geographies, the paper demonstrated how the state seeks to create, shape and govern citizens of the future through a scalar political imagination.This much-needed contribution to work on the geographies of youth citizenship emphasised the multiple actors in the design and delivery of a youth citizenship model and how scale is crucial to that agenda.Citizenship and adulthood are often used as powerful ideological tropes to mobilise wider policy objectives, and we have shown in our study how the state has prioritised certain scales as part of its vision, namely the primacy of the national and local, with a retreat from the global.However, the regional infrastructure of the scheme is creating differences in the activities NCS ‘hosts’, and the extent to which young people are encouraged or enabled to pursue P/political activities based on their postcode.Our study has exposed the overall primacy of social action and the legitimacy given to certain types of community engagement and ‘good’ participation that reveal how this ‘branding’ of citizenship is being used by the neoliberal state to encourage a particular type of citizen-subject.We have demonstrated how ideas about being a ‘good’ citizen and a good ‘young person’ merge and mix, and would suggest this is set to continue in England with the recent push for character education within the Department for Education.Overall, we have gone beyond using labels for different types of hyphenated forms of citizenship formation to instead propose a focus on the branding of youth citizenship – a vision and set of scalar institutional strategies that transmit a model."In this case, one firmly cast in terms of social action, aligned with the state's broader political project.Second, this paper has contributed an important focus to the often neglected processes of state-formation, governance and wider ideologies 
to such youth citizenship projects, with timely insights into challenging and competing visions of citizenship, belonging and national identity. In the context of NCS, the geographies of devolution have actively shaped NCS provision and uptake across the UK, perhaps mirroring wider differences in youth policy across devolved administrations and their responses to Westminster's politics of voluntarism and the ‘Big Society’. There should be greater sensitivity in geographical work to these themes, and future research could usefully map the different youth citizenship discourses in England, Scotland, Wales and Northern Ireland. We also highlighted how the rhetoric of ‘Britishness’ has been used as a framing device for NCS, a further element in its ‘brand’ of youth citizenship, shaped by the wider political climate. Through our discussion, we contributed a focus on the uneven geographies of learning to be a citizen and the multiple scalar fractures and fissures within such training spaces. This timely contribution to work on young people's political geographies is needed more than ever after the recent EU referendum. Indeed, questions on the branding and scaling of youth citizenship should matter for all political geographers, not just those who study the geographies of children and young people. This work was supported by the Economic and Social Research Council – ESRC.
This paper explores the politics of scale in the context of youth citizenship.We propose the concept of ‘brands of youth citizenship’ to understand recent shifts in the state promotion of citizenship formations for young people, and demonstrate how scale is crucial to that agenda.As such, we push forward debates on the scaling of citizenship more broadly through an examination of the imaginative and institutional geographies of learning to be a citizen.The paper's empirical focus is a state-funded youth programme in the UK – National Citizen Service – launched in 2011 and now reaching tens of thousands of 15–17 year olds.We demonstrate the ‘branding’ of youth citizenship, cast here in terms of social action and designed to create a particular type of citizen-subject.Original research with key architects, delivery providers and young people demonstrates two key points of interest.First, that the scales of youth citizenship embedded in NCS promote engagement at the local scale, as part of a national collective, whilst the global scale is curiously absent.Second, that discourses of youth citizenship are increasingly mobilised alongside ideas of Britishness yet fractured by the geographies of devolution.Overall, the paper explores the scalar politics and performance of youth citizenship, the tensions therein, and the wider implications of this study for both political geographers and society more broadly at a time of heated debate about youthful politics in the United Kingdom and beyond.
The dataset contains 3 folders: 1) The first folder contains all the bibliographic information for thermal comfort and building control research. The total number is 5536 articles, and the publication range is from 1970 to 2016. Table 1 summarizes general information about the publications for the two different search periods. The bibliographic information is summarized by multiple text files. 2) The co-occurrences among keywords are described. Firstly, the keywords are extracted from the title and abstract text and they are further filtered by pre-defined thesaurus words. Subsequently, the keywords are clustered based on research topics. Finally, the co-occurrences among keywords are normalized as distances among them. The files contain each keyword and its coordinate for the two periods. The figures visualizing these two periods can be found in the original research paper. Tables 2 and 3 explain the keyword analysis for historical developments and recent trends, respectively. 3) The papers are essentially classified by their research theme, and their citation relationship is tabulated in matrix form in the data. Table 4 describes the citation relationship among the three themes. Note that only 3572 papers form the citation relationship. This citation relationship is also visualized in the original research paper. For the publication collection, we selected Thomson Reuters' Web of Science bibliographic database. We used the following logical combinations of search terms to collect relevant publications: for thermal comfort research related to buildings, we used one combination of search terms, whereas the search term for building control research related to energy efficiency was different, owing to the fact that building control research can be found under several alternative terms. Using these search terms, we downloaded the publication information, i.e., title, abstract, author, citation, publication year, as a tab-delimited text file, suitable for further processing. Essentially, we split the dataset into two parts by publication dates. The first contains all the publications until 2010 and allows us to study the historical developments. The second part is for the publications from 2011–2016 in order to identify recent trends. For the selection of keywords in a scientific landscape, all the words were extracted from the title and abstract of the publication collections and they were filtered for a minimum of 30 occurrences. With the filtered words, the most relevant keywords were extracted through a VOSviewer built-in text mining function. Subsequently, we eliminated unrelated words and merged repetitive words by applying the pre-defined thesaurus files. With the list of keywords, the VOSviewer generated the co-occurrence table and clustered the keywords based on the co-occurrences. Two words are defined as co-occurring if they appear in the same document. In addition, the cluster names were manually labeled based on the observed keywords. Ultimately, the scientific landscape of thermal comfort and building control research is generated. In this figure, the size and color of the circle represent the frequency of occurrence and cluster type of the individual keyword, respectively. Lastly, the distance between the keywords is representative of their relative co-occurrence, e.g., two keywords that are close to each other co-occur more frequently, whereas a large distance between two keywords indicates that they do not co-occur. To identify the interaction between thermal comfort and building control research, we investigate citations
of the whole set of publications. Analyzing the citation information specifies the quantitative interactions between the two.
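A minimal sketch of the co-occurrence counting step described above (two keywords co-occur if they appear in the same document) is given below; the keyword lists are invented for illustration, and the thesaurus filtering, the 30-occurrence threshold, and the clustering performed in VOSviewer are not reproduced.

from collections import Counter
from itertools import combinations

# Hypothetical keyword sets extracted from the title and abstract of each publication.
docs = [
    {"thermal comfort", "pmv", "hvac"},
    {"thermal comfort", "model predictive control", "hvac"},
    {"model predictive control", "energy efficiency"},
]

# Count co-occurrences: every unordered keyword pair appearing within the same document.
cooccurrence = Counter()
for keywords in docs:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

print(cooccurrence.most_common(3))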
This dataset contains bibliographic information regarding thermal comfort and building control research. In addition, instructions for a data-driven literature survey method guide readers to reproduce their own literature survey on related bibliographic datasets. Based on specific search terms, all relevant bibliographic datasets are downloaded. We explain the keyword co-occurrences of historical developments and recent trends, and the citation network which represents the interaction between thermal comfort and building control research. Results and discussions are described in the research article entitled “Comprehensive analysis of the relationship between thermal comfort and building control research – A data-driven literature review” (Park and Nagy, 2018).
croplands. A meta-analysis was applied to synthesize information. Regarding the original compilation, we decided to discard data that raised uncertainty by adopting the following criteria: i) data with incomplete information about the investigation, ii) incomplete reporting of missing data, iii) incomplete information on essential processes, iv) small sampling size, v) mean values and standard deviations that were extremely high or low in relation to the mean and standard deviation of the whole database, vi) data that did not reflect the current condition of the local environment. The final database comprised cases from 366 peer-reviewed publications for different climatic regions. These collected data were contrasted with the results obtained with the theoretical method. The objective was to submit the method to a validation process to verify its strength. For this, a simple linear regression model was used. The statistical significance of the model and the values of the coefficients of determination and correlation, together with the behaviour of the residuals, were verified. The high and significant relationship between the results of the proposed theoretical method and the empirical data that emerged from the meta-analysis gave it additional strength when estimating soil carbon sequestration. The relevance of this method is that it allows obtaining different carbon balance results when incorporating carbon sequestration in grazing lands into calculations. Examples of estimation and results obtained with this methodology compared with IPCC Tier 1 results can be seen in Viglizzo et al. We find no sources of conflict in this work.
Based on international guidelines, the elaboration of national carbon (C) budgets in many countries has tended to set aside the capacity of grazing lands to sequester C as soil organic carbon (SOC). A widely applied simple method assumes a steady state for SOC stocks in grasslands and a long-term equilibrium between annual C gains and losses. This article presents a theoretical method based on the annual conversion of belowground biomass into SOC to include the capacity of grazing-land soils to sequester C in greenhouse gas (GHG) calculations. Average figures from both methods can be combined with land-use/land-cover data to reassess the net C sequestration of the rural sector of a country. The results of this method were validated with empirical values based on peer-reviewed literature that provided annual data on SOC sequestration. This methodology offers important differences over pre-existing GHG landscape approach calculation methods: (i) it improves the estimation of the capacity of grazing-land soils to sequester C, assuming these lands are not in a steady state, and (ii) it counts C gains by considering that grazing lands are managed at low livestock densities.
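A minimal sketch of the validation step (regressing the empirical SOC sequestration values assembled in the meta-analysis on the theoretical estimates) might look as follows; the paired values are invented placeholders and scipy's linregress merely stands in for whatever statistical software was actually used.

from scipy.stats import linregress

# Hypothetical paired annual SOC sequestration values (e.g., Mg C per ha per yr).
theoretical = [0.10, 0.25, 0.40, 0.55, 0.70]   # estimates from the belowground-biomass method
empirical = [0.12, 0.22, 0.43, 0.50, 0.74]     # values compiled from the peer-reviewed literature

# Simple linear regression; r**2 and the p-value summarize the strength and significance of the fit.
fit = linregress(theoretical, empirical)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r2={fit.rvalue**2:.2f}, p={fit.pvalue:.3g}")

# Residuals can then be inspected for systematic departure from the fitted line.
residuals = [e - (fit.slope * t + fit.intercept) for t, e in zip(theoretical, empirical)]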
although it does include items such as “I am often unhappy, down-hearted or tearful”. Depression is relatively uncommon before puberty, so adjustment for a wider range of emotional symptoms at this age could be a better way to account for differences in future depression risk, though residual confounding by childhood depression is still possible.28 Our study did not include single-parent families because we investigated the potential independence of paternal and maternal depression. Research into the role of fathers in single-parent families is scarce. Of course, residual confounding is always a possibility in observational studies—eg, we did not have detailed information about comorbid health problems in parents. Further research that acknowledges the complexity of these associations in both parents, and their implications for offspring, would be beneficial. Finally, for both maternal and paternal depressive symptoms, it is difficult to judge the potential clinical importance of the observed associations. An increase of 1 SD in paternal depressive symptoms was associated with an increase in adolescent depressive symptoms of 0·04 of an SD in GUI and 0·03 in MCS. Though small, these findings were observed after follow-up of 4 years for GUI and 7 years for MCS, and several factors could have led to an underestimation of the association, including error in the paternal depression measure and outcome. We also used brief measures of depressive symptoms before the main period of depression incidence in offspring. There is good evidence that treatment of maternal depression in clinical populations leads to meaningful improvements in offspring outcomes.29 Our evidence suggests that similar improvements in offspring outcomes would be expected if paternal depression were treated, although future research to test this possibility is required. Several studies of adolescent depression report no influence of paternal depression, or that the influence of maternal depression is stronger.12,30–32 However, many of these studies were small, contained few fathers, or did not examine adolescent depression as an outcome. In studies from previous decades, fathers possibly had less involvement with children than in our more contemporary samples. In the MCS cohort, the magnitude of the maternal depression association appeared stronger than that of paternal depression, but there was no statistical evidence to support a difference. In the GUI cohort, there was no evidence of any difference. Our results also suggest that a child with two parents with depression is at greater risk than a child with one parent with depression. There is evidence that the intergenerational transmission of depression occurs predominantly through environmental mechanisms, although genetic influences are also important.19 Environmental mechanisms could include social modelling of depressive thinking styles.33 There is also good evidence that mothers and fathers with depression experience difficulties in parenting and parent–child relationships, which partly account for the influence of depression on their children.34 Most of the work on mechanisms has been done with mothers, and less is known about possible mechanisms in relation to fathers.35 Our exposure variables were measured before puberty, when the prevalence of depression is low, and our outcomes in early adolescence, when incidence is only just beginning to rise. Adolescent depressive symptom scores were higher in MCS than in GUI, possibly because adolescents were, on average, 14 years old in MCS
and 13 years old in GUI. This is an important difference in age for the adolescent increase in depressive symptoms, which only begins at around the age of 13 years.1 Our findings, if they reflect a causal relationship, are therefore important for the primary prevention of depressive disorder. Current interventions for preventing adolescent depression focus largely on mothers. Depressive symptoms in parents are associated, and depression in one parent is a risk factor for depression in the other.36 When the mother is depressed, clinicians should therefore also consider the associated yet independent influence of depression in the father, especially since men are less likely to seek treatment for depression.37 This is particularly important given that children are at even higher risk when both parents have depressive symptoms. Our findings, if they reflect a causal relationship, suggest that the priority should be treatment of depression in both parents. Our results are inconsistent with the idea that mothers are responsible, or even to blame, for children's mental health, whereas paternal influences are negligible. Rather, they suggest that the mental health of both parents is important for the mental health of their children. Interventions to improve adolescent mental health should therefore target both parents, irrespective of their sex.
Although maternal depression is a risk factor for adolescent depression, evidence about the association between paternal and adolescent depression is inconclusive, and many studies have methodological limitations. Parental depressive symptoms were measured with the Centre for Epidemiological Studies Depression Scale in the GUI cohort when children were 9 years old, and the Kessler six-item psychological distress scale in the MCS cohort when children were 7 years old. Adolescent depressive symptoms were measured with the Short Mood and Feelings Questionnaire (SMFQ) at age 13 years in the GUI cohort and age 14 years in the MCS cohort. Findings: There were 6070 families in GUI and 7768 in MCS. After all adjustments, a 1 SD (three-point) increase in paternal depressive symptoms was associated with an increase of 0.24 SMFQ points (95% CI 0.03–0.45; p=0.023) in the GUI cohort and 0.18 SMFQ points (0.01–0.36; p=0.041) in the MCS cohort. This association was independent of, and not different in magnitude to, the association between maternal and adolescent depressive symptoms (Wald test p=0.435 in the GUI cohort and 0.470 in the MCS cohort). Interpretation: Our results show an association between depressive symptoms in fathers and depressive symptoms in their adolescent offspring. These findings support the involvement of fathers as well as mothers in early interventions to reduce the prevalence of adolescent depression, and highlight the importance of treating depression in both parents.
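The per-SD and raw-score coefficients reported across the two preceding passages can be reconciled by simple rescaling. Using only the figures given in the text (a back-of-the-envelope sketch, assuming both sets of numbers describe the same adjusted models):

\beta_{\mathrm{SD}} = \frac{\beta_{\mathrm{points}}}{\mathrm{SD}(\mathrm{SMFQ})}
\quad\Rightarrow\quad
\mathrm{SD}(\mathrm{SMFQ}) \approx \frac{0.24}{0.04} = 6 \ \text{points (GUI)}, \qquad \frac{0.18}{0.03} = 6 \ \text{points (MCS)}

That is, a 1 SD (three-point) increase in paternal symptoms corresponds to roughly 0.04 SD (GUI) and 0.03 SD (MCS) of the adolescent SMFQ score, consistent with an SMFQ standard deviation of about six points in both cohorts.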
in Fig. 6 are visualized in Fig. 7. All compositions were confirmed with EDX analysis. The sample with the pre-oxidation layer removed from the hydrogen-facing side showed that Fe2O3 covered the entire surface of the sample. This breakaway oxidation layer exhibited a microstructure similar to the one discussed for shorter pre-oxidation times. Underneath the roughly 20 μm thick Fe2O3 layer, an equally thick Fe3O4 layer had formed. In contrast, the sample with a pre-oxidation layer present on the hydrogen-facing side showed no signs of breakaway oxidation. Instead, a highly protective, roughly 50 nm thick Cr- and Mn-containing oxide layer was observed on the entire sample. SEM analysis confirmed the results from the visual inspection; the pre-oxidation layer on the hydrogen-facing side is more important for protection against the dual atmosphere effect than the pre-oxidation layer on the air-facing side. Alnegren et al. have discussed the different possible mechanisms for the dual atmosphere effect. The exact mechanism remains unknown and further research is needed; however, the beneficial effect of pre-oxidation in dual atmosphere has been observed. Two different possible reasons have been proposed for this beneficial effect. The first hypothesis is that the formation of a protective oxide layer on the air-facing side slows down oxidation of the alloy, as a direct air-alloy interface is prevented. The second hypothesis is that the protective oxide layer on the hydrogen-facing side decreases the ingress of hydrogen into the steel and, thus, limits the effect of the dual atmosphere on the corrosion behavior of the air-facing side. The present study clearly proves that the latter hypothesis is more probable. The pre-oxidation layer on the hydrogen-facing side impedes the onset of breakaway corrosion, whereas the pre-oxidation layer on the air-facing side does not. This is also in agreement with previous work by Kurokawa et al.
on hydrogen permeation through an oxidized FSS at 800 °C.In those studies, it was observed that hydrogen permeation through Cr2O3 is substantially slower than diffusion through ferritic stainless steel, and that after 100 h of exposure and the formation of a roughly 760 nm thick Cr2O3 scale, the hydrogen permeation level had decreased by 0.18% compared to hydrogen permeation through the bare alloy.This also suggests that a thicker oxide scale on the hydrogen-facing side enhances the corrosion resistance in dual atmosphere, as less hydrogen dissolution in the alloy occurs.The results observed for different pre-oxidation times in the present study confirm this conclusion.However, these results also strongly suggest that the beneficial effect of the pre-oxidation layer on the hydrogen-facing side might not be sufficiently effective for very long exposure times.It is, therefore, vital to protect the alloy from hydrogen dissolution by other, more effective, means, such as barrier coatings.The results indicate that fuel side coatings might be the most effective.The present study investigated the influence of the pre-oxidation of uncoated AISI 441 on the dual atmosphere effect.The pre-oxidation time clearly correlated to the onset of breakaway oxidation.This means that under dual atmosphere conditions, longer pre-oxidation times increase corrosion resistance.Consequently, shorter pre-oxidation times should allow for accelerated testing of different materials.A comparison of samples with the pre-oxidation scale removed from one side showed that the existence of an oxide scale on the hydrogen-facing side was more important to maintaining a protective oxide scale on the air-facing side.The results suggest that a pre-oxidation scale on the hydrogen-facing side acts as a barrier to hydrogen ingress into the steel.The presence of an oxide scale on the air-facing side before dual atmosphere exposure seems to be of less importance.This indicates that barrier coatings on the hydrogen-facing side might be the most efficient in mitigating a dual atmosphere effect.
Dual atmosphere conditions have been shown to be detrimental for the ferritic stainless steel interconnects used in solid oxide fuel cells (SOFC) under certain conditions.In the present work, we analyze the influence of pre-oxidation on corrosion resistance in dual atmosphere with regard to two parameters: the pre-oxidation time and the pre-oxidation location (pre-oxidation layer on the air-facing side or the hydrogen-facing side).The steel AISI 441 is investigated and pre-oxidation is achieved in air at 800 °C.Photographs, taken throughout the exposure, show that the pre-oxidation time correlates with the onset of breakaway corrosion.To analyze the influence of pre-oxidation location on corrosion behavior, the samples are pre-oxidized for 180 min, and then a pre-oxidation layer is removed from one side of the sample.Subsequent dual atmosphere exposure at 600 °C for 500 h shows that the pre-oxidation layer on the hydrogen-facing side is more important for corrosion resistance in dual atmosphere than the pre-oxidation layer on the air-facing side.
there was vanishingly smaller than those in the subarctic and transition regions. As with the MPD of the bomb-derived 137Cs, the MPD of the Fukushima-derived 134Cs in the subtropical region was likely overestimated due to the subsurface maximum. In June 2011, about seven months earlier, Buesseler et al. observed the MPD of Fukushima-derived radiocesium in the transition region to be 42 m. The MPD of 42 m and the penetration time of about 3 months suggest that the MPD of Fukushima-derived radiocesium in January 2012 should have been about 80 m. The MPD of the Fukushima-derived 134Cs observed in January 2012 is thus about three times the MPD estimated by using the data of June 2011. The deeper observed MPD implies that the Fukushima-derived 134Cs, especially that derived from the direct discharge of contaminated water from the FNPP1, was explained not by a simple one-dimensional advection/diffusion process but by strong coastal and tidal currents. By using the relationship between water-column inventories and activity concentrations in surface water of the Fukushima-derived 134Cs in winter 2012, we could obtain better coverage of the water-column inventory because we have surface 134Cs data from a larger set of locations. As a result, it is possible to obtain a better estimate of the total amount of radiocesium released from the FNPP1 into the Pacific Ocean. In the same way, by using the relationship between water-column inventories and activity concentrations in surface water of the bomb-derived 137Cs in this study, we could obtain larger coverage of the total inventory of the bomb-derived 137Cs just before the accident. Because of the short half-life of 134Cs, the Fukushima-derived 134Cs activity concentration is decreasing rapidly, and it will decay to less than the detection limit within the coming decade. Thus, in the future the Fukushima-derived radiocesium will be estimated using 137Cs, which has a half-life of 30.04 y, instead of 134Cs. The water-column inventory of the Fukushima-derived 137Cs will be obtained by subtracting the inventory of the bomb-derived 137Cs from the observed 137Cs inventory. In winter 2012, about ten months after the FNPP1 accident, the Fukushima-derived 134Cs activity concentration and water-column inventory were largest in the transition region due to the direct discharge of the contaminated water from the FNPP1. We also evaluated the bomb-derived 137Cs activity concentration, that is, the activity concentration just before the FNPP1 accident, along the 149°E meridian to be about 1.10 ± 0.04 Bq m−3 on average from the excess 137Cs activity concentration relative to the 134Cs activity concentration observed in winter 2012. The estimated bomb-derived 137Cs activity concentration agrees well with that obtained in 2005 after decay-correction using the apparent half-life of 13 ± 1 y. The bomb-derived 137Cs activity concentration, which is due mainly to nuclear weapons testing in the 1950s and 1960s, is at present concentrated in the subtropical region of the North Pacific. This implies that the Fukushima-derived 134Cs will also be transported from the transition to subtropical regions in the coming decades by way of the thermocline circulation, including the subduction of the mode waters. Mean values of the water-column inventories for the Fukushima-derived 134Cs and the bomb-derived 137Cs, both decay-corrected to the date of the FNPP1 accident, were estimated to be 1020 ± 80 and 820 ± 120 Bq m−2, respectively. The ratio of the Fukushima-derived 134Cs versus the bomb-derived 137Cs
inventories, 1.3 ± 0.2, suggests that the impact of the FNPP1 accident in the western North Pacific Ocean in winter 2012 was nearly the same as that of nuclear weapons testing.The Fukushima-derived 134Cs will decay to less than the detection limit within the coming decade because of its short half-life.After that, it will be necessary to employ 137Cs to estimate the Fukushima-derived radiocesium instead of 134Cs, a unique tracer of radiocesium released by the FNPP1 accident.Knowledge of the bomb-derived 137Cs activity concentration will be essential to evaluation of the total amount of Fukushima-derived 137Cs activity concentration.
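The decay correction and inventory bookkeeping described above can be sketched as follows. This is a minimal illustration, not the authors' processing code: the 137Cs half-life is the value given in the text, the 134Cs half-life is a standard literature value assumed here, and the activity values are placeholders.

```python
# Minimal sketch of the bookkeeping described above: decay-correcting measured
# activities back to the accident date and isolating the Fukushima-derived 137Cs
# inventory by subtracting the bomb-derived background. Numbers are illustrative.
import math

T_HALF_CS137 = 30.04   # years (given in the text)
T_HALF_CS134 = 2.06    # years (standard value, assumed here)

def decay_correct(activity, years_since_reference, t_half):
    """Convert an activity measured `years_since_reference` after the
    reference date back to its value at the reference date."""
    return activity * math.exp(math.log(2) * years_since_reference / t_half)

# Example: a 134Cs inventory measured ~0.85 y after the accident (winter 2012)
measured_cs134 = 770.0                 # Bq m-2, illustrative
cs134_at_accident = decay_correct(measured_cs134, 0.85, T_HALF_CS134)

# Fukushima-derived 137Cs = observed total 137Cs minus the bomb-derived background
observed_cs137 = 1850.0                # Bq m-2, illustrative
bomb_cs137 = 820.0                     # Bq m-2 (mean value reported above)
fukushima_cs137 = observed_cs137 - bomb_cs137
print(round(cs134_at_accident), fukushima_cs137)
```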
We measured vertical distributions of radiocesium (134Cs and 137Cs) at stations along the 149°E meridian in the western North Pacific during winter 2012, about ten months after the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident. The Fukushima-derived 134Cs activity concentration and water-column inventory were largest in the transition region between approximately 35 and 40°N due to the direct discharge of the contaminated water from the FNPP1. The bomb-derived 137Cs activity concentration just before the FNPP1 accident was derived from the excess 137Cs activity concentration relative to the 134Cs activity concentration. The water-column inventory of the bomb-derived 137Cs was largest in the subtropical region south of 35°N, which implies that the Fukushima-derived 134Cs will also be transported from the transition region to the subtropical region in the coming decades. Mean values of the water-column inventories decay-corrected for the Fukushima-derived 134Cs and the bomb-derived 137Cs were estimated to be 1020 ± 80 and 820 ± 120 Bq m−2, respectively, suggesting that in winter 2012 the impact of the FNPP1 accident in the western North Pacific Ocean was nearly the same as that of nuclear weapons testing. The relationship between the water-column inventory and the activity concentration in surface water for the radiocesium is essential information for future evaluation of the total amount of Fukushima-derived radiocesium released into the North Pacific Ocean.
There are two parts to the data. The first part of the data includes Table 1, which describes the detailed experimental conditions. The second part of the data includes raw CT images and processed Matlab matrices that show 3D maps of experimental results. For each of the nine sandstone core samples, two 3D CO2 saturation maps, one 3D porosity map, and one 3D permeability map are shared with this article. The two CO2 saturation maps contain a post-drainage initial CO2 saturation map and a post-imbibition residual CO2 saturation map. Using these two CO2 saturation maps, residual trapping relationships can be calculated for all core samples provided. All of the 3D maps are illustrated in Fig. 1. Note that the permeability map of Fontainebleau2 is not as accurate as those of the other core samples; it is, however, still provided here for data completeness. For more information, see Ni et al. Steady-state CO2/water coreflooding experiments at reservoir conditions have been conducted on nine sandstone rock samples. The samples come from the Berea, Massillon, Bentheimer, Fontainebleau, and the Shezaf sandstone formations. The nine core samples also have a wide range of heterogeneity and internal features. The experiments contain both drainage and imbibition stages. The CT scans with the highest post-drainage CO2 saturation are selected as the initial scans and the corresponding CT scans after 100% water imbibition are selected as the residual scans to be presented with this article. Both the CO2 saturation maps and the porosity maps are directly obtainable through manipulating CT images, whereas the permeability maps are calculated through an extensive iterative procedure involving reservoir simulation. For details regarding the experimental procedure and data processing, refer to Ni et al. Table 1 lists all the experimental conditions used for the coreflooding experiments performed on the nine sandstone cores, including experimental temperature, pressure, fluid types used, flow rates, and the conventional capillary numbers. The conventional capillary numbers reported here are achieved during the 100% water imbibition stages. The following properties are used to calculate the conventional capillary number for all experiments. At a pressure of 1300 psia, the CO2/water interfacial tension is σ = 35 mN/m and the water viscosity is μ = 5.4843 × 10−4 Pa s at 50 °C. The equation for the conventional capillary number is νμ/σ, where ν is the Darcy velocity. Fig. 1 illustrates all the 3D maps provided with this data article. Each column of subplots shows the four 3D maps available for each of the nine sandstone core samples. The first row of subplots shows the porosity maps. The second row shows the permeability maps. The third row shows the initial CO2 saturation maps and the fourth row shows the residual CO2 saturation maps. All CO2 saturation data provided are steady-state results, and the processed data have been averaged over three independent CT scans. For exact voxel sizes, refer to Ni et al. For more details on CT scan data processing, CT scan precision, and data uncertainty analysis, see Ni et al. and its supplementary material.
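The capillary number calculation above is straightforward; a minimal sketch using the quoted fluid properties is shown below. The Darcy velocity used here is an assumed example value, not one of the experimental flow rates in Table 1.

```python
# Sketch of the conventional capillary number, Ca = v*mu/sigma, using the fluid
# properties quoted above. The Darcy velocity is an assumed example value; the
# per-experiment velocities follow from the flow rates listed in Table 1.
def capillary_number(darcy_velocity_m_per_s, viscosity_pa_s, ift_n_per_m):
    return darcy_velocity_m_per_s * viscosity_pa_s / ift_n_per_m

mu_water = 5.4843e-4    # Pa s, water viscosity at 50 degC (from the text)
sigma = 35e-3           # N/m, CO2/water interfacial tension at 1300 psia (from the text)
v = 1.0e-5              # m/s, assumed Darcy velocity for illustration

print(f"Ca = {capillary_number(v, mu_water, sigma):.2e}")   # ~1.6e-7
```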
This data article provides detailed explanation and data on CO2/water coreflooding experiments performed on nine sandstone rock cores.Refer to the research article “Predicting CO2 Residual Trapping Ability Based on Experimental Petrophysical Properties for Different Sandstone Types” [1] for data interpretation.The reader can expect to find experimental conditions including temperature, pressure, fluid pair types, as well as flow rates.Furthermore, the raw CT images and the processed three-dimensional (3D) voxel-level porosity, permeability, and CO2 saturation maps for each of the nine sandstone samples are also supplied.
preferences and desired outcomes, and provide a more varied range of supervised exercise activities. Despite low adherence overall, those who engaged with the intervention demonstrated some notable benefits after just 10 weeks. The largest improvements were in negative symptoms. These symptoms are strongly associated with long-term functional impairment, and tend not to respond to antipsychotics. These clinically meaningful improvements corresponded with the emergent themes of qualitative interviews, which found acute ‘feel good effects’ from moderate-to-vigorous exercise, as found in other qualitative studies. However, there were minimal changes in bodyweight following the intervention. The age of participants and their duration of SMI are important factors to consider when interpreting these findings. Previous research in people with long-term SMI has suffered from high rates of attrition and found that benefits only occur for the subset of patients who adhere to the interventions provided. This may be due to long-term antipsychotic treatment, and associated sedentary habits, obesity and metabolic complications, all of which act as barriers to exercise. Nonetheless, broader lifestyle interventions have proven effective for reducing bodyweight, even in long-term obese patients. For example, Daumit et al. combined group exercise sessions with weight-management counselling, incorporating social cognitive theory and behavioral self-management in a manner which had previously proven effective in non-psychiatric populations, but was adapted for psychiatric inpatients in this study. This attracted higher rates of participation than our intervention, with 64% of eligible residents joining the study, high rates of adherence over 6 months, and significant reductions in bodyweight and waist circumference. Although the scale of this evaluation project was reasonably large, only a small proportion consented to take part in exercise, reducing our ability to generalize from these findings. Along with continuing to study the benefits in patients who volunteer for exercise trials, future research should explore new ways to reach and engage the majority of people with SMI, who may typically opt out of exercise or regular physical activity. Developing interventions which draw on and support people's autonomous motivation may be effective for increasing motivation towards exercise in this patient group. Additionally, focusing on motivational aspects of physical activity engagement, rather than just providing exercise sessions, may be more important for increasing physical activity in long-term patients. Another limitation of this study is that the only form of supervised exercise offered to participants was circuit training classes. Therefore, the lack of participation could be due to some service users having a general interest in exercise but being averse to this specific format. Thus, more targeted and personalized approaches to exercise coaching may benefit greater numbers of service users, as has been observed in other studies which tailor exercise interventions towards participant preference. The role of qualified exercise professionals in mental healthcare services could be extended beyond providing exercise classes to also include facilitating engagement in exercise activities available in service users' local communities, especially for those who feel unable to attend these activities alone. This study was funded by Greater Manchester West NHS Mental Health Foundation Trust. Corresponding author JF is funded by an MRC
Doctoral Training Grant.All authors declare that they have no competing interests.
Physical exercise is increasingly recognized as an important component of psychiatric care, although the feasibility of implementing exercise in residential care settings is not well understood. We evaluated the feasibility of a 10-week intervention of weekly fitness classes (delivered by a personal trainer) and other exercise activities using a mixed-methods approach. This was offered across four residential care services to all 51 residents who had severe mental illness (SMI). Of these, 27.5% consented to the exercise intervention. Of those who completed the intervention, increased physical activity was associated with significantly reduced negative symptoms. In conclusion, implementing exercise interventions in residential psychiatric care is challenging, given that supervised exercise classes may not be appealing to many residents, while unsupervised exercise is poorly adhered to. Future interventions should consider that better tailored exercise programs are required to adequately confront motivational issues, and to account for participant preference in order to increase engagement.
components.Table 14 shows the outcomes from component upgrades for the inter-array cables, assuming a linear manufacturing cost model.Improvements observed in most likely LCOE could be regarded as marginal; however, when considering the contribution to CAPEX from the electrical network they become more relevant.Improvement in variability was more pronounced, reducing by as much as 56.26%.Interestingly, the greatest reduction in variability was observed for different conditions to the greatest reduction in most likely LCOE.It follows that the typical strategy of investing to reduce LCOE may run contrary to the importance of minimising potential cost variance.Further investigation is required to accurately quantify the error in the most likely and 95th percentile metrics, for a given number of data points.It is important to observe that the results discussed above pertain to a very prescriptive maintenance strategy and are derived from coarse estimates for the cost and reliability of components.Nonetheless, important trends have been revealed which merit investigation of cost variability on a case by case basis.This is especially true if the highest variability is recorded at the smallest deployment scales, and a lower than expected return could deter future investment.Mitigation of these risks could be undertaken by investing in more reliable components, but clear understanding of the relationships between production costs and reliability is critical to determining the optimal level of investment.A parametric model of ocean energy converter array design and deployment, with higher complexity than previous models, has been demonstrated.The model fully integrates OEC positioning, power calculation, electrical network and station keeping design, installation of the OECs and infrastructure, lifetime maintenance and downtime prediction.Variability in the levelised cost of energy is revealed by modelling random sub-system failures and weather dependent logistics operations."Utilising the model's component level design, a framework for evaluating the impact of investment into more reliable components is proposed.A case study of a theoretical floating wave energy converter array was developed as a baseline for investigating cost variability and the impact of investment.The variability in levelised cost of energy is shown to reduce with increased size of deployment, indicating that the least reliable energy cost predictions are associated with smaller arrays.Such results may provide an incentive to accelerate development of larger arrays.Should the performance of small arrays be critical to unlocking additional funding, then this may present a risk to sustained investment.Upgrading the reliability of components can reduce predicted energy cost and variability, but the lowest cost solution may not be the least variable.Further work should examine the effect of alternative maintenance strategies on cost variability and investigate sensitivity to choice of component cost model.The influence of other sources of variability, such as power generation and installation actions, also merits further study.Quantifying the error in the economic metrics and understanding the effects of input uncertainty alongside variability is vital for real world applications.
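The discussion above rests on two summary metrics of the simulated LCOE distribution: a most likely value and a measure of variability (e.g. the 95th percentile). The sketch below shows one way such metrics might be extracted from repeated stochastic simulations; it is a hedged illustration with synthetic samples, and the use of a kernel-density mode as the "most likely" value is an assumption, not the paper's exact definition.

```python
# Hedged sketch: summarising LCOE variability from repeated stochastic simulations
# of an array design. The "most likely" value is taken here as the mode of a
# kernel density estimate and the variability as the 95th percentile. The input
# samples are synthetic placeholders, not DTOcean outputs.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
lcoe_samples = rng.lognormal(mean=np.log(0.35), sigma=0.12, size=500)  # synthetic, EUR/kWh

kde = gaussian_kde(lcoe_samples)
grid = np.linspace(lcoe_samples.min(), lcoe_samples.max(), 1000)
most_likely = grid[np.argmax(kde(grid))]
p95 = np.percentile(lcoe_samples, 95)

print(f"most likely LCOE: {most_likely:.3f}, 95th percentile: {p95:.3f}")
print(f"variability (p95 - mode): {p95 - most_likely:.3f}")
```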
Variability in the predicted cost of energy of an ocean energy converter array is more substantial than for other forms of energy generation, due to the combined stochastic action of weather conditions and failures.If the variability is great enough, then this may influence future financial decisions.This paper provides the unique contribution of quantifying variability in the predicted cost of energy and introduces a framework for investigating reduction of variability through investment in components.Following review of existing methodologies for parametric analysis of ocean energy array design, the development of the DTOcean software tool is presented.DTOcean can quantify variability by simulating the design, deployment and operation of arrays with higher complexity than previous models, designing sub-systems at component level.A case study of a theoretical floating wave energy converter array is used to demonstrate that the variability in levelised cost of energy (LCOE) can be greatest for the smallest arrays and that investment in improved component reliability can reduce both the variability and most likely value of LCOE.A hypothetical study of improved electrical cables and connectors shows reductions in LCOE up to 2.51% and reductions in the variability of LCOE of over 50%; these minima occur for different combinations of components.
air traffic statistics data source employed in SAGE called CAPSTATS.It was shown that the OAG and CAPSTATS air traffic movements data sources are very similar hence it is unlikely that a large portion of the 8% difference is due to SAGE incorporating OAG data.SAGE complemented the OAG flight schedules with radar data.The greatest part of the 8% difference is likely due to the inclusion of radar data capturing cargo, military, charter and unscheduled flights.The 8% difference can effectively be thought of as an estimate of the number of unscheduled departures not captured by CAPSTATS.The estimates of the CO, HC and NOx emissions were 10%, 140% and 30% higher than those predicted by SAGE respectively.The differences between the CO and HC estimates serve as a first quantification of the magnitude of the large modelling uncertainties associated with the EIHC and EICO below the 7% engine power setting, HC in particular, when the BFFM2 is applied to the calculation of HC and CO emissions.In SAGE, a cap was implemented whereby the EIHC and EICO were not modelled at fuel flows below the 7% engine power setting; a cap was not applied in APMI however.Also, these differences highlight the importance of the log–log curve fits used with the BFFM2.The execution of the APMI software was performed on the University of Bristol’s High Performance Computing cluster BlueCrystal demonstrating a novel approach to modelling aircraft fuel burn and emissions and the computational advantages that High Performance Computing can offer this area of research.Previously, modelling aircraft fuel burn and emissions may have been restricted to estimates for one or a handful of years only due to computational intensity.High Performance Computing offers a vast improvement in that respect.Ultimately, this paper demonstrates that obtaining a consistent and extended timeline of estimates of commercial air traffic fuel burn and emissions is fundamentally limited by the lack of a free, publicly available and suitable air traffic movements database.It shows that, when such a database can be procured, this can be achieved in a relatively short time by a small team of researchers as opposed to large organisations like NASA, the FAA or the EC.
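The discussion above highlights how sensitive the CO and HC estimates are to the log–log curve fits used with the BFFM2 at low fuel flows, and to whether a cap is applied below the 7% power setting. The sketch below illustrates that idea only: emission indices are interpolated linearly in log(EI) versus log(fuel flow) space, with an optional clamp standing in (as a simplification) for SAGE's cap. The reference values are invented for illustration and are not taken from any engine's ICAO databank entry.

```python
# Hedged sketch of the kind of log-log fit the BFFM2 relies on: emission indices
# (EI) at the four certification power settings are interpolated linearly in
# log(EI) vs log(fuel flow) space. Reference values below are illustrative only.
import numpy as np

fuel_flow_ref = np.array([0.10, 0.30, 0.85, 1.00])   # kg/s at 7/30/85/100% power (assumed)
ei_co_ref     = np.array([30.0, 5.0, 1.0, 0.8])      # g CO per kg fuel (assumed)

log_ff, log_ei = np.log(fuel_flow_ref), np.log(ei_co_ref)

def ei_co(fuel_flow_kg_s, cap_below_min=False):
    """EI(CO) from a piecewise-linear fit in log-log space.
    cap_below_min=True clamps the value at the 7% point (a simplified stand-in
    for SAGE's cap); otherwise the lowest segment is extrapolated (APMI-like)."""
    x = np.log(fuel_flow_kg_s)
    if x < log_ff[0] and not cap_below_min:
        slope = (log_ei[1] - log_ei[0]) / (log_ff[1] - log_ff[0])
        return float(np.exp(log_ei[0] + slope * (x - log_ff[0])))
    return float(np.exp(np.interp(x, log_ff, log_ei)))

# The divergence below the 7% point is what drives the large HC/CO differences.
print(ei_co(0.05), ei_co(0.05, cap_below_min=True))
```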
Estimates of global aviation fuel burn and emissions are currently nearly 10years out of date.Here, the development of the Aircraft Performance Model Implementation (APMI) software which is used to update global commercial aviation fuel burn and emissions estimates is described.The results from APMI are compared with published estimates obtained using the US Federal Aviation Administration's System for Assessing Aviation's Global Emissions (SAGE) for the year 2006.The number of global departures modelled with the APMI software is 8% lower compared with SAGE and reflects the difference between their commercial air traffic statistics data sources.The mission fuel burn, CO2 and H2O estimates from APMI are approximately 20% lower than those predicted by SAGE for 2006 while the estimate for the total global aircraft SOx emissions is approximately 40% lower.The estimates for the emissions of CO, HC and NOx are 10%, 140% and 30% higher than those predicted by SAGE respectively.The reasons for these differences are discussed in detail.
such as online-to-offline, business-to-customer, business-to-business, and so forth, a large amount of information on effective individual needs becomes hidden in big data.An essential question in product design is how to mine and transform individual requirements in order to design customized equipment with high efficiency and low cost.Customized equipment design is usually based on mass production, which is further developed in order to satisfy the customers’ individual requirements.Modular recombination design and variant design are carried out for the base product and its composition modules, in accordance with the customers’ special requirements, and a new evolutionary design scheme that is furnished to provide options and evolve existing design schemes is adopted.An individual customized product is provided for the customers, and the organic combination of a mass product with a traditional customized design is achieved.In the Internet age, the design of customized equipment stems from the knowledge and experience of available integrated public groups, and is not limited to a single designer.In this way, the innovation of customized equipment is enhanced via swarm intelligence design.As a result, the Internet Plus environment has transformed the original technical authorization from a manufacturing enterprise interior or one-to-one design into a design mode that fuses variant design with swarm intelligence design.Intelligent design using intelligent CAD systems and KBE is a new trend in the development of product design.This is a gradually deepening process of data processing and application, which moves from the database to the data warehouse, and then to the knowledge base.Fig. 9 shows the GUI of an accuracy allocation design for NC machine tools.Fig. 10 shows the GUI of a design integration for NC machine tools.Fig. 
11 shows surface machining using a five-axis NC machine center with a 45° tilt head.The process of intelligent design corresponding to individual requirements includes achieving individual mutual fusion of the requirements and parameters, and providing a foundation to solve the dynamic response and intelligent transformation of individual requirements.The basic features of future customized equipment design are numerous, incomplete, noisy, and random.Unstructured design requirement information is equally mapped between individual requirements.A mutual fusion-mapping model of the different requirements and design parameters from the big data environment is urgently needed.The process of customized design using swarm intelligence includes achieving the drive and feedback of a swarm intelligence platform design, and providing technological support for a further structural innovation design platform for Internet Plus customized equipment.The future design of customized products lies in the process of cooperation between multiple members of the public community and in swarm intelligence design, which is not limited to a single designer.Swarm intelligence design can be integrated into the intelligence of public groups.The process of intelligent design for customized products with a knowledge push includes achieving the active push of a design resource based on feedback features, and enhancing the design intelligence of complex customized equipment.In future, intelligent design for customized products can be achieved by design status feedback and scene triggers based on a knowledge push.With the development of advanced technology such as cloud databases and event-condition-action rules , future intelligent design for customized products will be more requirement-centered and knowledge-diversified, with appreciable specialty and higher design efficiency.
The development of technologies such as big data and cyber-physical systems (CPSs) has increased the demand for product design.The key technologies of intelligent design for customized products include: a description and analysis of customer requirements (CRs), product family design (PFD) for the customer base, configuration and modular design for customized products, variant design for customized products, and a knowledge push for product intelligent design.The development trends in intelligent design for customized products include big-data-driven intelligent design technology for customized products and customized design tools and applications.
Partially observable Markov decision processes are a natural model for scenarios where one has to deal with incomplete knowledge and random events.Applications include, but are not limited to, robotics and motion planning.However, many relevant properties of POMDPs are either undecidable or very expensive to compute in terms of both runtime and memory consumption.In our work, we develop a game-based abstraction method that is able to deliver safe bounds and tight approximations for important sub-classes of such properties.We discuss the theoretical implications and showcase the applicability of our results on a broad spectrum of benchmarks.
This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs.
developed stage of degradation, S, O, and Cl attack the alloy interior, reducing the integrity of the alloys and resulting in rapid mass loss and crack formation.For alloy 600 it was found that the oxide scale composition has a principally different composition than 310S and 800H/HT.No major Na-Cr-O formation was found in the alloy surface scale, and no nitrides were found in the alloy.Additionally, sulfides and oxides formed below the surface scale, but while both of these also lowered the Cr activity of the alloy matrix, neither inhibited the activity-driven mass transport of Cr to the surface scale.We conclude that NaCl and Na2SO4, via the formation of Cr-Na-O compounds and the resulting release of elemental Cl and S, are more adverse to the alloy chromia scales and alloy internals than KCl and K2SO4.Also, Na2SO4 is more adverse than K2SO4 from an initial mass-transport point of view because of the lower melting point of Na2SO4.In all, it is recommended that alloys with low solubility of nitrogen, such as Ni-base materials, be employed in this application.If Fe-Cr–dominated alloys are used, the injection of ammonia or the combustion of, for example, nitrogen-rich biomass should be done in a way that decreases the VF’s exposure to highly reactive elemental N. Additionally, a reduction of the Na content of the fuel, and a limitation of the combustion temperature below the Na2SO4 melting temperature of 884 °C, should limit the interactions of Na, Cl and S with the chromia scales and alloy internals.The raw/processed data required to reproduce these findings cannot be shared at this time because the data also form part of an ongoing study.
Mechanisms of alloy degradation in a fireside N-S-O-C-H-Cl-Na-K atmosphere at 880 °C were elucidated using SEM-EDS, chemical equilibrium calculations, and XRD.Alloys 310S, 800H/HT, and 600 were studied after 0, 8000, and 16,000 h exposure in a boiler co-firing biomass waste.For 310S and 800H/HT it was shown that nitrogen formed internal Cr nitrides lowering the Cr activity and inhibiting internal alloy Cr permeation, and that NaCl and Na 2 SO 4 reacted with Cr oxide to form chromate and to accelerate the S and the Cl pickup.Alloy 600 showed no nitride or major chromate formation.
an ash detection tool but further work investigating its effectiveness for other volcanic emissions would be interesting and worthwhile.This example demonstrates the importance of using multiple techniques and interpretations for guidance in a hazard situation and highlights the benefit of qualitative interpretation over the exclusive use of thresholds.The SDI is a fast and simple calculation routinely implemented at some operational centres for dust monitoring and could be implemented for ash detection at relatively little computational cost, complementing the suite of already existing ash detection tools with a product that forecasters are already familiar with and in many cases is already available to them."The index has already been extended for other satellite sensors and through radiative transfer simulations, could be developed for the Himawari-8 satellite which has sensors which can detect at the same wavelengths as the SDI, which could extend the index's range to the West Pacific: a region of active volcanism.We have demonstrated one way in which the SDI could be useful to ash detection problems and shown its effectiveness as a qualitative tool to be comparable to other detection tools, although it was also shown to be sensitive to other aerosols.Quantitatively, the SDI was seen to be slightly less skilful than the more established split window method for the studied scenes, however uncertainties in the ‘truth’ assumed for quantification of the skill make it difficult to conclude that one is more effective than the other for these scenes.Furthermore, the scene-specific thresholds used to produce a binary mask for calculation of the quantitative skills scores was determined here through reference to the ‘truth’, which is not available for real-time applications.Future work to determine the sensitivity of the skill of the methods to this threshold, or a comparison of the methods using fixed, pre-determined thresholds would provide further insight into the relative skill of the SDI as a quantitative tool.In practice it is recognised that fixed, predetermined thresholds are often inappropriate and forecasters often refer to qualitative products and construct deterministic products by adjusting thresholds through expert judgement and so it was deemed inappropriate for this preliminary investigation to use a fixed threshold.The lack of an absolute truth against which to verify remote sensing results is widely recognised and the expert mask used here by no means solves it, however it does offer an alternative to the single pixel comparisons that are possible through colocation of observations from other instruments such as LiDAR, and to the comparison of contemporary remote sensing products which often rely on similar assumptions.By focusing on the study image, it also avoids the problem of comparing observations of slightly different volumes of atmosphere, which can be challenging to compensate for when observations from different instruments are compared.Our study also highlights some of the disadvantages of relying on a binary approach to ash detection in preference to qualitative products, which arguably contain more information, particularly in cases where ash and cloud are both present.Further work to investigate the effectiveness of the SDI for monitoring ash with a greater range of ash compositions and other aerosols, in a greater range of atmospheric conditions, is necessary in order to fully validate it as a measure for ash detection, but this demonstration suggests that it 
could usefully complement existing techniques for day and night monitoring of ash hazards.
Despite the similar spectral signatures of ash and desert dust, relatively little has been done to explore the application of dust detection techniques to the problem of volcanic ash detection.The Saharan dust index (SDI) is routinely implemented for dust monitoring at some centres and could be utilised for volcanic ash detection with little computational expense, thereby providing a product that forecasters already have some familiarity with to complement the suite of existing ash detection tools.We illustrate one way in which the index could be implemented for the purpose of ash detection by applying it to three scenes containing volcanic ash from the 2010 Eyjafjallajökull eruption, Iceland and the 2011 eruption of Puyehue, Chile.It was also applied to an image acquired over Etna in January 2011, where a volcanic plume is clearly visible but is unlikely to contain any ash.These examples demonstrate the potential of the SDI as a tool for ash monitoring under different environmental and atmospheric conditions.In addition to presenting a valuable qualitative product to aid monitoring, this work includes a quantitative assessment of the detection skill using a manually constructed expert ash mask.The optimum implementation of any technique is likely to be dependent on both atmospheric conditions and on the properties of the imaged ash (which is often unknown in a real-time situation).Here we take advantage of access to a 'truth' rarely available in a real-time situation and calculate an ash mask based on the optimum threshold for the specific scene, which is then used to demonstrate the potential of the SDI.The SDI mask is compared to masks calculated from a simplistic implementation of the more traditional split window method, again exploiting our access to the 'truth' to set the most appropriate threshold for each scene, and to a probabilistic method that is implemented without reference to the 'truth' and which provides useful insights into the likely cloud-/ash-contamination of each pixel.Since the sensitivity of the SDI and split window methods to the tailored thresholds was not tested (such tailoring is unlikely to be possible in a real situation), this study presents the maximum anticipated skill for the SDI in the context of the maximum skill anticipated for the split window method, although both are likely to be lower in a real-time situation.The results for the SDI are comparable to those of the other methods, with a true skill score of 80.02% for the Eyjafjallajökull night-time scene (compared to 88.81% and 46.63% for the split window and probabilistic method respectively) and 90.06% for the Eyjafjallajökull day-time scene (compared to 97.61% and 56.96%).For the Puyehue image, the SDI resulted in a true skill score of 74.85%, while the split window approach achieved 99.62%.These results imply that the SDI, which is already implemented operationally at some centres for dust detection, could be a useful complement to existing ash monitoring techniques.
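The split-window test and the true skill score used for the comparisons above are both simple to compute. The sketch below is a generic illustration, not the study's processing chain: the brightness-temperature arrays, the expert mask, and the threshold are synthetic placeholders (in practice the threshold was tailored per scene).

```python
# Hedged sketch: a simplistic split-window ash mask (negative 11-12 micron
# brightness-temperature difference) and the true skill score (TSS) computed
# against an expert "truth" mask. All arrays and the threshold are synthetic.
import numpy as np

rng = np.random.default_rng(0)
bt11 = 260 + 5 * rng.standard_normal((100, 100))   # ~10.8 um brightness temp (K)
bt12 = bt11 + rng.normal(0.5, 1.0, (100, 100))     # ~12.0 um brightness temp (K)
expert_mask = (bt11 - bt12) < -0.8                 # stand-in for the manual expert mask

threshold = -0.5                                    # scene-specific in the study
ash_mask = (bt11 - bt12) < threshold                # reverse-absorption signal

tp = np.sum(ash_mask & expert_mask)
fn = np.sum(~ash_mask & expert_mask)
fp = np.sum(ash_mask & ~expert_mask)
tn = np.sum(~ash_mask & ~expert_mask)

tss = tp / (tp + fn) + tn / (tn + fp) - 1           # hit rate minus false-alarm rate
print(f"TSS = {100 * tss:.2f}%")
```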
The endothelium exerts many vasoprotective effects that are largely mediated by nitric oxide. These include anti-oxidative effects, anti-inflammatory effects, and anti-platelet aggregation effects. Endothelial dysfunction is an early, reversible step in atherosclerosis and is characterized by a reduction in the bioavailability of NO. Risk factors such as smoking, hypertension, diabetes and dyslipidemia cause changes in endothelial cells which lead to oxidative stress and the loss of the endothelium’s ability to produce NO. Endothelial NO is generated from the conversion of l-arginine to l-citrulline by nitric oxide synthase, a process that requires multiple co-factors. When co-factor levels are insufficient, eNOS cannot couple the reduction of molecular oxygen with the oxidation of l-arginine, which results in the generation of O2− rather than NO, a process known as eNOS uncoupling. This excess in O2− can modify LDL to form oxidized LDL. Elevations in LDL, and especially oxLDL, further contribute to eNOS uncoupling by markedly decreasing NO bioavailability and increasing O2− in a concentration dependent manner. Finally, endothelial uptake of oxLDL contributes to vascular inflammation and atherosclerotic progression, and circulating levels of oxLDL have been shown to correlate with severity of acute coronary syndromes and an increased risk for myocardial infarction and metabolic syndrome. Elevated triglycerides are an independent risk factor for cardiovascular disease, which may be in part due to increases in inflammation. However, studies administering TG-lowering agents such as fenofibrate and niacin have shown little cardiovascular benefit when added to statins or compared to statin monotherapy, although these studies also did not prospectively enroll patients with elevated TG levels. Medical management of high TGs with omega-3 fatty acids has been shown to reduce circulating triglycerides, cholesterol-containing remnant lipoproteins, and oxLDL, as well as to reduce the volume and increase the stability of atherosclerotic plaque. While also not prospectively enrolling patients with elevated TG levels, some, but not all, outcome studies administering O3FA have resulted in significant reductions in cardiovascular risk. For example, in the Japan EPA Lipid Intervention Study, purified eicosapentaenoic acid was effective in reducing the risk of major coronary events in hypercholesterolemic patients on top of statin therapy compared to statin therapy alone. Yet there are challenges in evaluating the roles of O3FA across studies, introduced by both dosing and formulation heterogeneity, with many trials having utilized low doses of O3FA with varying ratios of EPA and docosahexaenoic acid as well as unregulated dietary supplements. In addition to TG-lowering and other lipid changes, O3FA may provide cardioprotection beyond lipid-lowering. Previous studies show O3FAs have direct effects on vascular membranes where, due to the lipophilic structure and molecular space dimensions, they intercalate within lipid bilayers and may play important roles in the maintenance of endothelial function, local inflammation, platelet activation, and other cellular processes. Although EPA and DHA have some similar benefits, increasing evidence suggests that EPA and DHA may differentially affect oxidation, membrane structure, and other activities in vivo. While EPA has demonstrated atheroprotective effects, the precise mechanism of these properties has not been fully explored. We have previously shown that treatment with EPA inhibits
oxidation of low density lipoprotein, very low density lipoprotein and small, dense LDL particles in vitro when administered alone or in combination with a widely used statin to a greater extent than what is observed with other TG-lowering agents .We have also demonstrated that EPA inhibits lipid peroxidation and cholesterol domain formation in model membranes exposed to oxidative stress .Endothelial cell dysfunction is causally related to atherosclerosis and is associated with higher cardiovascular risk , thus treatments leading to reversal of EC dysfunction may lead to benefits in coronary artery disease.While both EPA and statins have been shown separately to improve EC function , their effects in combination have not yet been examined.In the series of in vitro experiments described here, we evaluated the effects of treatment with various TG-lowering agents versus treatment with O3FA on endothelial cells exposed to oxidized lipoproteins.Comparative and time-dependent effects of these agents on NO and peroxynitrite release levels were measured in human umbilical vein endothelial cells.These investigations were expanded to an ex vivo model utilizing rat glomerular endothelial cells exposed to conditions modeling either hyperglycemia alone or in parallel to oxLDL exposure to model oxidative stress.The omega-3 fatty acid cis-5,8,11,14,17-eicosapentaenoic acid was purchased from Sigma-Aldrich and prepared initially at 10 mM in redistilled ethanol.Primary and secondary O3FA stock solutions were prepared and stored under argon at −20 °C.Ortho-hydroxy atorvastatin metabolite was synthesized and purchased from Toronto Research Chemicals and solubilized in ethanol at 1 mM; subsequent dilutions were prepared in ethanol or aqueous buffer as needed.Fenofibrate, gemfibrozil, and nicotinic acid were purchased from Toronto Research Chemicals and solubilized in ethanol at 1 mM.HUVECs were isolated into primary cultures from female donors by Clonetics and purchased as proliferating cells.All cell culture donors were healthy, with no pregnancy or prenatal complications.The cultured cells were incubated in 95% air/5% CO2 at 37 °C and passaged by an enzymatic procedure.The confluent cells were placed with minimum essential medium containing 3 mM l-arginine and 0.1 mM BH4 .Before experimental use, the cells were rinsed twice with Tyrode-HEPES buffer with 1.8 mM CaCl2.Venous blood from healthy normolipidemic volunteers was collected into Na-EDTA vacuum tubes after a 12-hour fast.Plasma was immediately separated by centrifugation at 3000 g for 10 min at 4 °C.LDL was separated from freshly drawn plasma by preparative ultracentrifugation with a Beckman ultracentrifuge equipped with an SW-41 rotor .The density of plasma was adjusted to 1.020 g/mL with sodium chloride solution, the plasma
The endothelium exerts many vasoprotective effects that are largely mediated by release of nitric oxide (NO).Endothelial dysfunction represents an early but reversible step in atherosclerosis and is characterized by a reduction in the bioavailability of NO.Previous studies have shown that eicosapentaenoic acid (EPA), an omega-3 fatty acid (O3FA), and statins individually improve endothelial cell function, but their effects in combination have not been tested.
of any changes in eNOS expression, suggesting that the mechanism responsible for this benefit is related to eNOS efficiency rather than an increase in the total amount of enzyme or its activity. As noted earlier, when the process known as eNOS uncoupling occurs, excess O2− is generated instead of NO, which both decreases NO bioavailability and increases oxidation of LDL to oxLDL. Increased LDL, and more specifically oxLDL, reduces the endothelial cell NO/ONOO− release ratio, illustrating that dyslipidemia may be causally related to endothelial dysfunction as a result of eNOS uncoupling, a process which may be protected against by EPA and ATM. We observed EPA-mediated protection of ApoB-containing particles, particularly LDL and sdLDL. Oxidized LDL is a known contributor to endothelial dysfunction, vascular inflammation, and other processes involved in the development of atherosclerosis. Several lines of evidence suggest that sdLDL is highly atherogenic as compared to larger LDL particles, especially since sdLDL is more susceptible to oxidative modification as compared to LDL. In addition, EPA inhibits oxidation in ApoB-containing particles for a longer period of time than DHA, suggesting that EPA may have more sustained antioxidant benefits than DHA. Oxidized lipids associated with lipoprotein particles are a major source of vascular inflammation during atherosclerosis. Evidence shows that sdLDL levels lead to a higher risk of CAD. Taken together, this suggests that the effects of EPA on sdLDL levels and other ApoB-containing particles could be clinically important given the atherogenicity associated with their oxidation. This finding may also have clinical implications for EPA in comparison to DHA with regards to reducing oxidation of ApoB-containing lipoprotein particles. Pretreatment of HUVECs with EPA and ATM prior to oxLDL exposure revealed a beneficial effect on endothelial function. This observation reinforces the idea that favorable interactions between EPA and ATM may be related to their similar distributions in the lipid environment of cell membranes and lipid particles, as well as their shared antioxidant properties. Lastly, we found that the beneficial effects of EPA and ATM on endothelial cells in vitro could be extended to an ex vivo model. While EPA and ATM both individually showed benefit, a combination treatment of EPA and ATM exhibited additional improvement regarding NO bioavailability in an ex vivo rat system modeling either hyperglycemia alone, or with parallel exposure to oxLDL. Therefore, treatments such as EPA and ATM that improve NO bioavailability may have a therapeutic effect in CAD prevention. The potent endothelial effects observed ex vivo may help to explain the reduced CV events observed for hypercholesterolemic patients that received EPA in addition to statin treatment. EPA may provide unique benefit to endothelial function, as contrasted to other TG-lowering agents that have thus far failed to show a reduction in CV events when combined with statins. In conclusion, combined treatment of endothelial cells with EPA and ATM inhibited endothelial dysfunction in response to conditions modeling hyperglycemia, oxidative stress, and dyslipidemia. This result was verified by multiple experimental approaches. RPM has received grant/research support from Amarin Pharma Inc., Pfizer Inc., Amgen Inc, ARCA Biopharma and Novartis AG. RPM is a paid speaker and consultant for Amarin Pharma Inc., Pfizer Inc. and Novartis AG.
Through a series of in vitro experiments, this study evaluated the effects of a combined treatment of EPA and the active metabolite of atorvastatin (ATM) on endothelial cell function under conditions of oxidative stress.Specifically, the comparative and time-dependent effects of these agents on endothelial dysfunction were examined by measuring the levels of NO and peroxynitrite (ONOO−) released from human umbilical vein endothelial cells (HUVECs).The data suggest that combined treatment with EPA and ATM is beneficial to endothelial function and was unique to EPA and ATM since similar improvements could not be recapitulated by substituting another O3FA docosahexaenoic acid (DHA) or other TG-lowering agents such as fenofibrate, niacin, or gemfibrozil.Comparable beneficial effects were observed when HUVECs were pretreated with EPA and ATM before exposure to oxidative stress.Interestingly, the kinetics of EPA-based protection of endothelial function in response to oxidation were found to be significantly different than those of DHA.Lastly, the beneficial effects on endothelial function generated by combined treatment of EPA and ATM were reproduced when this study was expanded to an ex vivo model utilizing rat glomerular endothelial cells.Taken together, these findings suggest that a combined treatment of EPA and ATM can inhibit endothelial dysfunction that occurs in response to conditions such as hyperglycemia, oxidative stress, and dyslipidemia.
the DTW approach.Here the DTW approach only synchronizes batch trajectories of shorter batches while the batch end-product quality for those shorter batches is kept constant from their original endpoints.This paper has studied an approach to align uneven batch trajectories and the corresponding batch end-product quality values.The principle of the proposed method is to identify short-window PCA&PLS models at first and then to apply the identified models to estimate missing trajectories for shorter batches and also to predict future batch end-product quality for those shorter batches.Thus all batches are to be the same length through feeding the missing data to shorter batches and updating the corresponding batch end-product quality.The proposed method can also align uneven batch data to be a specific batch length between the shortest and the longest batches.Thus extra flexibility exists for the control of batch-end product quality as the remaining batch running length is not fixed at each control decision point.The application of the proposed data alignment method to a benchmark simulation for penicillin fed-batch fermentation has demonstrated its effectiveness in estimating missing trajectories and predicting future batch end-product quality.It should be emphasized that the proposed data alignment method is only applicable to those batch processes that can be modeled by single PCA and PLS models.For batch processes with multiple phases or key events happening during the batch run that change the correlation characteristics, multiple local models should be employed to align data for each phase so as to ensure key events overlapping for all batches.Furthermore, for those processes that can hardly be modeled by a linear model such as PCA and PLS models, nonlinear-type modeling methods should be applied instead for uneven batch data alignment.The application of the proposed data alignment method to those complex processes with multiple phases and/or nonlinear process dynamics can be the future work.
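The core of the approach above is using an identified PCA (or PLS) model to estimate the measurements that a shorter batch has not yet produced. The sketch below illustrates only the generic missing-data estimation step (least-squares projection of the known part onto the loadings, then reconstruction); it is not the paper's exact short-window PCA&PLS procedure, and the data are synthetic.

```python
# Hedged sketch of the missing-data estimation idea underlying the proposed
# alignment: a PCA model identified from complete batches is used to estimate a
# shorter batch's unmeasured variables by least-squares projection of its known
# entries onto the loadings, followed by reconstruction. Generic illustration only.
import numpy as np

def fit_pca(X, n_components):
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components].T          # loadings P (n_vars x k)

def estimate_missing(x_known, known_idx, mean, P):
    """Estimate a full observation from its known entries using the PCA model."""
    Pk = P[known_idx, :]                       # loadings for measured variables
    t, *_ = np.linalg.lstsq(Pk, x_known - mean[known_idx], rcond=None)
    return mean + P @ t                        # reconstructed full vector

# Synthetic example: 50 complete "batches" described by 8 correlated variables
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 8)) \
    + 0.05 * rng.standard_normal((50, 8))

mean, P = fit_pca(X, n_components=2)
x_short = X[0]                                 # pretend the last 3 variables are missing
known = np.arange(5)
x_filled = estimate_missing(x_short[known], known, mean, P)
print(np.round(x_filled[5:] - x_short[5:], 3)) # small reconstruction error
```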
Batch processes are commonly characterized by uneven trajectories due to the existence of batch-to-batch variations.The batch end-product quality is usually measured at the end of these uneven trajectories.It is necessary to align the time differences for both the measured trajectories and the batch end-product quality in order to implement statistical process monitoring and control schemes.Apart from synchronizing trajectories with variable lengths using an indicator variable or dynamic time warping, this paper proposes a novel approach to align uneven batch data by identifying short-window PCA&PLS models at first and then applying these identified models to extend shorter trajectories and predict future batch end-product quality.Furthermore, uneven batch data can also be aligned to be a specified batch length using moving window estimation.The proposed approach and its application to the control of batch end-product quality are demonstrated with a simulated example of fed-batch fermentation for penicillin production.
the same subcategory have shorter paths between them, requiring about one less transition from any starting-point job. The first and last rows of each table section show the closeness between non-green and green jobs. The number of connections between Green Rival and non-green jobs is similar to that of Green Rival and green jobs, whereas Other jobs have longer paths to reach green jobs compared with non-green jobs. These findings indicate that greening will likely be a long-term process and may require more than one stage of re-training, since on average it takes non-green workers more than one transition to join the green economy. While average path lengths are a useful summary statistic of the general closeness between jobs in the network, the shortest path length to any green job can give an optimistic estimate of green job growth. The length of the shortest path to any green job was calculated for each Green Rival and Other job. This length represents the quickest possible way for a worker to join the green economy. The majority of Green Rival jobs can transition directly to a green job, while most Other jobs require an extra transition.33, These findings suggest that there is large potential for short-run growth in the share of workers involved in green economic activity, if transitions are strategically managed. The green growth transition is expected to have a large structural impact on labour markets worldwide. As with previous large-scale labour market shocks such as job outsourcing/offshoring and the IT revolution, greening will change the skills required and tasks involved in existing occupations, and shifts in relative demand for particular occupations will require job transitions and may change workers' career paths. Using O*NET's definition of green jobs, the proportion employed in the US green economy, using the broadest definition of green jobs, could be as much as 19.4% of the total workforce. However, a large proportion of this estimated employment would be ‘indirectly’ green, with 10.3% of the total workforce actually using any specifically green tasks in their jobs and 1.2% employed in jobs that are unique to the green economy. While there is a large proportion of employment in jobs that are closely related to green jobs, there is also a substantial proportion of employment in jobs that are not closely related to green jobs, which limits the potential short-term labour market benefits of the green transition. The use of green tasks and types of skills required varies greatly across the green job subcategories defined by O*NET, which suggests that ‘green’ should be considered as a continuum rather than a binary characteristic. Between the two ‘directly’ green job categories, Green New and Emerging jobs are ‘greener’ than Green Enhanced Skills jobs, i.e.
involve a higher proportion of green tasks to non-green tasks and use green tasks more frequently, and also rely more heavily on non-routine skills.It is also important to recognise that non-green jobs fall into two distinct subcategories: aside from their connection to green jobs, Green Rival and Other jobs also differ in standard skill measures and skill content.It is important to account for this heterogeneity within green and non-green job categories when defining green employment and designing re-training programmes.Analysis of skill content indicates that it is easier to transition to indirectly green rather than directly green jobs.Among the three categories of green jobs, Green Rival jobs are more similar to Green Increased Demand jobs in terms of educational requirements and the types of skills utilised more heavily.Compared to ‘directly’ green jobs, Green Rival jobs are typically lower-wage, lower-skill, require less on-the-job training, and involve more routine and manual skills.However, all the distance measures used in this paper indicate that these differences are not large in absolute terms.Green Rival and green jobs differ in only a few specific aspects, so the scale and scope of transitions due to greening is likely to be similar to that of existing job transitions and much smaller than transitions which resulted from the IT revolution and outsourcing, so re-training can mostly happen on-the-job.34,Network analysis shows that the green economy has large potential for short-run growth, if job transitions are strategically managed.
This paper estimates the share of jobs in the US that would benefit from a transition to the green economy, and presents different measures for the ease with which workers are likely to be able to move from non-green to green jobs.Using the US O*NET database and its definition of green jobs, 19.4% of US workers could currently be part of the green economy in a broad sense, although a large proportion of green employment would be ‘indirectly’ green, comprising existing jobs that are expected to be in high demand due to greening, but do not require significant changes in tasks, skills, or knowledge.Analysis of task content also shows that green jobs vary in ‘greenness’ with very few jobs only consisting of green tasks, suggesting that the term ‘green’ should be considered a continuum rather than a binary characteristic.While it is easier to transition to indirectly green rather than directly green jobs, greening is likely to involve transitions on a similar scale and scope of existing job transitions.Non-green jobs generally appear to differ from their green counterparts in only a few skill-specific aspects, suggesting that most re-training can happen on-the-job.Network analysis shows that the green economy offers a large potential for short-run growth if job transitions are strategically managed.
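The "shortest path to any green job" statistic discussed above can be illustrated with a toy occupational-transition network; the sketch below uses an assumed edge list and job labels rather than the O*NET data, and the `networkx` usage is a generic illustration of the calculation.

```python
# Minimal sketch: for each non-green occupation, count the fewest feasible job
# transitions needed to reach any green occupation. Toy data, not O*NET.
import networkx as nx

transitions = [            # undirected feasible-transition edges between occupations
    ("welder", "solar_installer"),
    ("roofer", "solar_installer"),
    ("clerk", "roofer"),
    ("driver", "clerk"),
]
green_jobs = {"solar_installer"}

G = nx.Graph(transitions)
for job in G.nodes - green_jobs:
    # Distance to the closest green job = minimum over all reachable green targets.
    dist = min(
        nx.shortest_path_length(G, job, g) for g in green_jobs if nx.has_path(G, job, g)
    )
    print(f"{job}: {dist} transition(s) to the nearest green job")
```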
instances.In other instances, the indirect approach was employed, where the oxidation half reaction took place on a photoexcited n-type semiconductor.As shown in Fig. 20, the overall cell design is very similar to the conventional PEM electrolyzers, except that the anode is irradiated .The principal benefit of this setup is that all the knowledge gathered for the cathode reaction can be implemented, while the solar energy input is harnessed.As demonstrated in this review article, multiple parameters have to be optimized simultaneously to efficiently perform continuous-flow electroreduction of CO2.Some of them are well understood, while others still need to be carefully studied.The effects of high pressure and temperature are of particular interest to achieve reasonable current density and selectivity.These parameters will also affect the surface of the catalysts , which is another factor to be studied in continuous-flow cells.Computational modeling can contribute to the rational design of electrolyzer configuration.In this vein, the reactor performance can be numerically simulated to unravel the influence of flow rate and channel geometry on CO2 conversion and consumption rate.Similarly, recent advances in 3D printing allows rapid prototyping of different cell geometries and thus will be a powerful tool in the hand of electrochemists .Furthermore, we believe that successful studies in vapor phase will open up the opportunity to use industrial exhaust fume directly as feedstock for solar fuel generation.Accordingly, different model gases containing typical impurities should be studied in the future.As for future development avenues, we would like to emphasize two directions.One is coming from the materials perspective: the need for intricate architectures where the elements of the GDE are simultaneously optimized.As shown in Sections 3.3–3.5, rationally designed interfaces are required for efficient CO2 conversion.In this endeavor, the cooperation of chemists, materials scientists, and engineers is highly recommended.The second R&D path is rooted in the fact that the anode reaction was oxygen evolution in almost all the presented studies.In such cases the formed oxygen is considered as a non-harmful by-product, and is simply let to the atmosphere without using it for any purpose.We also note that OER as the anode process can be important in Space applications, namely as a root for the recovery of O2 from CO2.With the interest of deep space exploration, it is of high importance to improve such key enabling technologies.As for terrestrial applications, the formed oxygen can be compressed and sold, but driving a more beneficial electrochemical procedure on the anode could be a value-added approach.In this manner, CO2 electrolyzers could be easily integrated in other industrial processes, in which the main product is formed on the anode.There are several candidates, for example, using the oxidation of organic pollutants on the anode, which is a kinetically-facile reaction.Thus the electrolyzer can be employed as both CO2 converter and water purifier adding value to the overall process .This can be envisioned by either directly oxidizing the organic pollutants, or indirectly, by generating ozone on the anode.This concept is well-known for water electrolyzers, in which hydrogen is produced on the cathode, while oxidation of water pollutant occurs on the anode .Chlorine evolution is another technologically relevant reaction, which might be worth coupling with CO2 reduction .Importantly, the redox 
potential of chloride oxidation matches with that for the water oxidation; therefore, this approach does not lead to an increased cell voltage .In this case, however, important precursors of some commodity chemicals are formed on both the electrodes.As these products are all in the gas phase, it is easy to separate them from the aqueous electrolyte during a subsequent technological step.This concept is very similar to the so called oxygen depolarized cathode chlor-alkali cells, where chlorine is formed on the anode, while oxygen gas is reduced on the cathode .Plants operating on this concept have been in operation for years; and therefore the infrastructure and technological know-how are readily available.Finally, while H2 oxidation at the anode is not a value-added approach, it allows for gas feed on both sides, which can be beneficial in certain instances .We are also convinced that concentrated efforts need to be devoted to scale-up and scale-out, to achieve reactor sizes which are at least similar to industrially used water electrolyzers.It is worth emphasizing that conclusions drawn for electrochemical cells offering very low current densities are not necessarily valid for those with high currents.Consequently, analyzing electrodes/cells under conditions which are far removed from those which are necessary for practical applications, is a futile exercise.Finally, we hope that the proposed benchmarking protocol will provide insightful guidelines to researchers involved in this endeavor and will lead to more comparable results.
Solar fuel generation through electrochemical CO2 conversion offers an attractive avenue to store the energy of sunlight in the form of chemical bonds, with the simultaneous remediation of a greenhouse gas.While impressive progress has been achieved in developing novel nanostructured catalysts and understanding the mechanistic details of this process, limited knowledge has been gathered on continuous-flow electrochemical reactors for CO2 electroreduction.This is indeed surprising considering that this might be the only way to scale-up this fledgling technology for future industrial application.In this review article, we discuss the parameters that influence the performance of flow CO2 electrolyzers.This analysis spans the overall design of the electrochemical cell (microfluidic or membrane-based), the employed materials (catalyst, support, etc.).We highlight R&D avenues offering particularly promising development opportunities together with the intrinsic limitations of the different approaches.By collecting the most relevant characterization methods (together with the relevant descriptive parameters), we also present an assessment framework for benchmarking CO2 electrolyzers.Finally, we give a brief outlook on photoelectrochemical reactors where solar energy input is directly utilized.
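Where the text above mentions simulating reactor performance as a function of flow rate, a quick sanity check of the flow-rate/current trade-off can be made from Faraday's law. The sketch below is such a back-of-the-envelope estimate with wholly assumed operating values (current density, electrode area, CO selectivity, feed rate), not a result from any study in the review.

```python
# Minimal sketch: single-pass CO2-to-CO conversion estimated from Faraday's law.
F = 96485.0            # Faraday constant, C/mol
n_e = 2                # electrons transferred per CO2 molecule reduced to CO
current_A = 2.0        # assumed total current: 200 mA/cm^2 over a 10 cm^2 electrode
faradaic_eff = 0.90    # assumed CO selectivity
feed_sccm = 20.0       # assumed CO2 feed, standard cm^3 per minute (0 degC, 1 atm)

co2_fed = feed_sccm / 60.0 / 22414.0                    # mol CO2 fed per second
co2_converted = current_A * faradaic_eff / (n_e * F)    # mol CO2 reduced per second
print(f"single-pass conversion ~ {100 * co2_converted / co2_fed:.0f} %")
```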
useful signatures are emerging."With the availability of 'omic technologies, researchers and clinicians can now generate large amounts of data on biological samples.The maturation of bioinformatics technologies is now allowing for the analysis of large data sets such as entire genomes in shorter time frames.The ability to practice precision medicine does and will continue to depend on the knowledge acquired through the analysis of cohorts of clinical samples."Tools or interfaces that increase the utility of these 'omics in routine practice will help to drive the production and availability of the knowledge base in precision medicine to ultimately assist clinicians in taking action based on the results.Improvements to data acquisition, data analysis, and data utilization will drive precision medicine initiatives such as those proposed by President Barack Obama in his 2015 State of the Union address.And, while genomics is a mainstay for deriving a higher resolution view of human health – the view that will yield the roadmap for tailored individual healthcare – metabolomics is a clear ally.Box 1 highlights the major roles of metabolomics in current and emergent precision medicine – from large cohort analysis to individual health and risk assessment.A main focal point for where metabolomics fits into this is its relationship to the phenotype whether the phenotype is primarily driven by a single gene or a complex combination of external factors.Associating biochemical levels and alterations with specific genotypes or external factors such as the microbiome offers the ability to streamline diagnostics and utilize a greater breadth of information to the clinic to assess patient health.
Precision medicine is an active component of medical practice today, but aspirations are to both broaden its reach to a greater diversity of individuals and improve its "precision" by enhancing the ability to define even more disease states in combination with associated treatments.Given complexity of human phenotypes, much work is required.In this review, we deconstruct this challenge at a high level to define what is needed to move closer toward these aspirations.In the context of the variables that influence the diverse array of phenotypes across human health and disease - genetics, epigenetics, environmental influences, and the microbiome - we detail the factors behind why an individual's biochemical (metabolite) composition is increasingly regarded as a key element to precisely defining phenotypes.Although an individual's biochemical (metabolite) composition is generally regarded, and frequently shown, to be a surrogate to the phenotypic state, we review how metabolites (and therefore an individual's metabolic profile) are also functionally related to the myriad of phenotypic influencers like genetics and the microbiota.We describe how using the technology to comprehensively measure an individual's biochemical profile - metabolomics - is integrative to defining individual phenotypes and how it is currently being deployed in efforts to continue to elaborate on human health and disease in large population studies.Finally, we summarize instances where metabolomics is being used to assess individual health in instances where signatures (i.e.
and sensory effects including adverse effects and overdose; their physical and chemical properties and pharmacology; their traditional and modern cultural uses; the current state of scientific medical research; and national and international policy implications."Conducting a summit with this level of detail provides Federal Government executive level decision makers the critical information necessary to: identify phase 3 clinical research trials; the capital resources and time required to complete these clinical trials; addressing ethical issues surrounding the use of these agents; and collectively deciding how best to update current policies and regulations to advance this research, “with the goal of ensuring that the nation's drug policies are informed by science.”",.There should be no illusion to the challenges that lay ahead in conducting this research and by no means is this process a sprint, but instead be regarded as nothing short of a marathon.It will take committed, combined, and collegial leadership from all affected Departments and agencies actively engaged to see this endeavor through.Currently identified psychedelic agents such as DMT, psilocybin, and mescaline are naturally occurring, and agents such as LSD and MDMA have existed for decades beyond their patent expirations."Given the enormity and immediacy of the mental health crisis, the lack of financial incentives for the private sector to engage in this research, and the sheer magnitude of research needed to determine the therapeutic efficacy of these agents; it's in the Federal Government's strategic interest to fund this field of research.We are facing in the US and globally a multigenerational crisis of epidemic proportions due to mental health related disorders with loss of life, profound reduction in quality of life, with increasing recognition that more needs to be done.Mental health disorders, including treatment resistant depression, anxiety, addiction, and PTSD, have and will continue their combined overt and covert steady-state weakening of the private and public sectors which in turn will continue to undermine our overall economic structure.It is the responsibility of the Federal Government to undertake those challenges that simply are too great for the private sector to tackle itself.Nonetheless, the private sector has both a potential role and opportunity, and balanced regulatory approaches can incentivize the commitment of the private sector in co-developing profoundly needed new medicinal treatments.Endeavoring to alter the trajectory of this mental health crisis through conducting research is one where the combined collaborative efforts of the Federal Government, public, and private entities will be required, with Federal Government leadership, intervention, and partnership being essential."An illustration of this is the FDA's efforts to listen to and then work to incentivize the pharmaceutical industry to develop abuse deterrent opioids to find safer ways to alleviate pain and suffering, efforts that resulted in the development and approval of more than ten such advances in pain medicines in the last few years.This commentary has focused on the scientific foundation for understanding the nature, etiology and prevalence of various mental and behavioral disorders and the clinical advances in potential treatments.However, as evident from the provisions of the CSA, science is not the only consideration in health policy.As discussed in this commentary and elsewhere in this special issue of Neuropharmacology, 
the CSA was developed during a time of fear, political concern, and misinformation about psychedelic substances that led to establishing substantial barriers impeding their research and potential clinical uses.Conversely, personal and social factors, along with new emerging clinical scientific information, are relevant to the resurgence of interest in research and potential application of certain psychedelic substances.Given the extent and prevalence of brain-related disorders, it seems likely that few scientists in the field have not themselves been influenced in their research interests and policy opinions based on their own professional and personal experiences involving family, close colleagues, and friends who have suffered from numerous mental health disorders.Such experiences galvanize a deep personal commitment to serve humanity through pursuing scientific research and clinical treatment development, and this is the case with the authors of this commentary.From these perspectives, we close this commentary with reference to Table 8, which summarizes three reflections of author CAPT Belouin.The views, opinions, and content of this publication are those of authors CAPT Sean J. Belouin, and Jack E. Henningfield, and do not necessarily reflect the views, opinions, or policies of the US Public Health Service, the US Department of Health and Human Services, and the Substance Abuse and Mental Health Services Administration.Through, Pinney Associates, JEH has consulted and/or are presently on the evaluation and regulation of pharmaceutical products including opioid and nonopioid analgesics, psilocybin, and other CNS acting products.
The purpose of this commentary is to provide an introduction to this special issue of Neuropharmacology with a historical perspective of psychedelic drug research, their use in psychiatric disorders, research-restricting regulatory controls, and their recent emergence as potential breakthrough therapies for several brain-related disorders.These regulatory controls severely constrained development of psychedelic substances and their potential for clinical research in psychiatric disorders.Despite the limitations, there was continued research into brain mechanisms of action for psychedelic drugs with potential clinical applications which began during the 1990s and early 2000s.Finding pathways to accelerate clinical research in psychedelic drug development is supported by the growing body of research findings that are documented throughout this special issue of Neuropharmacology.Accumulated research to date suggests psychedelic drug assisted psychotherapy may emerge as a potential breakthrough treatment for several types of mental illnesses including depression, anxiety, post-traumatic stress disorder, and addiction that are refractory to current evidenced based therapies.This research equally shows promise in advancing the understanding of the brain, brain related functioning, and the consequential effects of untreated brain related diseases that have been implicated in causing and/or exacerbating numerous physical disease state conditions.The authors conclude that more must be done to effectively address mental illnesses and brain related diseases which have become so pervasive, destructive, and whose treatments are becoming increasingly resistant to current evidenced based therapies.This article is part of the Special Issue entitled ‘Psychedelics: New Doors, Altered Perceptions’.
which they attempt either to bite each other on the shoulders or mount one another; grasshopper mice either compete to bite each other on the nape of the neck or lick, groom and nuzzle the sides of their partner’s shoulders; and gray mouse lemurs compete to bite each other on the face, groom each others’ faces and upper bodies or mount one another.Detailed temporal and kinematic analyses of play fighting sequences in all these species show that there is no mixing within a sequence – a play fight starts and ends with attack and defense related to only one type of advantage.Once such a play fight is terminated, another involving competition for another advantage may commence.Thus, on a broad time scale over the entire duration of a play session or over successive sessions, play fighting involves behavior patterns from multiple behavior systems, but in the moment-to-moment moves and countermoves when engaged in a particular play fight, the animals do not mix behavior patterns from different behavior systems.Of course, the sequential pattern of an aggressive play fight being followed by a sexual play fight may still reflect a common play behavior system.Indeed, our hypothesized model provides a means for this level of mixing.The constituent play behavior systems may retain sufficient coherence within each system to maintain functional cohesion of the behavior patterns involved, so that aggressive and sex behavior patterns are not interspersed.Nevertheless, at the super-play behavior system level, the interspersed sequences of aggressive and sexual play may form part of a seamless session of play.Perhaps as dynamic imaging techniques become available that can track brain circuit activity in freely behaving animals, objective evidence may be obtained to determine if the participants perceive the overall interaction, one involving sequentially occurring aggressive and sexual play, as one continuous bout of play or as discrete encounters.In addition, more detailed studies are needed of species that engage in multiple forms of play to provide a comparative data set on the various ways in which behavior patterns or sequences of behavior patterns derived from different behavior systems can be juxtaposed.Many instances of what qualifies as being labeled play has the functional coherence in the organization of how its constituent behavior patterns are ordered to make it look like a behavior system as defined by Burghardt and Bowers,.However, given that most of the behavior patterns used in play are co-opted from other behavior systems, how play originated and how it has achieved the coherence of a behavior system is unresolved.Also, since there are multiple forms of play that in many lineages have evolved independently, it is unclear how, in some species, these may coalesce so that they are integrated together in coherent sequences of behavior.Finally, no existing theory provides an explanation for how novel behavior patterns - those that are not part of the repertoire of any of the behavior systems simulated during play - arise and become incorporated in the play of some species.The evolutionary-based hypothesis suggested in this paper provides an attempt to answer these questions and does so in a manner that can integrate the vast species differences that exist in the presence and content of play across the animal kingdom.Even though the ‘many to one’ hypothesis is coherent and can account for that variation, so may an alternative ‘play syndrome’ hypothesis.Empirical limitations in our knowledge 
about play do not yet permit the construction of formal competing models of how the components of a play behavior system may be organized. Nonetheless, thinking about play from a behavior systems perspective has yielded at least two viable hypotheses that may be useful in directing further empirical research.
Given that many behavior patterns cluster together in sequences that are organized to solve specific problems (e.g., foraging), a fruitful perspective within which to study behavior is as distinct ‘behavior systems’. Unlike many behavior systems that are widespread (e.g., anti-predator behavior, foraging, reproduction), behavior that can be categorized as playful is diverse, involves behavior patterns that are typically present in other behavior systems, is sporadic in its phylogenetic distribution and is relatively rare, suggesting that play is not a distinct behavior system. Yet the most striking and complex forms of play have an organizational integrity which suggests that they do constitute a behavior system. One model that we develop in this paper involves three stages of evolutionary transition to account for how the former can evolve into the latter. First, play-like behavior emerges from the incomplete development of other, functional behavior systems in some lineages. Second, in some of those lineages, the behavior patterns typical of particular behavior systems (e.g., foraging) are reorganized, leading to the evolution of specific ‘play behavior systems’. Third, some lineages that have independently evolved more than one such play behavior system coalesce these into a ‘super system’, allowing some animals to combine behavior patterns from different behavior systems during play. Alternative models are considered, but irrespective of the model, the overall message of this paper is that the conceptual framework of the behavior systems approach can provide new insights into the organization and diversity of play across the animal kingdom.
which was stored at 4 to 20 °C are heated to 70 °C for 20 minutes.These conditions will inactivate the cell growth while protecting the cell damage and protein denaturation.15 g of these microalgae was mixed with 1 liter of water to have the un-sheared standard sample.To prepare the sheared samples, same amount of this mixture is placed in a mechanical agitator and mixed well at different rotational speeds for 20 minutes.Rheological properties were studied by using MCR 102 Anton Paar rheometer with concentric cylinder geometry.Steady shear rate experiments were performed for samples with different concentrations obtained during the cell growth to determine the fluid behavior, by varying the shear rate and measuring the shear stress.The experiments were repeated for three times to ensure the repeatability.In measuring the geometrical characteristics, the following assumptions were made: The velocity at the walls of the double wall couette cup is zero, Settling is negligible as less than 1.25 Microalgae are not ruptured during rheometry based on micrographs of the cells before and after measurement.To study the effect of shear rate, the biomass sample during the rheology is examined for the microscopy.Since the rheometer does not posses the integrated microscopy facility, the sample after shearing at different shear rates, is placed immediately on the microscope for examination.It is assumed that the structural features non-equilibriate at least for some period of time."To calculate the power required for the mixing, the modified Reynold's number is calculated according to the equation 3 using the values of K and n listed in the Table 2.As the industrial bioreactors operate at a rotational speed of 10-200 rpm, a rotational speed of 60 rpm is considered for the present calculation.A vessel diameter of 1.7 m with a broth slurry height of 1.7 m are considered for the vessel design.The density of all the samples are 1015 kg per cubic meter.With the obtained modified Reynolds number, the power number is evaluated using equation 2 for various D/T and W/T values.The actual power requirements are calculated using the equation 4.During the analysis of growth curve of microalgae, the maximum absorbance was found at 680 nm wavelength.The data obtained from the spectrophotometer is detailed supplementary data with the file name wavelength_data.To indicate the optimum wavelength, the data obtained from spectrophotometer at different concentrations is plotted and presented in the supplementary data with the file name optimal_wavelength_plot.The absorbance of the culture was measured every day at regular time intervals for a period of 16 days.It is observed from the results that the growth phase was observed till 8th day followed by a stationary phase, as shown in Fig. 1.The change in the viscosity at a shear rate of 15 s−1 was measured for the broth during the course of the batch growth and is as shown in the Fig. 1.The Non-Newtonian behavior of the culture broth is evident from the first day of the growth as shown in Fig. 1.For a given shear rate, the shear stress increased with culture time.The growth of the microalgae is well represented by the sigmoidal growth curve and the growth, transition and plateau phases are apparent as shown in Fig. 
1.Cary 600 from Agilent technologies USA was used to measure the absorbance vs wave length in the range of 4000 and 800 cm−1.Table 1 shows the various functional groups that were present in the microalgae.The dried sample of microalgae showed the presence of amines, alkynes, cellulose, lipids, nucleic acids and polysaccharides.Biomass of microalgae solution is associated with the nutrients with various kinds of forces that forms the complex structure.Knowing the influence of shear on this complex fluid reveals the understanding of flow nature.Correlating the flow properties with the microscopy analysis gives clues of shear induced effects on these complex fluids.Fig. 2 is an example of such analysis where the flow properties and the micro structure of the fluid are compared for better understanding.Fig. 2 shows four different regimes of shear rates.The evolution of viscosity in all the regimes is evident to be different.At low shear rates, the fluid shows much of shear thinning in nature.This might be due to the breakage of hydrogen bonding associations among the cells, as the cells initially form clusters in the biomass.The microscopy image shown in Fig. 3a shows the micro structure of the regime where the shear rate ranges between 0.1 to 1 s−1.With an increase in shear rate from 1 to 10 s−1, the aggregations of cells forms loose clusters and obstruct the flow field.The microscopic picture in the Fig. 3b shows the cluster like micro structures.In this range between 1 to 10 s−1, the viscosity is still shear thinning as shown in Fig. 2, but this change is limited to an order of magnitude.These changes in viscosity are comparatively less when compared to the shear range between 0.1 to 1 s−1, though the qualitative nature of the fluid flow being shear thinning.In the shear rate range between 10 to 100 s−1, the deformation forces over come the attractive forces there by the cells align in the flow field showing no variations in viscosity, as shown in Fig. 2.The micro structure shown in Fig. 3c indicates the cell separation from cluster forming the individual cells.At higher shear rates, in the range of 100 to 1000 s−1, and the flow behavior becomes slightly shear thickening as shown in Fig. 2.This argument could be supported by the micro structure as shown in Fig.
It showed an exponential growth phase up to day 8, followed by a stationary phase from day 8 to day 15. The rheological properties of the microalgae biomass during growth were well described by a power-law model. Microscopic analysis showed the influence of shearing on the algal structure, from clusters to complete cell separation. The flow properties supported the microscopy analysis, showing shear-thickening behaviour at high shear rates and shear-thinning behaviour in the low-shear regime. The optimal power required for agitation of the biomass, accounting for the variations in non-Newtonian viscosity, was predicted by considering the vessel geometry.
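In the spirit of the agitation-power estimate described above for the power-law broth, the sketch below applies the generic Metzner–Otto route (apparent viscosity, modified Reynolds number, power-number correlation) to assumed values; it does not reproduce the paper's Eqs. 2–4, its measured K and n, or its D/T and W/T study.

```python
# Minimal sketch: impeller power draw for a power-law broth in a stirred vessel.
# All correlations and numbers are generic textbook assumptions.
K = 0.05          # consistency index, Pa.s^n (assumed)
n = 0.6           # flow behaviour index (assumed, shear-thinning)
rho = 1015.0      # broth density, kg/m^3
N = 60.0 / 60.0   # impeller speed, rev/s (60 rpm)
D = 0.57          # impeller diameter, m (assumed D/T = 1/3 for a 1.7 m vessel)
ks = 11.0         # Metzner-Otto constant (typical assumed value)

gamma_app = ks * N                       # apparent shear rate, 1/s
mu_app = K * gamma_app ** (n - 1.0)      # apparent viscosity of the power-law fluid, Pa.s
Re_mod = rho * N * D ** 2 / mu_app       # modified (power-law) Reynolds number

if Re_mod < 10.0:                        # laminar regime: Po inversely proportional to Re
    Po = 70.0 / Re_mod                   # assumed laminar power constant
else:                                    # turbulent regime: roughly constant power number
    Po = 5.0                             # assumed value for a Rushton-type impeller

P = Po * rho * N ** 3 * D ** 5           # impeller power draw, W
print(f"Re_mod = {Re_mod:.0f}, Po = {Po:.2f}, P = {P:.0f} W")
```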
the different tools only need to map some information already available on the filesystem or provided by the operating system in internal data structures.Differently from Mammut, likwid does not have any means to monitor remote architectures.This is an important feature due to the capillary diffusion of computing devices, like in IoT and Fog systems.Moreover, Mammut provides a flexible API, that can be used by the programmer to enhance his application by exploiting information about the underlying architecture.On the contrary, likwid was mainly designed for system administrators, since it provides a set of tools to be used from a command line interface.Despite an API has been later added to likwid, differently from Mammut, it is not object oriented.Providing an object oriented abstraction is of paramount importance, since it captures a model of the real world and leads to improved maintainability and understandability of the code written by the framework’s user .Mammut eases the development and rapid prototyping of algorithms and applications that need to operate on system’s knobs or to monitor the system they are running on.This allows the researchers to focus on the algorithm development while the management of such data is performed in an intuitive way by using the high-level API provided by Mammut, shortening the development time.Mammut has been used by researchers to optimise power consumption of parallel applications and to develop models for the prediction of power consumption and performance of parallel applications .In this context, Mammut has been used to operate on some system parameters and to correlate the effect of these parameters on the observed power consumption.This led to the design and development of efficient algorithms to dynamically adapt application’s power consumption to the varying workload conditions .Mammut also allowed researchers to improve the energy efficiency of Data Stream Processing applications by allowing the developers to easily increase or decrease the clock frequency of the CPU .Moreover, Mammut have been integrated into the Nornir framework.2,Nornir is a framework which can be used to enforce specific constraints in terms of performance and/or power consumption on parallel applications.More recently, it has been used in the RePhrase EU H2020 project3 as low-level runtime tool for collecting power consumption and other statistics of parallel applications .4,These information are used by the runtime system for deciding which architecture is most suited to execute a specific parallel application.Mammut has also been recently used to measure and optimise power consumption of query processing in web search engines .Moreover, it has been used to evaluate power consumption of parallel benchmarks for multicore architectures 5 and to properly allocate the benchmarks’ threads on the target architecture.OCaml and C bindings of Mammut have been recently implemented and released as open source6 by researchers at University of Orleans.Finally, Mammut has been selected as a power meter in the parallel runtime framework FastFlow,7 which targets heterogeneous multi-cores platforms.In FastFlow, Mammut provides information regarding power consumption of application parallelized by using parallel patterns.Given the short life of the project, the increasing interest it is receiving from different research communities implies the need of such a tool.Its simplicity and flexibility allows the users to exploit Mammut in different contexts, helping the programmer in building 
and optimising architecture-aware software. By using Mammut, developers can easily access and modify information provided by the hardware and the OS through an intuitive object-oriented interface, without dealing with portability issues when moving their code to a different system. Moreover, Mammut seamlessly allows the management of remote systems. We are currently planning to extend Mammut with other modules for the management of caches, memory, and Graphics Processing Units. In addition, we are considering the possibility of supporting machines running Windows operating systems.
Managing low-level architectural features to control performance and power consumption is a growing demand in the parallel computing community. However, most existing tools can only be used through a command-line interface and do not provide any API. Moreover, in most cases they only allow monitoring and managing the same machine on which the tools are used. MAMMUT provides and integrates architectural management utilities through a high-level, easy-to-use object-oriented interface. By using MAMMUT, it is possible to link together different pieces of collected information and to exploit them on both local and remote systems to build architecture-aware applications.
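To illustrate the kind of object-oriented, architecture-aware abstraction described above, the sketch below wraps the per-core frequency information that Linux already exposes on sysfs. It is emphatically not Mammut's actual C++ API; the class names and methods are hypothetical, and the sysfs paths follow the standard cpufreq layout, whose availability depends on the kernel and hardware.

```python
# Illustrative object-oriented wrapper over Linux cpufreq sysfs entries.
from pathlib import Path

class Core:
    def __init__(self, core_id: int):
        self._cpufreq = Path(f"/sys/devices/system/cpu/cpu{core_id}/cpufreq")
        self.core_id = core_id

    def current_frequency_khz(self) -> int:
        # scaling_cur_freq reports the core's current frequency in kHz.
        return int((self._cpufreq / "scaling_cur_freq").read_text())

class Topology:
    def cores(self):
        cpu_dirs = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"))
        return [Core(int(p.name[3:])) for p in cpu_dirs]

if __name__ == "__main__":
    for core in Topology().cores():
        print(core.core_id, core.current_frequency_khz(), "kHz")
```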
Melioidosis is a serious infection caused by the Gram-negative bacillus Burkholderia pseudomallei, found in soil and water.1,The reported incidence of human melioidosis is highest in northeast Thailand and northern Australia.2,3,Melioidosis also affects travellers to melioidosis-endemic regions of the world,4 which includes much of Asia, regions of South America, various Pacific and Indian Ocean islands, and some countries in Africa including Nigeria, Gambia, Kenya, and Uganda.1,First-line initial antimicrobial treatment is parenteral ceftazidime or a carbapenem drug for at least 10 days.5,Patients are then switched to oral antimicrobials for at least 12 weeks.This extended period of treatment compared with most other bacterial infections is needed to achieve cure and prevent recurrent infection,5 which has been reported to occur in 16% of cases within 10 years of the primary infection and has a case fatality rate of 24% in Thailand.6,The recommended oral antimicrobial regimen for melioidosis in Thailand is trimethoprim-sulfamethoxazole plus doxycycline.This recommendation is based on findings that this regimen is as effective as, and better tolerated than, the previously recommended regimen of TMP-SMX plus doxycycline and chloramphenicol.7,However, a quarter of patients with melioidosis given TMP-SMX plus doxycycline develop an adverse drug reaction.7,Such adverse reactions often results in a switch to second-line treatment, which is strongly associated with an increased risk of relapse.6,Findings from a descriptive 10 year cohort study done in Australia reported recurrent infection in less than 2% of patients who had oral treatment with TMP-SMX alone;8 TMP-SMX has since become the standard regimen in Australia3 and is occasionally used in Thailand.9,We proposed that TMP-SMX alone was an adequate treatment for melioidosis, and did a clinical trial to compare the efficacy and safety of TMP-SMX versus TMP-SMX plus doxycycline for the oral treatment phase of melioidosis.Between Oct 24, 2005, and Feb 1, 2010, we randomly assigned 626 patients with culture-confirmed melioidosis to receive either oral TMP-SMX plus placebo, or oral TMP-SMX plus doxycycline.Baseline characteristics were comparable between the two treatment groups.There were no missing data for baseline characteristics.Overall, 40 patients did not require parenteral antimicrobial treatment before enrolment, and 357 patients were deemed to need longer than 14 days of parenteral treatment before starting oral treatment.Most patients had a bodyweight between 40 kg and 60 kg and received 320 mg of TMP with 1600 mg of SMX twice daily.Follow-up was completed in Feb 21, 2011, 1 year after we enrolled the last patient.618 patients had at least one follow-up assessment.Median follow-up duration was 17 months in the TMP-SMX plus placebo group and 19 months in the TMP-SMX plus doxycycline group.Total duration of follow-up was 536 person-years in the TMP-SMX plus placebo group and 583 person-years in the TMP-SMX plus doxycycline group.We recorded no between-group difference in our primary analysis for culture-confirmed recurrent melioidosis.Non-inferiority of TMP-SMX plus placebo was shown because the upper bound of the 95% CI was below the pre-defined non-inferiority margin.The probability of having culture-confirmed recurrent melioidosis within 1 year of enrolment was 3% and within 3 years of enrolment was 10%.Of 37 culture-confirmed recurrent melioidosis cases, seven occurred during 20 weeks of oral treatment, 17 occurred during the first 
year of follow-up, four occurred between year 1 and year 2 of follow-up, and nine occurred after 2 years.Comparison of secondary endpoints in the two treatment groups is shown in table 2.The incidence of overall recurrent melioidosis and overall mortality was not different between the two treatment groups.Of 45 participants who died, 14 died during 20 weeks of oral treatment, 15 died during the first year of follow-up, six died between year 1 and year 2 of follow-up, and ten died after 2 years.Cause of death was culture-confirmed recurrent melioidosis, clinical recurrent melioidosis, unknown causes, and other diseases.No deaths were attributed to an adverse reaction to the study drugs.Overall, 516 patients received oral treatment for at least 12 weeks and 445 patients received oral treatment for at least 20 weeks.37 of 311 patients given TMP-SMX plus placebo and 59 of 315 patients given TMP-SMX plus doxycycline switched treatment to amoxicillin-clavulanic acid because of adverse drug reactions.Patients given TMP-SMX plus placebo had about a 40% lower chance of switching to the second-line regimen due to adverse drug reactions than those given TMP-SMX plus doxycycline.Six patients given TMP-SMX plus placebo and six patients given TMP-SMX plus doxycycline switched due to treatment failure.Analysis of Schoenfeld residuals showed that the HR for all outcomes were not variable over time with the exception of switching due to treatment failure.Of all patients who completed 20 weeks of the study drug, nine of 226 patients in the TMP-SMX plus placebo group and 12 of 218 patients in the TMP-SMX plus doxycycline group needed an extension of treatment beyond 20 weeks.The proportion of patients reporting adverse drug reactions was lower in the TMP-SMX plus placebo group than in the TMP-SMX plus doxycycline group.Common adverse drug reactions were allergic reactions and gastrointestinal disorders.Serious adverse events were reported in five patients given TMP-SMX plus placebo and eight patients given TMP-SMX plus doxycycline.These serious adverse events included Stevens-Johnson syndrome, severe hyponatraemia, and severe hyperkalaemia.We analysed 226 patients in the TMP-SMX plus placebo group and 218 patients in the TMP-SMX plus doxycycline group who completed 20 weeks of the study drug in a per-protocol analysis.Non-inferiority of TMP-SMX plus placebo for culture-confirmed recurrent melioidosis was also shown.We did bacterial genotyping for 29 of 37 patients with culture-confirmed recurrent melioidosis for whom paired isolates were available.14 recurrent cases were defined as relapse, and
Background Melioidosis, an infectious disease caused by the Gram-negative bacillus Burkholderia pseudomallei, is difficult to cure. The standard oral regimen based on trial evidence is trimethoprim-sulfamethoxazole (TMP-SMX) plus doxycycline. Findings We enrolled and randomly assigned 626 patients: 311 to TMP-SMX plus placebo and 315 to TMP-SMX plus doxycycline. 16 patients (5%) in the TMP-SMX plus placebo group and 21 patients (7%) in the TMP-SMX plus doxycycline group developed culture-confirmed recurrent melioidosis (HR 0.81; 95% CI 0.42-1.55).
and were the only feasible option to establish susceptibility in the five study sites.The estimated HR of discontinuation of the study drug due to treatment failure should be interpreted with caution because the Schoenfeld test provided weak evidence suggesting that the HR for this outcome might not be constant over time.In this study, we also assessed the efficacy of TMP-SMX over TMP-SMX plus doxycycline for culture-confirmed relapse.The results from a sensitivity analysis were very similar to the main analysis, except that the lower bound of the 95% CI for the HR for culture-confirmed relapse was slightly greater than the non-inferiority margin.This finding is mainly because the study was not powered to assess the non-inferiority based on this outcome in the sensitivity analysis.Therefore, we would suggest that these potential limitations are unlikely to have affected the conclusions of the study.Having established that TMP-SMX is preferable to TMP-SMX and doxycycline for the oral phase of melioidosis treatment, the next challenge is to establish the optimum duration of this regimen.We did this multicentre, double-blind, non-inferiority, randomised placebo-controlled trial in five hospitals in northeast Thailand: Sappasithiprasong Hospital, Srinagarind Hospital, Udon Thani Hospital, Mahasarakam Hospital, and Khon Kaen Hospital.We enrolled adult patients with culture-confirmed melioidosis who had been satisfactorily treated with parenteral antimicrobials, or who had mild localised disease that was not considered to need intravenous antimicrobial treatment by the attending physicians.We defined satisfactory clinical improvement from parenteral treatment as cessation of fever for at least 48 h and the ability to take oral drugs.We excluded patients if they were infected by B pseudomallei that was resistant to TMP-SMX or doxycycline, if their melioidosis infection was recurrent, or if they had a contraindication to either TMP-SMX or doxycycline.Resistance to doxycycline was determined by disc diffusion as an inhibition zone diameter ≤12 mm, which was modified from the Clinical and Laboratory Standards Institute breakpoint recommended for Enterobacteriaceae.10,11,Resistance to TMP-SMX was determined by Etest as a minimum inhibitory concentration of 4/76 mg/L or higher, which was modified from the CLSI breakpoint for B pseudomallei determined by broth dilution method.11,12,The trial was done in accordance with the principles of good clinical practice, and the ethical principles in the Declaration of Helsinki.The study protocol was approved by the local ethical committees and the institutional review boards of all participating hospitals.The study was reviewed by an independent data safety and monitoring board.All patients gave signed or fingerprinted informed consent before randomisation.This trial is registered with www.controlled-trials.com, number IRSCTN86140460.We randomly allocated patients in a 1:1 ratio to receive TMP-SMX with either placebo doxycycline or doxycycline, which were identical in appearance.Randomisation and masking was done at the coordinating centre at the Mahidol-Oxford Tropical Medicine Research Unit.The allocation sequence was computer generated with a block size of ten and was stratified by study site.To achieve treatment concealment, TMP-SMX and either placebo or doxycycline were dispensed into sequential, identical, tamper-proof bottles for each participant for 20 weeks.The study drug bottles were labelled with sequential code numbers and distributed to the study 
sites.TMP-SMX, placebo, and doxycycline were manufactured and provided by the Siam Pharmaceutical Company.All patients and study investigators were unaware of the drug allocation throughout the study.The randomisation codes remained sealed until after data collection, data cleaning, and completion of a masked analysis.All patients received TMP-SMX plus placebo or TMP-SMX plus doxycycline for a minimum of 20 weeks.TMP-SMX tablets were prescribed using a weight-based dosage, as follows: bodyweight less than 40 kg or estimated glomerular filtration rate 15–29 mL/min, 160 mg TMP and 800 mg SMX twice daily; bodyweight of 40 kg to 60 kg, 240 mg TMP and 1200 mg SMX twice daily; and bodyweight greater than 60 kg, 320 mg TMP and 1600 mg SMX twice daily.13,Doxycycline or placebo was prescribed as a 100 mg tablet to be taken twice daily.Patients were advised to repeat the dose if vomiting occurred within 30 min of their taking the tablet.The minimum duration of 20 weeks was based on a combination of current practice and available evidence.The recommended duration for oral antimicrobials is 12–20 weeks in Thailand,6 and 3–6 months in Australia,14 a discrepancy that shows the uncertainty about the optimum duration.Findings from a retrospective study in Thailand showed that treatment for longer than 12 weeks was associated with lower risk of relapse.6,We therefore chose to use 20 weeks as a minimum duration rather than 12 weeks or an empirically chosen point between the two.After enrolment, we followed patients up at weeks 4, 12, and 20 of oral treatment, every 4 months for 1 year after completion of treatment, and annually thereafter to the end of the study.Patients who did not attend scheduled appointments were contacted by telephone.The trial was designed to stop 1 year after the last participant was enrolled.At each clinical visit, we undertook a clinical examination and laboratory analyses, including complete blood count, blood sugar, blood urea nitrogen, creatinine, electrolyte, and liver function tests.Chest radiography and abdominal ultrasonography were done at enrolment, and repeated at weeks 12 and 20 if an abnormality was detected on the first test.We asked patients to bring the study drug bottles to follow-up visits, at which drug compliance was checked by pill counts.Treatment with the randomised drugs was extended beyond 20 weeks if clinically indicated because of evidence of residual infection, as decided by the treating physician.Concealed study drug bottles labelled with unique spare sequential code numbers were separately prepared for patients who needed treatment for more
This regimen is used in Thailand but is associated with side-effects and poor adherence by patients, and TMP-SMX alone is recommended in Australia.Methods For this multi-centre, double-blind, non-inferiority, randomised placebo-controlled trial, we enrolled patients (aged ≥15 years) from five centres in northeast Thailand with culture-confirmed melioidosis who had received a course of parenteral antimicrobial drugs.This study is registered with www.controlled-trials.com, number ISRCTN86140460.
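The weight-based TMP-SMX dosing rule described above translates directly into a small lookup; the sketch below is an illustration of the published rule (doses in mg of trimethoprim/sulfamethoxazole, taken twice daily), with the eGFR 15–29 mL/min criterion folded in. The function name is ours, and this is illustrative code, not clinical software.

```python
# Weight-based TMP-SMX dosing rule from the trial protocol, as an illustration only.
def tmp_smx_dose(weight_kg: float, egfr_ml_min: float) -> tuple:
    if weight_kg < 40 or 15 <= egfr_ml_min <= 29:
        return (160, 800)     # reduced dose: low bodyweight or impaired renal function
    if weight_kg <= 60:
        return (240, 1200)    # bodyweight 40-60 kg
    return (320, 1600)        # bodyweight above 60 kg

assert tmp_smx_dose(55, 90) == (240, 1200)
assert tmp_smx_dose(72, 20) == (160, 800)   # reduced dose because of low eGFR
```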
find model parameters for individuals.An individualised model can aid design and optimisation of layouts for individual differences and abilities.The model can be used to compare frequently and infrequently used objects.In the experiment reported here, all objects had the same frequency.However, the model predicts, for instance, that infrequently utilised visual objects are slower to find, compared to frequently used objects, due to smaller memory activation values which leads to a slower and less reliable recall of positional and feature information.This allows the investigation of more realistic use cases, where the designer can, for example, assess the optimal placement of an object, given the frequency that it is used.On the other hand, infrequently used objects should be placed such that they do not distract the user.However, when these objects are needed, it should be easy to find them even when the user cannot be trusted to remember the location or the features of the object.The model can help the designer balance these contradictory requirements.As discussed above, with regard to the VSTM decay parameter τ, a more realistic model of visual search would implement low- and high-level search strategies.An example of a low-level strategy is to prefer objects that are close to the currently fixated one.This minimises saccade time, which depends on the length of the saccade, as well as allows the model to encode close targets without a costly saccade.However, this strategy conflicts with our bottom-up saliency model, wherein the fixation is focused on the target with the most salient object unless guided by further top-down information.More detailed experimental and modelling work would be required to analyse how these two competing processes guide vision.An alternative search strategy is to group the layout into a relatively small number of internally consistent groups and search one group at a time.This helps to maintain a shorter VSTM load, as inhibition of return is broadened to groups instead of objects.However, more research is needed into how the grouping happens, how exactly the groups are stored in working memory and VSTM, and how the within-group search proceeds.The model was tested on a desktop computer.More work is needed to assess the effects that different display sizes may have.The model has no theoretical limitation that would prevent its application for mobile devices.What changes is the apparent size of elements, which has effects on the utility of peripheral vision.Nevertheless, a validation study with smaller devices should be carried out to confirm the applicability of the model in the mobile domain.Furthermore, our model does not simulate user interaction with the environment, such as pointing the mouse cursor or the finger at desired target objects.Such work would involve simulating a user, who is tasked with finding and acting on multiple layout elements in the correct order to accomplish a goal.However, it is clear that such a full model of long-term UI interaction will require a model of visual search and learning.To that end, one would need to implement both a pointing model and a task-control model.For example, the model could calculate the movement time from the current cursor location to the target by using a motor-control system similar to that in EPIC, then add this to the visual search time to simulate user performance.As the model already implements a utility-based control system, this system can be augmented to process task-specific instructions.More 
work is needed to make the model more readily actionable in design practice.Due to how the model builds an internal representation of its task interface, it is not possible to directly use the model to analyse images, such as screenshots of interfaces.However, an automated segmenter can be used to transform a UI screenshot into a model-readable file.A prototype version of such an automated segmenter, with the possibility of testing our model with it, is available.1,This allows the quick analysis of visual search times of novice users on any layout.More complicated analyses, such as tweaking object frequencies and setting the users’ expertise levels requires modifying the script files of the model.In the future the model could be integrated in design tools, similar to the concept of CogTool,We have demonstrated, with several examples, how the model can be used to aid in solving design problems related to visual search and layout learning.The model helps designers make decisions about element placement, the number of layout elements, and variations in features.More generally, the model assists designers by predicting: 1) visual search times and eye movements, given a layout and a user history; 2) changes in visual search times and eye movements as the user is exposed to a layout; and 3) adaptation to dynamically changing layouts.Our modelling approach is based on the principle of optimal adaptation: our model uses utility learning to find a rationally optimal behaviour, given its resources and the bounds of its architecture and the task environment.This principle frees the modeller from making assumptions about the low-level strategies, including having to potentially specify different strategies for different tasks and environments.The assumption of optimality, given the resources and limitations, allows for a clear framework where each component of the model can be described in terms of how it assists the agent to achieve the goal, and what bounds it imposes.We conclude that models exploiting reinforcement or utility learning under the idea of bounded rationality offer exciting avenues for applied modelling in HCI.
We present a computational model of visual search on graphical layouts.It assumes that the visual system is maximising expected utility when choosing where to fixate next.Three utility estimates are available for each visual search target: one by unguided perception only, and two, where perception is guided by long-term memory (location or visual feature).The system is adaptive, starting to rely more upon long-term memory when its estimates improve with experience.However, it needs to relapse back to perception-guided search if the layout changes.The model provides a tool for practitioners to evaluate how easy it is to find an item for a novice or an expert, and what happens if a layout is changed.The model suggests, for example, that (1) layouts that are visually homogeneous are harder to learn and more vulnerable to changes, (2) elements that are visually salient are easier to search and more robust to changes, and (3) moving a non-salient element far away from original location is particularly damaging.The model provided a good match with human data in a study with realistic graphical layouts.
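A minimal sketch of the utility-learning principle described above follows: three competing ways of guiding the next fixation each carry a utility estimate, updated by a simple delta rule with assumed payoffs and learning rate. It is a caricature of the model's adaptive reliance on memory and its relapse to perception-guided search after a layout change, not the model's actual implementation.

```python
# Toy utility learning over fixation-guidance strategies; all numbers assumed.
import random

guides = {"perception": 0.0, "memory_location": 0.0, "memory_feature": 0.0}
alpha = 0.2   # learning rate (assumed)

def choose_guide():
    # Greedy choice over current utility estimates, with random tie-breaking.
    best = max(guides.values())
    return random.choice([g for g, u in guides.items() if u == best])

def update(guide, reward):
    # Delta rule: move the estimate toward the payoff of the completed trial
    # (here, the negative of a stylised search time).
    guides[guide] += alpha * (reward - guides[guide])

for trial in range(60):
    g = choose_guide()
    layout_changed = trial >= 30          # the layout is rearranged at trial 30
    if g.startswith("memory"):
        reward = -1.0 if not layout_changed else -3.0   # stale memory hurts after a change
    else:
        reward = -2.0                                    # unguided search: slower but stable
    update(g, reward)

# After the change, the memory-guided estimates fall and perception-guided
# search becomes competitive again.
print(guides)
```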
average heat transfer coefficient goes up due to the temperature gradient increase between the base of heat sink and the airflow.Therefore, the thermal resistance is lower as it is inversely proportional to the surface area and coefficient of heat transfer, Eq.Thermal performance of plate-fin heat sinks with fillet profile subject to parallel flow and those without fillet profile subject to impinging has been compared.To achieve this, a CFD model for plate-fin heat sinks without fillet profile subject to impinging has been developed and validated with an experimental study from the literature.The obtained results demonstrated that the maximum difference between experimental data and numerical results were 12.4% and 8.8% for pressure drop and the thermal resistance respectively under same conditions.This proves the accuracy of the numerical analysis that has been developed in this study.In particular, three sets of simulations have been discussed, i.e. the effect of fillet profile on plate-fin heat sinks subject to impinging flow, the effect of flow direction on plate-fin heat sinks with fillet profile and the comparison of the proposed design and conventional design.The study has shown that adding a fillet profile and changing the flow direction have a notable effect on thermal performance of plate-fin heat sinks.Although, this was beyond the scope of present paper to demonstrate which one has superior effect, the primary results have shown that both parameters have approximately same effect on base temperature and thermal resistance of heat sinks.Furthermore, the numerical results of comparison between proposed design and conventional design shown that the base temperature of heat sink in proposed design decreases by 7.5% compared to the conventional model.Moreover, the thermal resistance for proposed design is 18% lower in comparison with the conventional design.Therefore, a notable improvement in the thermal performance of the heat sink was demonstrated that might help to develop more advanced cooling technologies for electronic equipment industry.
Many researchers have studied the thermal performance of heat sinks; however, to the best knowledge of the authors, the effect of flow direction (i.e. the placement of the fan) on the thermal performance of plate-fin heat sinks with a fillet profile has not yet been investigated.In this paper, a computational fluid dynamics (CFD) model is developed and validated through comparison with experimental data from the literature, which demonstrates the effect of flow direction and fillet profile on the thermal performance of plate-fin heat sinks.In particular, a plate-fin heat sink with a fillet profile subject to parallel flow has been compared with the conventional design (a plate-fin heat sink without a fillet profile subject to an impinging flow) and satisfactory results have been obtained.The results of this study show that both the base temperature and the thermal resistance of the heat sink are lower for the proposed design.Therefore, the developed approach has strong potential to be used to improve the thermal performance of heat sinks and hence to develop more advanced and effective cooling technologies.
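As a concrete reading of the inverse relation noted above, the short sketch below uses the standard textbook definitions R = 1/(hA) for convective thermal resistance and R = (T_base − T_inlet)/Q for the overall resistance; the numerical values are illustrative and are not taken from the study.

```python
def thermal_resistance_convective(h, area):
    """Convective thermal resistance (K/W): R = 1 / (h * A),
    with h in W/(m^2*K) and A in m^2."""
    return 1.0 / (h * area)

def thermal_resistance_measured(t_base, t_inlet, q_heat):
    """Overall thermal resistance (K/W) from the base and inlet temperatures
    and the dissipated power q_heat in W."""
    return (t_base - t_inlet) / q_heat

# Illustrative numbers only (not values from the paper): a larger surface area
# or a higher heat transfer coefficient lowers the thermal resistance.
baseline = thermal_resistance_convective(h=25.0, area=0.02)    # 2.00 K/W
improved = thermal_resistance_convective(h=32.0, area=0.025)   # 1.25 K/W
print(f"baseline R = {baseline:.2f} K/W, improved R = {improved:.2f} K/W")
```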
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler.Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training.This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours.We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage.In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.
We use deep RL to learn a policy that directs the search of a genetic algorithm to better optimize the execution cost of computation graphs, and show improved results on real-world TensorFlow graphs.
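The general idea of a policy-guided genetic search can be illustrated with the simplified, hypothetical Python sketch below; it is not the authors' system, and the toy cost function, the stub policy, and all identifiers are assumptions made purely for illustration (a trained neural policy would replace the stub).

```python
import random

# Toy setting: assign each of N graph nodes to one of D devices so that a
# surrogate execution cost is minimised. A stub policy scores which node to
# mutate; in the learned approach this role is played by a neural policy.

N_NODES, N_DEVICES = 12, 3

def cost(assignment):
    # Hypothetical surrogate cost: penalise load imbalance across devices plus
    # a fixed communication penalty whenever consecutive nodes are split.
    loads = [assignment.count(d) for d in range(N_DEVICES)]
    imbalance = max(loads) - min(loads)
    comm = sum(1 for a, b in zip(assignment, assignment[1:]) if a != b)
    return imbalance + 0.5 * comm

def policy_mutation_probs(assignment):
    # Stub for a learned policy: prefer mutating nodes that currently incur a
    # communication penalty. A trained model would output these scores.
    scores = [1.0] * N_NODES
    for i in range(1, N_NODES):
        if assignment[i] != assignment[i - 1]:
            scores[i] += 2.0
    total = sum(scores)
    return [s / total for s in scores]

def mutate(assignment):
    probs = policy_mutation_probs(assignment)
    i = random.choices(range(N_NODES), weights=probs, k=1)[0]
    child = list(assignment)
    child[i] = random.randrange(N_DEVICES)
    return child

def genetic_search(pop_size=20, generations=50):
    population = [[random.randrange(N_DEVICES) for _ in range(N_NODES)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]          # keep the fittest half
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return min(population, key=cost)

best = genetic_search()
print("best assignment:", best, "cost:", cost(best))
```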
an atypical member of the outgroup and that the outgroup did indeed hold contrasting values.Nonetheless, across the studies a picture emerges that imagining contact with an anti-normative outgroup member generates a positive situational construal that can also promote more positive responses to the outgroup as a whole.A great deal of research has demonstrated that group members react especially strongly toward others who oppose the norms of their own group.In particular, those who show disloyalty toward the ingroup are liable to be derogated whereas those who show disloyalty within the outgroup are liable to be praised.Moreover, such effects are stronger when the deviant is a full member of the group and when the group is less, rather than more, homogeneous.Differentiation between normative and deviant group members serves the function of sustaining ingroup identity by validating ingroup norms.The present research is consistent with these prior findings in showing that imagined contact with an anti-normative outgroup member has a positive impact on prejudice.A strength of the present studies is that they tested the effects of imagined contact across diverse settings, with multiple groups and in relation to a range of different outcome variables.This diversity helps to mitigate the possibility that the positive effects of imagined contact with an outgroup deviant are attributable to any other variable across the studies.A positive effect occurred regardless of whether imagined contact with an anti-normative outgroup member was compared with a no-contact control or with contact with an anti-normative ingroup member.Positive effects have been demonstrated in two different countries.A positive effect was found in academic and inter-religious intergroup contexts.Positive outcomes were observed on imagination construal and prejudice.Therefore, the results converge and provide confidence that imagined contact with an anti-normative outgroup member can have a particularly positive effect on intergroup relations.Studies 2 and 3 show that this effect occurs even when prior intergroup contact is accounted for.It could be argued that anti-normative outgroup members are rare and that imagining them may create false hopes or prospects of intergroup harmony.It is also the case that individuals who try to espouse anti-normative positions are very likely to be the target of criticism or even rejection within their own group – a difficult, lonely and perhaps dangerous position.Despite these obstacles there are reasons to be less pessimistic – after all, most groups tend to want to dominate the center ground and this implies that they include individuals who do and can have values or priorities that overlap with those of other groups.For example, finding individuals who are credible members of their ingroup but are open to seeing part of the other group's side is an important part of Kelman's problem-solving approach to conflict resolution.Moreover, the goal of imagined contact is to encourage openness to actual contact and the possibility of discovering a more positive route for intergroup relations.Recent research indicates that ‘hope’ should be seen as an essential asset, rather than a liability, in the case of intractable conflicts.The three studies reported in this paper show that imagined contact with an anti-normative outgroup member can reduce prejudice.Although prior research shows that intergroup contact has most positive effects if the outgroup member is typical, this has been operationalized as meaning
only that the person is stereotypically consistent.Yet, the present research shows that when groups are in direct conflict or comparison, imagined contact with a normative outgroup member does not have as strong an effect as imagined contact with an outgroup member who adopts an oppositionally deviant stance, and is thus highly atypical.This latter type of contact seems to create a psychological connection that can improve intergroup relations.This body of work therefore supports an important revision to a widely accepted conclusion from extant research on intergroup contact, i.e. that the most effective form of contact is with typical outgroup members.It also adds a new feature by making use of the unique and distinctive capacity to systematically vary the content of intergroup contact within imagined contact scenarios.By drawing on a different perspective, that of subjective group dynamics theory, and focusing on the implications of ingroup norm validation, the research has revealed a new approach in which imagined contact with clearly anti-normative outgroup members can play a powerful role.Strikingly, the evidence in this paper opens possibilities for using a novel strategy for promoting intergroup harmony.This strategy would not focus merely on finding ingroup exemplars and role models to promote positive attitudes to outgroups, but would also identify outgroup exemplars with something positive to offer to ingroup identity.Correspondingly, where there is scope to build positive intergroup relations, those seeking ways to approach outgroups for dialog or cooperation may find the task easier if they are able to draw outgroup members' attention to real or potential ingroup members who help to validate or reinforce some important outgroup norms.
Can imagining contact with anti-normative outgroup members be an effective tool for improving intergroup relations?Extant theories predict the greatest prejudice reduction following contact with typical outgroup members.In contrast, using subjective group dynamics theory, we predicted that imagining contact with anti-normative outgroup members can promote positive intergroup attitudes because these atypical members potentially reduce intergroup threat and reinforce ingroup norms.In Study 1 (N = 79), when contact was imagined with an anti-normative rather than a normative outgroup member, that member was viewed as less typical and the contact was less threatening.Studies 2 (N = 47) and 3 (N = 180) employed differing methods, measures and target groups, and controlled for the effects of direct contact.Both studies showed that imagined contact with anti-normative outgroup members promoted positive attitudes to the outgroup, relative both to a no-contact control condition and (in Study 3) to a condition involving imagined contact with an anti-normative ingroup member.Overall, this research offers new practical and theoretical approaches to prejudice reduction.
doxorubicin.A recent study confirmed the role of PSCs in radio-resistance by activating integrin-FAK signaling in tumor cells.CAFs represent the majority of the cellular compartment of the tumor microenvironment of PDACs.A study by Richards et al. showed that CAFs exposed to chemotherapy play an active role in the survival and proliferation of cancer cells.They also found that CAFs are intrinsically resistant to gemcitabine.Furthermore, exposure of CAFs to gemcitabine significantly increased the release of exosomes, a type of extracellular vesicle.These exosomes tended to increase the expression of SNAIL in recipient epithelial cells, thereby promoting proliferation and drug resistance.Importantly, treatment of gemcitabine-exposed CAFs with an inhibitor of exosome release, called GW4869, significantly reduced the survival of co-cultured epithelial cells, pointing to an important role of CAFs' exosomes in chemo-resistance.Part of the mechanism by which exosomes confer chemo-resistance in pancreatic cancer cells has been revealed and involves the upregulation of two ROS-detoxifying enzymes, superoxide dismutase 2 and catalase, and the miR-155-mediated downregulation of the gemcitabine-metabolizing enzyme DCK.Immune cells, such as tumor-associated macrophages (TAMs), can affect the response of tumor cells to chemotherapy through a process called environment-mediated drug resistance.A study by Amit et al. showed that TAMs can secrete the enzyme cytidine deaminase, which metabolizes gemcitabine into its inactive form, thereby leading to the survival of cancerous cells and favoring the emergence of chemo-resistant clones.Cytidine deaminase may have an unexpected origin, as it has recently been shown that intra-tumor bacteria, mainly belonging to Gammaproteobacteria, also express and secrete a bacterial form of this enzyme which is active on gemcitabine.Inflammation within the pancreatic tumor environment has been linked to chemo-resistance and tumor progression through the NFκB, IL6, Toll-like receptor and TGFβ pathways.PDAC is a compact solid tumor with reduced blood flow that leads to temporary or chronic hypoxia.Such hypoxic conditions stabilize HIF1A, which is known to participate in the resistance to chemotherapy and radiotherapy.Moreover, most chemotherapies also induce their toxicity through the generation of ROS.As the generation of ROS is strongly reduced in cells under hypoxia, the efficacy of such treatments is also reduced.Hypoxia can increase the expression of P-glycoprotein, the product of the multidrug resistance gene, which is involved in drug inactivation and consequently in drug resistance.EMT is a process in which epithelial tumor cells lose their epithelial markers, like E-cadherin, start expressing mesenchymal markers, like vimentin, and undergo cytoskeletal remodeling followed by loss of cell polarity and acquisition of an invasive phenotype, which aids the metastatic process.Recent studies suggest an important role of EMT in resistance to gemcitabine, 5-FU, and cisplatin, which can be reversed by silencing of ZEB1 in resistant PDAC cancer cell lines.Despite the fact that genetic mutations are responsible for tumor development, they can neither explain the phenomenon of resistance nor help to anticipate the response to a given chemotherapeutic drug.PDAC has a well-known set of mutations but, nevertheless, it displays an incredible inter-tumor and intra-tumor heterogeneity.This may explain why past attempts to target mutations to overcome resistance and to provide more efficient
cures for PDAC did not yield satisfactory results.Hence, it becomes obvious that other paths must be taken in order to solve the mystery of PDAC resistance and to design better treatments able to improve survival rates.The literature contains an increasing number of examples showing that resistance mechanisms are associated with a particular phenotype of the tumor, both at the tumor cell level and at the level of the associated microenvironment.Thus, deeper investigation of these resistance mechanisms will reveal new molecular pathways that could be targeted either by already available molecules or by new, specifically designed ones to ultimately improve the survival of PDAC patients.All authors listed have significantly contributed to the development and the writing of this article.This work was supported by La Ligue Contre le Cancer, INCa, Canceropole PACA, DGOS and INSERM.Mirna Swayden was supported by La Ligue Contre le Cancer and the Lebanese Ministry of the Interior and Municipalities.The authors declare no conflict of interest.No additional information is available for this paper.
Pancreatic Ductal Adenocarcinoma (PDAC) is one of the deadliest forms of cancer.A major reason for this situation is the fact that these tumors are already resistant, or rapidly become resistant, to all conventional therapies.Like any transformation process, the initiation and development of PDAC are driven by a well-known panel of genetic alterations; a few of them are shared with most cancers, but many mutations are specific to PDAC and are partially responsible for its great inter-tumor heterogeneity.Importantly, this knowledge has been ineffective in predicting the response to anticancer therapy, or in establishing diagnosis and prognosis.Hence, the pre-existing or rapidly acquired resistance of pancreatic cancer cells to therapeutic drugs relies on other parameters and features developed by the cells and/or the micro-environment that are independent of their genetic profiles.This review sheds light on all major phenotypic, non-genetic alterations known to play important roles in the resistance of PDAC cells to treatments and in therapeutic escape.
arithmetic abilities.Specifically, prediction of the outcome of tracking a single object across occlusion effectively consists of adherence to the principles that 0 + 1 = 1 and 1 − 1 = 0.In this respect, this account is consistent with the argument that awareness of object permanence develops from perception of object persistence across occlusion.We know that young infants’ perception of object persistence across occlusion is limited to short spatiotemporal gaps in perception, and it is likely that this same perceptually constrained process operates in Wynn’s task, such that young infants form a perceptual expectation about the persistence of an added object when the screen is lowered.The additional process revealed in the current work is that infants apparently track an object off the scene and form a perceptual expectation of its absence behind the screen, an expectation that is violated when it is revealed remaining in its original location.In summary, to our knowledge this is the first study to derive eye-tracking data from a task involving addition and subtraction of objects from a three-dimensional scene.The clearest results were obtained in the subtraction violation condition, where infants directed particular attention specifically to the object that should no longer be there.Selective attention of this sort is not predicted by a low-level account based on familiarity preference.However, the fact that there was no increase in looking to the object that was not subject to the subtraction operation does not support a symbolic numerical account, according to which detection of a numerical violation should lead to an increase in attention to both objects in the outcome scene.Our results are more closely in keeping with an object file account in which each object is tracked separately, such that attention is directed only to the object whose file is violated.We favor the view that processing at this level forms a precursor of symbolic numerical ability, which may well develop through the constructionist processes advanced by Cohen and Marks.
Investigating infants’ numerical ability is crucial to identifying the developmental origins of numeracy.Wynn (1992) claimed that 5-month-old infants understand addition and subtraction as indicated by longer looking at outcomes that violate numerical operations (i.e., 1 + 1 = 1 and 2 − 1 = 2).However, Wynn's claim was contentious, with others suggesting that her results might reflect a familiarity preference for the initial array or that they could be explained in terms of object tracking.To cast light on this controversy, Wynn's conditions were replicated with conventional looking time supplemented with eye-tracker data.In the incorrect outcome of 2 in a subtraction event (2 − 1 = 2), infants looked selectively at the incorrectly present object, a finding that is not predicted by an initial array preference account or a symbolic numerical account but that is consistent with a perceptual object tracking account.It appears that young infants can track at least one object over occlusion, and this may form the precursor of numerical ability.
The relevant data are provided in this article.See Table 1 and Figs. 1 and 2.The raw data files that were used in the analysis and interpretation are available at the Institute for Atherosclerosis Research, Skolkovo Innovative Center, Moscow Region, Russian Federation (http://inat.ru/).This dataset report is dedicated to mtDNA variants associated with asymptomatic atherosclerosis.These data were obtained using the method of next-generation pyrosequencing.The whole mitochondrial genome of the sample of patients from the Moscow region was analyzed.In this article the dataset of homoplasmic mtDNA variants in patients with atherosclerosis and healthy individuals from the Moscow region was presented.The materials for obtaining the data were leukocytes from the whole blood of 68 individuals: 31 subjects with carotid atherosclerosis and 37 control subjects without atherosclerosis were selected for the study.Selected individuals did not have severe clinical manifestations of atherosclerosis or oncological diseases.The number of patients with diabetes mellitus was minimized.To assess the state of the carotid artery wall, high-resolution B-mode ultrasonography was performed with a SonoScape SSI-1000 ultrasound scanner using a 7.5 MHz linear vascular probe.Values of the average carotid intima-media thickness (CIMT) were used to estimate the presence and severity of atherosclerotic plaques in the carotid arteries.Borderline age-related CIMT values for the Moscow region population were used to characterize the presence of carotid atherosclerosis.Individuals with an atherosclerotic plaque causing carotid artery stenosis of more than 20%, thickening of the intima-media layer exceeding the 75th percentile, or a combination of these factors were considered as belonging to the group of patients with atherosclerosis.Controls were characterized by CIMT values that did not exceed the median values for the appropriate age group, and by the absence of atherosclerotic plaques.The extraction of DNA from blood leukocytes was performed using methods developed earlier by us, on the basis of the methods published by Maniatis et al.Before sequencing, the mitochondrial genome was enriched by amplification of the whole mitochondrial genome using the REPLI-g Mitochondrial Kit.To carry out mtDNA sequencing, the Roche 454 GS Junior Titanium system was used.The sequencing workflow was performed according to the manufacturer's recommendations and using the appropriate instruments and reagents.Sequence analysis of mitochondrial DNA was carried out using the GS Reference Mapper software.The Cambridge reference sequence of the human mitochondrial genome was used for mapping.Statistical analysis of the obtained data was carried out using IBM SPSS Statistics v.21.0.We identified the 58 most common homoplasmic variants, characterized by a presence of more than 5% in the observed sample.Among them, 7 mtDNA variants were associated with the presence of atherosclerotic lesions of the carotid arteries and 16 mtDNA variants occurred more often in healthy individuals.
This dataset report is dedicated to mitochondrial genome variants associated with asymptomatic atherosclerosis.These data were obtained using the method of next generation pyrosequencing (NGPS).The whole mitochondrial genome of the sample of patients from the Moscow region was analyzed.In this article the dataset including anthropometric, biochemical and clinical parameters along with detected mtDNA variants in patients with carotid atherosclerosis and healthy individuals was presented.Among 58 of the most common homoplasmic mtDNA variants found in the observed sample, 7 variants occurred more often in patients with atherosclerosis and 16 variants occurred more often in healthy individuals.
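The reported comparison (variants present in more than 5% of the sample, contrasted between 31 patients and 37 controls) can be illustrated with the hypothetical Python sketch below; the original analysis was performed in IBM SPSS Statistics, so this is only an equivalent illustration, and the variant names and carrier counts are invented.

```python
from scipy.stats import fisher_exact

# Illustrative only: compare how often a homoplasmic mtDNA variant is carried
# by patients (n=31) versus controls (n=37). The counts below are invented;
# the original statistical analysis was done in IBM SPSS Statistics v.21.0.
N_PATIENTS, N_CONTROLS = 31, 37

example_counts = {
    "variant_A": (12, 4),   # (carriers among patients, carriers among controls)
    "variant_B": (2, 11),
}

for variant, (in_patients, in_controls) in example_counts.items():
    # Keep only variants present in >5% of the whole sample, as in the dataset.
    if (in_patients + in_controls) / (N_PATIENTS + N_CONTROLS) <= 0.05:
        continue
    table = [[in_patients, N_PATIENTS - in_patients],
             [in_controls, N_CONTROLS - in_controls]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"{variant}: OR={odds_ratio:.2f}, p={p_value:.3f}")
```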
X-ray diffraction pattern for the Ag30Li70 alloy, including the settings of the experimental run, followed by two columns with 2θ and the intensity normalized to Imax = 100.No zero-shift correction and no normalization were performed.The configuration of the diffractometer is Bragg–Brentano and the sample was polycrystalline.The source used was CuKα.Simulated X-ray diffraction pattern consisting of two columns with 2θ and intensity normalized to Imax = 100 for the γ-Ag4Li9 disordered phase.The simulated source used was CuKα.Simulated X-ray diffraction pattern consisting of two columns with 2θ and intensity normalized to Imax = 100 for the γ-Ag3Li10 disordered phase.The simulated source used was CuKα.Simulated X-ray diffraction pattern consisting of two columns with 2θ and intensity normalized to Imax = 100 for the β-Ag15Li49 phase.The simulated source used was CuKα.Calculated vibrational heat capacity at constant volume for temperatures below the melting point T < 500 K for the γ-Ag4Li9 disordered phase that was optimized using DFT.The melting point is not known with precision.Two columns with the data (T and Cv) are included.Calculated thermal linear expansion coefficient for the γ-Ag4Li9 disordered phase for temperatures below the melting point T < 500 K.The melting point is not known with precision.Two columns with the data (T and α × 10⁶) are included.Calculated enthalpies of formation, Hf, for several phases at 298 K. Three columns (compound, x, and Hf) are included.Calculated Gibbs energies of formation, Gf, for several phases at 298 K. Three columns (compound, x, and Gf) are included.Calculated enthalpies of formation, Hf, for several phases at 320 K. Three columns (compound, x, and Hf) are included.Calculated Gibbs energies of formation, Gf, for several phases at 320 K. Three columns (compound, x, and Gf) are included.Calculated enthalpies of formation, Hf, for several phases at 425 K. Three columns (compound, x, and Hf) are included.Calculated Gibbs energies of formation, Gf, for several phases at 425 K. Three columns (compound, x, and Gf) are included.Calculated enthalpies of formation, Hf, for several phases at 600 K. Three columns (compound, x, and Hf) are included.Calculated Gibbs energies of formation, Gf, for several phases at 600 K. Three columns (compound, x, and Gf) are included.Details on the Ag–Li phases' composition, structures and optimization methods, initial structure space group, and the method used to obtain the final optimized structure.The Ag30Li70 sample was prepared as described in Ref.The XRD data were obtained from 10 to 90° with a Bragg–Brentano configuration for polycrystalline samples with a wavelength of λ = 0.1542 nm, which is, in fact, an average of two closely spaced peaks.The theoretical background in Ref. explains the calculations of the thermodynamic data included in this database; the theoretical principles were used as implemented in VASP, MT and Phonon.Each phase was optimized from a structure that was obtained using random substitution, special quasirandom structures (SQSs), or substitutional search, depending on the type of structure.Since SQSs mimic well the local atomic structure of the random alloy, their electronic properties, calculable via first-principles techniques, provide a representation of the electronic structure of the alloy.Table 1 shows the stoichiometry of each compound, the initial space group, and the method used for obtaining the compounds whose thermodynamic data is included in the dataset associated with this work.
The Ag–Li system was analysed using first-principles calculations (10.1016/j.jallcom.2019.152811) [1].The method used density functional theory to optimize the crystal structures of the phases constituting the binary phase diagram by relaxing atomic positions, volume, and shape.The optimized structures were subsequently used to calculate thermodynamic properties at different temperatures; by determining the zero-point energy, the vibrational internal energy, and the entropy, the heat capacity at constant volume was obtained, as well as the phases' stability limits.Furthermore, the optimized structures were used to calculate the XRD patterns and to compare them with experimental data.All the reported data are now accessible to researchers and industrial users who need to work with binary and higher-order systems that include Ag and Li, for example for energy storage.Binary systems should be well assessed prior to constructing higher-order phase diagrams, which adds to the usefulness of these data.
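As an illustration of the two-column file format described above (2θ and intensity normalized to Imax = 100), the short Python sketch below rescales a raw pattern; the example angles and counts are hypothetical, not data from the study.

```python
import numpy as np

def normalize_xrd(two_theta, intensity):
    """Scale intensities so the strongest peak equals 100 (Imax = 100)."""
    intensity = np.asarray(intensity, dtype=float)
    return np.asarray(two_theta, dtype=float), 100.0 * intensity / intensity.max()

# Hypothetical raw pattern (2θ in degrees, arbitrary counts), not measured data.
two_theta = [10.0, 25.3, 38.2, 41.5, 60.1, 90.0]
counts = [120, 340, 2150, 980, 410, 95]

tt, i_norm = normalize_xrd(two_theta, counts)
for t, i in zip(tt, i_norm):
    print(f"{t:6.2f}  {i:7.2f}")
# Writing np.column_stack([tt, i_norm]) to a text file reproduces the
# two-column layout used in the dataset.
```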
test.The underlying assumption is that temperature-dependent stiffness variations of the elastomers inside the mount play a central role in service ageing.From the observations in this study, it can be inferred that pure thermal effects would change the elastic modulus of the elastomers inside the engine mount uniformly.Comparisons of the maximum principal strain pattern between Figs. 5 and 8 indicate that accelerated ageing tests with pure thermal effects do not correctly simulate the real ageing conditions experienced by service-aged engine mounts.Due to the practical difficulty of obtaining a large number of engine mounts, especially from used vehicles, the results reported in this work are restricted to four samples representing four different ageing conditions.A larger number of tests would be needed to confirm the trend found in this study, especially for engine mounts with high mileage.Uncertainties in the measurements are also difficult to quantify through repeated tests, as it is almost impossible to retrieve engine mounts with exactly the same ageing conditions.All the aged mounts were indeed disassembled from real vehicles with ageing conditions likely to differ in terms of time in service and environment.However, the DIC approach developed in this study has demonstrated that strain field measurements carried out for a limited number of retrieved engine mounts can provide new insight into their ageing mechanisms.A new experimental procedure has been successfully developed to study the ageing of engine mounts under real and practical conditions by analysing the evolution of the geometry of the elastomeric main spring and its internal strain distribution.The DIC observations have provided new insight into identifying the ageing mechanics of a service-aged engine mount.Specifically, the laboratory tests of accelerated thermal ageing samples indicate that the material ageing of the main spring is uniform and homogeneous and, therefore, can be represented using a nonlinear elastic modulus relationship.The DIC observations demonstrate that the strain distribution varies with ageing mileage.This new result suggests that the stiffening is caused non-uniformly.A secondary factor that was observed and thought to contribute, at least in part, was the development of additional contact areas.Meanwhile, the origin of the softening of the engine mount at very high mileage is still not clear.However, the experimental observations in the paper suggest that it may be caused by a combined effect of two material ageing phenomena: creep deformation of the load-carrying region and micro-structural change.When the service mileage of an engine mount is high, the internal strain distribution indicates that the number of high-strain regions increases.This observation indicates that micro-structural change is highly likely to have occurred, in which case it would contribute to a reduction in the overall stiffness of the mount.To conclude, the results show that chemical creep deformation and the inhomogeneous elasticity of the main spring are of great importance in designing the overall life cycle of an engine mount.The classic thermal accelerated ageing test method has been shown to fail in predicting these features.As a result, it is suggested that a high-temperature fatigue testing method is needed to simulate service ageing.This is an area for future work.The raw and processed data required to reproduce these findings cannot be shared at this time due to technical limitations related to file size for DIC
data sets.Data is available upon request.
In general, understanding ageing-dependent stiffness is important for life cycle design.In this paper, a new experimental procedure is developed to study the ageing mechanisms of service-aged engine mounts using digital image correlation measurements.The present contribution demonstrates that the leading factors for ageing-dependent stiffness are not only the elastic modulus variation but also creep deformation and micro-structural change.The results show that pure thermal effects, such as those used to simulate ageing, lead to a uniform change in the rubber component inside the mount.This is not the same as the service-aged mount behaviour.In addition, the cross-sectional creep deformation dominates the increase in rigidity.Finally, the results suggest that micro-structural change may also lead to the stiffness variation of mounts with a high working mileage.
Tables 1 and 2 show data obtained through the literature review and calculated using the method described.Table 1 shows the fluoride concentration in water supplies in different provinces of Iran.Table 2 lists the AMMT data by province and the calculated optimal fluoride concentrations in their respective water supply systems.The results in Table 1 show that the reported fluoride concentrations of most provinces are less than the calculated values reported in Table 2.However, some provinces, such as Chaharmahal and Bakhtiari, Qom, Hormozgān, Isfahan, and Khorasan Razavi, have fluoride concentrations higher than the values calculated using the standard formula and AMMT data.Fig. 1 shows the comparison of the calculated fluoride concentrations in drinking water for various provinces of Iran, as well as the values reported in the literature, against the allowable concentration level according to the WHO guideline.The minimum allowable concentration of fluoride is represented by the green line in Fig. 2, which also reveals that most of the selected provinces meet the stipulated guideline, except for Alborz, Khuzestan, and Hormozgan.The fluoride concentrations for these provinces were found to be less than 0.7 mg/L.In this study, fluoride concentrations in drinking water and ambient temperatures for selected Iranian provinces were obtained through a literature search and publicly available data.The databases used in the literature search mainly included PubMed, ScienceDirect, IranMedex, and SID, covering 1990 to 2016, as well as original research articles that reported fluoride concentrations in drinking water.Monthly maximum ambient temperature data were then obtained for the selected provinces from a popular website that provides records of ambient air temperatures.Published concentrations of fluoride in drinking water were found for 31 provinces of Iran.Articles published in both Persian and English were used in this research.Data categorization and subgroup analysis were carried out to decrease the impact of confounding factors such as consumption of fluoride-containing supplements that can affect the fluoride concentrations in drinking water.According to epidemiologists, ambient temperature is considered to be the most significant factor affecting the fluoride concentration in drinking water.Therefore, categorization was based firstly on the province being studied and secondly on the fluoride concentration in drinking water.According to other studies, factors such as exposure time to fluoride in drinking water and any exposure to fluoride are not relevant to this study, and hence these factors were not considered.The collected temperatures are reported in degrees Celsius.The minimum, maximum, and average values of the fluoride concentrations in the drinking water from various provinces of Iran are presented in Table 1.Using the AMMT data, the optimal fluoride concentrations in drinking water for the selected Iranian provinces were calculated.They are reported in Table 2.
Fluoride concentrations in drinking water were analyzed relative to air temperature data collected in different provinces of Iran.Determining suitable concentrations of fluoride in drinking water is crucial for communities because of the health effects of fluoride on humans.This study analyzed fluoride concentrations in drinking water from selected Iranian provinces.The data were derived mainly from a detailed literature review.The annual mean maximum temperatures (AMMTs) were collected from a popular website that maintains records of daily ambient temperature measurements for the last five years (2012–2016).Using the regional ambient temperatures, the optimal value of fluoride in drinking water for each province was calculated with the Galagan and Vermillion formula.The optimal fluoride concentrations in drinking water for the different Iranian regions were calculated to be 0.64–1.04 mg F/L.Most of the selected provinces were found to have acceptable concentrations of fluoride, except for Alborz, Khuzestan, and Hormozgan, which reported concentrations of 0.66, 0.66, and 0.64 mg/L, respectively.
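For illustration, the sketch below applies a commonly cited form of the Galagan and Vermillion relation, in which daily water intake E (oz per lb of body weight) is estimated from the annual mean maximum temperature in °F and the optimal fluoride level is taken as 0.34/E; the constants and the example temperatures are assumptions that should be checked against the original reference before reuse.

```python
def optimal_fluoride_mg_per_l(ammt_celsius):
    """Estimate the optimal fluoride concentration in drinking water (mg/L)
    from the annual mean maximum temperature, using a commonly cited form of
    the Galagan and Vermillion relation (assumed here; verify before use)."""
    temp_f = ammt_celsius * 9.0 / 5.0 + 32.0        # convert degC to degF
    water_intake = -0.038 + 0.0062 * temp_f          # E, oz per lb body weight
    return 0.34 / water_intake

# Illustrative temperatures only (not the AMMT values from Table 2).
for t_c in (18.0, 25.0, 32.0):
    print(f"AMMT {t_c:.0f} degC -> optimal F ~ {optimal_fluoride_mg_per_l(t_c):.2f} mg/L")
```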
erythroblasts, the isolated CD34+ haematopoietic stem cells from adult peripheral blood were divided into 2 groups.Aliquots of approximately 10⁴ CD34+ cells were maintained in the media of a 3-stage erythroid culture system with or without heparin.The numbers of erythroid cells were evaluated every other day.There was no difference in the number of cells between the group maintained in the media without heparin and the group maintained with heparin.Next, the effect of insulin on the proliferation of erythroblasts was studied.The isolated CD34+ haematopoietic stem cells from adult peripheral blood were divided into 2 groups as in the previous experiment.The CD34+ haematopoietic stem cells were maintained in the media with or without insulin.Similar to the previous experiment, there was no difference in the number of cells between the two groups at any time point.Based on the observations in Figs. 1 and 2, a further experiment was then performed to determine whether both heparin and insulin are required for the erythroid culture system.The isolated CD34+ haematopoietic stem cells were divided into 2 groups.The first group was maintained in the media with heparin and insulin, whereas the other group was maintained in the media lacking both insulin and heparin.The cultures were taken through the 3-stage erythroid culture system with the number of cells evaluated every other day.There was no difference in the number of cells between the two groups.The morphology of the cultured cells was analyzed by cytospin and Leishman staining throughout the culture.There was no significant difference in the morphology of cells between the two groups, with similar percentages of enucleation obtained on day 20: 90.27% ± 1.12% for the control group and 89.70% ± 1.21% for the group maintained in the media without insulin and heparin.Several culture systems have been established for the generation of red blood cells in vitro.These culture systems require a number of cytokines and growth factors, which results in a high cost of production.Some factors used in erythroid culture systems are common among these cultures, including stem cell factor, erythropoietin and transferrin, which indicates that they are necessary for erythroid differentiation.However, some factors, such as insulin and heparin, are used only in some culture systems, and these two factors are present in our culture media.As we are using this culture to study haematopoiesis, due to its ability to achieve an enucleation rate of up to 95%, we aimed to optimize it to reduce the cost.Interestingly, we have observed in this study that heparin and insulin, which have been believed to promote erythroid proliferation and maturation, are not required for our culture system.Therefore, they can be omitted if this culture system is selected for erythroid differentiation from adult haematopoietic stem cells.Since haematopoietic stem cells from different individuals vary in their proliferation capacity, the numbers of cells obtained from different experiments in this study were slightly different.However, within the same experiment we show comparable expansion between different conditions when using haematopoietic stem cells from the same individuals.This information could then be useful for the development of in vitro erythroid culture systems, because the high cost of production is one of the limitations of this process.
In vitro generation of red blood cells has become a goal for scientists globally.Most directly, in vitro-generated red blood cells (RBCs) may close the gap between the blood supply obtained through blood donation and the high demand for therapeutic uses.In addition, the cells obtained can be used as a model for haematologic disorders, allowing the study of their pathophysiology and the discovery of novel treatments.For those reasons, a number of RBC culture systems have been established and shown to be successful; however, the cost of each millilitre of packed RBCs is still extremely high.In order to reduce the cost, we aimed to see whether we could reduce the number of factors used in the existing culture system.In this study, we examined how well haematopoietic stem cells proliferate and differentiate into mature red blood cells in a modified culture system.The absence of extra heparin or insulin, or both, from the erythroid differentiation media did not affect haematopoietic stem cell proliferation and differentiation.Therefore, we show that the cost and complexity of erythroid culture can be reduced, which may improve the feasibility of in vitro generation of red blood cells.