text (stringlengths 330–20.7k) | summary (stringlengths 3–5.31k)
---|---
The growing resistance of pathogens to standard antimicrobial treatments increases morbidity and mortality worldwide. Many important pathogens are resistant to clinically important classes of antibacterial agents. Therefore, the design of new compounds, such as ionic liquids (ILs) that show antibacterial activity against Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa and Enterococcus faecalis, is of great importance. ILs are generally referred to as “green solvents” due to their low toxicity, low vapor pressure and remarkable chemical stability. ILs offer a great range of cation–anion combinations, which provides flexibility in their chemical structure. They possess substituted nitrogen- or phosphorus-containing cations and anions such as bromide, chloride, hexafluorophosphate, bissulfonimide and tetrafluoroborate. The polarization and ionization properties of these aromatic compounds enhance their pharmacokinetic features, improving their solubility and bioavailability. In this way, imidazole derivatives show pharmacological activities such as anticancer, antioxidant, anti-inflammatory, antiviral, antifungal and antineoplastic effects, as well as antimicrobial activity. It has been observed that some imidazole-based drugs can damage the membrane surfaces of pathogenic microorganisms, especially when used at high concentrations for a short time. They can interact directly with the lipid bilayer in the outer membranes of the microorganisms and increase their cell membrane permeability. This affects the membrane structure of the bacterial cell and reduces its resistance capacity by making it difficult to repair the membrane damage. Moreover, these cationic compounds interrupt the synthesis of the microorganism's DNA or RNA and cause the release of metal ions, thus inhibiting the activities of certain enzymes in bacterial cells. Recently, a strong relationship was found between the toxicity of imidazolium-based ILs and the alkyl side-chain length and cation ring planarity. Zheng et al. synthesized imidazolium-type ionic liquid membranes and investigated the effect of chemical structure, including the carbon chain length of the substituent and the charge density of the cations. In some ILs, anion-induced toxicity, caused by the relationship between lipophilicity and the number of fluorine atoms, was observed; their toxicity towards prokaryotic cells can be elucidated in this way. The increase in the lipophilic character of ILs with increasing alkyl chain length could be explained by the fact that incorporation of ILs into biological membranes may disrupt membrane proteins. Bacteria generally exist not only as free-floating planktonic organisms but also in biofilms. A current definition of biofilm, proposed by Rodney M. Donlan and J. William Costerton, is as follows: a biofilm is a microbially derived sessile community characterized by cells that are irreversibly attached to a substratum, an interface or each other. The cells are embedded in a matrix of extracellular polymeric substances that they produce, and they exhibit an altered phenotype with respect to growth rate and gene transcription. This extracellular matrix can slow the diffusion of biocides and antibiotics, or can even act as a barrier due to its high viscosity. The biofilm is a well-developed communication system, which allows the microorganisms to regulate their growth and metabolism. Biofilm formations are quite different from the planktonic forms of the same organisms. Eradication of biofilms of E. coli, P. aeruginosa and S.
aureus was demonstrated by Ceri et al. Compared to planktonic cells of the same organism, eradication of biofilms requires 1000-fold higher concentrations of certain antibiotics. Biofilms have been found to play an important role in the distribution of microbial diseases in the body. Eight percent of microbial infectious diseases in humans, such as periodontitis, endocarditis and chronic cystic fibrosis lung disease, are well known to be caused by biofilms. On the other hand, biofilm-forming microorganisms tend to establish themselves on biotic or abiotic surfaces, and thereafter on surgical instruments, since exopolysaccharide glycocalyces provide a confluent protective biofilm. Biofilm formation in infectious diseases causes serious problems in treatment, and imidazolium salts, with their antimicrobial activities, can play a role in preventing biofilm formation. In this study, the antibacterial and antibiofilm activities of water-soluble imidazolium derivatives bearing alkyl chains of different lengths were evaluated. MIC values and antibiofilm properties of the synthesized compounds were determined against Gram-positive and Gram-negative bacterial strains. The lipophilicity of the newly synthesized compounds was calculated theoretically using ACD ChemSketch software. The logP values obtained for the NIM-Br imidazolium derivatives are 7.80 ± 0.64, 8.56 ± 0.65 and 9.92 ± 0.64, respectively. These values clearly increase proportionally with increasing alkyl chain length. The lipophilicity value of the ITFSI compound was found to be 3.88 ± 0.9. The MIC values determined for the compounds and for gentamicin on different bacteria are presented in Table 1. Control-group MIC values of gentamicin are 0.12–1 μg/ml for Staphylococcus aureus ATCC 29213, 0.25–1 μg/ml for Escherichia coli ATCC 25922, 0.5–2 μg/ml for Pseudomonas aeruginosa ATCC 27853 and 4–16 μg/ml for Enterococcus faecalis ATCC 29212, according to the Clinical and Laboratory Standards Institute. Our results showed that DMSO was inactive against the bacteria at the concentrations used. The results of the microdilution tests, shown in Table 1, indicate that the NIM-Br imidazolium derivatives and the imidazolium-TFSI salt exhibited antibacterial effects against Gram-positive and Gram-negative bacteria. Compound 1b, which demonstrates the highest inhibitory effect against these strains except P. aeruginosa, has the lowest MIC value. It also shows good antimicrobial activity against E. faecalis when compared to the commercial antimicrobial agent. The compound demonstrated higher antimicrobial activity against the Gram-positive strains than the Gram-negative strains. The result for E. coli is also of great interest, considering that Gram-negative strains are generally less responsive to antimicrobial agents due to their outer membrane, which acts as an additional barrier for the bacterial cell. Taking these results into account, inferences may be drawn about the antimicrobial activity of NIM-Br imidazolium derivatives | A series of imidazolium bromide salts (NIM-Br 1a, 1b and 1c) bearing alkyl chains of different lengths were synthesized and their in vitro antibacterial activities were determined by measuring the minimum inhibitory concentration (MIC) values for Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa and Enterococcus faecalis. All compounds were found to be effective against Gram-positive and Gram-negative bacteria, and also more effective against S.
aureus biofilm production than the others. |
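The logP-versus-chain-length trend reported above can be reproduced qualitatively with open tooling. The study used ACD ChemSketch; the sketch below instead uses RDKit's Crippen estimator, so the absolute values will differ from the reported 7.80–9.92 range, and the SMILES strings are illustrative imidazolium-bromide structures, not the paper's NIM-Br compounds.

```python
from rdkit import Chem
from rdkit.Chem import Crippen

# Hypothetical 1-(naphthylmethyl)-3-alkylimidazolium bromides; the chain
# lengths are chosen only to illustrate the logP-vs-chain-length trend.
smiles = {
    "C8 chain":  "CCCCCCCCn1cc[n+](Cc2cccc3ccccc23)c1.[Br-]",
    "C10 chain": "CCCCCCCCCCn1cc[n+](Cc2cccc3ccccc23)c1.[Br-]",
    "C12 chain": "CCCCCCCCCCCCn1cc[n+](Cc2cccc3ccccc23)c1.[Br-]",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    # Crippen logP rises monotonically with the alkyl chain length.
    print(name, round(Crippen.MolLogP(mol), 2))
```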
converted to acetate during the fermentation process. Similar to the spectra of samples taken from the MEC fed with raw urine, the acetate peak decreased over time, evidencing its consumption. Furthermore, and contrary to the raw urine, propionate was identified in the fermented urine, demonstrating the success of the fermentation process. According to the NMR spectra, the propionate peak also decreased, demonstrating its consumption during operation. Electricity production from propionate oxidation in an MFC was previously reported by Jang and co-workers. Anaerobic digestion was implemented to convert the fermentable compounds commonly present in urine into simple compounds, namely VFAs, which are directly converted to electricity by the EAB. Therefore, the anodic performance of an MEC was effectively improved through the application of fermented urine, obtained after anaerobic digestion, in comparison to the use of raw urine. The efficiency of the anaerobic digestion process was evaluated through 1H NMR analysis, which showed acetate, propionate and methylamine as the major compounds in fermented urine. In contrast, the main compounds in raw urine were urea and creatinine, but also acetate. The MEC fed with fermented urine produced a higher current density and demonstrated a higher CE and a higher COD removal rate. Moreover, the higher current in the MEC using fermented urine allowed higher NH4+-N removal. Consequently, it can be concluded that the integration of anaerobic digestion of urine with an MEC in a two-stage operation is an effective option for the treatment of urine using BES. This provides insight into a methodology to effectively treat effluents loaded with complex substrates to obtain a polished effluent and enhance the power production of BES. | This study investigated the effect of pre-fermented urine on the anode performance of two-chambered microbial electrolysis cells (MECs) compared to raw urine. Pre-fermentation of urine was performed by anaerobic digestion. The effect of this pre-fermentation on the anode performance of an MEC was assessed by measuring the removal of chemical oxygen demand (COD), the current density and the Coulombic efficiency (CE). The MEC using fermented urine achieved a higher average current density (218 ± 6 mA m−2) and a higher CE (17%). Although no significant differences were observed in the COD removal efficiency between the two urines, the MEC using fermented urine displayed the highest COD removal rate (0.14 ± 0.02 g L−1 d−1). The organic compounds initially found in both urines, as well as the metabolic products associated with the biodegradation of the organic matter, were analyzed by proton nuclear magnetic resonance (1H NMR). The main compounds initially identified in the raw urine were urea, creatinine and acetate. In the fermented urine, the main compounds identified were methylamine, acetate and propionic acid, demonstrating the effectiveness of the anaerobic fermentation step. |
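Coulombic efficiency in BES studies of this kind is conventionally computed from the integrated current and the COD removed. Below is a minimal sketch of that standard calculation, CE = M·∫I dt / (F·b·V·ΔCOD) with M = 32 g O2/mol and b = 4 e− per mole O2; the sampled currents, volume and COD figures are placeholders, not data from this study.

```python
import numpy as np

F = 96485.0   # Faraday constant, C per mol e-
M_O2 = 32.0   # g of O2 per mol
B = 4.0       # mol e- transferred per mol of O2

def coulombic_efficiency(t_s, i_a, v_anolyte_l, delta_cod_g_per_l):
    """CE = M * integral(I dt) / (F * b * V * dCOD), the usual MFC/MEC form."""
    charge = np.trapz(i_a, t_s)  # coulombs passed over the batch
    return M_O2 * charge / (F * B * v_anolyte_l * delta_cod_g_per_l)

# Placeholder batch: 48 h at ~0.8 mA in a 0.1 L anode, 0.3 g/L COD removed.
t = np.linspace(0, 48 * 3600, 200)
i = np.full_like(t, 0.8e-3)
print(f"CE = {100 * coulombic_efficiency(t, i, 0.1, 0.3):.1f} %")
```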
Learning representations of data is an important issue in machine learning. Though GANs have led to significant improvements in data representations, they still have several problems, such as unstable training, a hidden manifold of the data, and huge computational overhead. A GAN tends to produce data without any information about the manifold of the data, which prevents control of the desired features to generate. Moreover, most GANs have a large latent manifold, resulting in poor scalability. In this paper, we propose a novel GAN to control the latent semantic representation, called LSC-GAN, which allows us to produce the desired data and learns a representation of the data efficiently. Unlike conventional GAN models with a hidden distribution of the latent space, we explicitly define the distributions in advance, and the model is trained to generate data with the corresponding features when fed latent variables that follow those distributions. As the larger scale of the latent space caused by deploying various distributions in one latent space makes training unstable while maintaining the dimension of the latent space, we need to separate the process of explicitly defining the distributions from the generation operation. We show that a VAE is suitable for the former, and we modify the loss function of the VAE to map the data into the pre-defined latent space so as to locate the reconstructed data as close as possible to the input data according to its characteristics. Moreover, we add the KL divergence to the loss function of LSC-GAN to include this process. The decoder of the VAE, which generates data with the corresponding features from the pre-defined latent space, is used as the generator of the LSC-GAN. Several experiments on the CelebA dataset are conducted to verify the usefulness of the proposed method for generating desired data stably and efficiently, achieving a high compression ratio that can hold about 24 pixels of information in each dimension of the latent space. Besides, our model learns the reverse of features, such as "not laughing", using only data of ordinary and smiling facial expressions. | We propose a generative model that not only produces data with desired features from the pre-defined latent space but also fully understands the features of the data to create characteristics that are not in the dataset. |
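A minimal PyTorch sketch of the modified VAE objective described above: a reconstruction term plus a KL term that pulls each encoding toward the pre-defined latent centre of its feature class. The tensor names, the MSE reconstruction and the class-centre encoding are assumptions; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def lsc_vae_loss(x, x_recon, mu, logvar, mu_c):
    """x_recon, mu, logvar come from the VAE; mu_c is the pre-defined
    latent mean assigned to this sample's feature class."""
    # Reconstruction: keep the decoded output close to the input.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Analytic KL( N(mu, diag(exp(logvar))) || N(mu_c, I) ): pulls each
    # sample toward the explicitly defined centre of its feature class.
    kl = 0.5 * torch.sum(logvar.exp() + (mu - mu_c).pow(2) - 1.0 - logvar)
    return recon + kl
```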
not be proven. Since E2 can challenge the masculinisation response to temperature changes, the statistical analysis of intersex induction and severity was rerun with female histopathology considered as an experimental response, with an intersex index of seven. However, the results were no different from the original analysis. Phenotypic males produced by temperature-induced sex reversal have been found to have histologically normal male testes and were capable of reproducing successfully. Control fish in this study were also found to have histologically normal testes. This suggests that they still provide a sufficient platform for the induction of intersex by exogenous chemicals. Indeed, future experiments could even use temperature in a more controlled fashion to increase the number (n) of phenotypically male or female fish for assessment of sex reversal or intersex induction. Nonetheless, the male-dominated sex ratios highlight the importance of maintaining experimental conditions during chemical exposure studies and recognising the possibility of non-chemical effects on this endpoint. They also support the role of medaka in the fish sexual development test, since it has a well-defined genotypic sex-determining gene, which could be used to assess non-chemical sex reversal in controls. Whilst we found no evidence that the anti-androgenic pharmaceuticals could enhance or diminish a response to oestrogen exposure, there remains a possibility that oestrogenic and anti-androgenic contaminants could act on fish in combination in the environment, as indicated by Jobling et al. Indeed, compounds with similar mechanisms of action, such as AR antagonists and steroidogenesis inhibitors, have been shown to act additively or even synergistically in vivo on common endpoints in rodent models. In fish, flutamide has also been shown to increase the expression of oestrogen receptors β and γ in fathead minnow and of oestrogen receptor α in Murray rainbowfish, suggesting that there may be some common modes of action between AR antagonists and oestrogens. Crucially, preliminary data from one study reported an increase in the incidence of ovarian cavities in juvenile roach following exposure to a combination of anti-androgens identified in STW effluents and steroid oestrogens. It may be that the total anti-androgenic activity present in the environment, representing a larger number of compounds, could cause sexual disruption in fish in combination with oestrogens. However, attempts so far to characterise anti-androgenic activity in environmental samples have found that the identified anti-androgenic compounds cannot explain the total activity detected. It may be that a large number of compounds are contributing, of which the anti-androgenic pharmaceuticals are one factor. Indeed, preliminary results from the Tox21 programme found that almost 10% of the 1,462 compounds tested were androgen receptor antagonists in vitro, which supports the possibility of a large number of contributors to environmental activity. Consequently, further study is warranted to identify significant environmental anti-androgens and to determine their environmental impacts, particularly with respect to the possibility of a multi-causal aetiology of the sexual disruption observed in wild fish. Hydrological modelling predicted that bicalutamide and cyproterone acetate are likely to be widespread contaminants in rivers in England and Wales. In the majority of cases, their concentrations are likely to be below 10 ng/L, but at many “hot spots” concentrations
are likely to be higher, in some cases exceeding 100 ng/L for bicalutamide. However, exposures of fish to these high environmental concentrations suggested that they are not likely to be a threat to fish reproductive health. Indeed, it is likely that concentrations one order of magnitude higher than those predicted to occur in the environment are required to induce significant responses associated with feminisation of wild fish. However, given the evidence for additive effects of anti-androgenic chemicals in whole organisms, these environmental contaminants should be considered as part of the wider issue of anti-androgenic activity in the environment. Critically, this study demonstrates that a mixture of steroid oestrogens, at concentrations present in the aquatic environment, can induce intersex at a rate comparable with that observed in UK and European rivers. Additional exposures to anti-androgenic pharmaceuticals known to co-occur with these oestrogens did not exacerbate the incidence or severity of intersex. Taken together, these data support the role of steroid oestrogens as major contributors to intersex in wild fish. | Sexual disruption in wild fish has been linked to the contamination of river systems with steroid oestrogens, including the pharmaceutical 17α-ethinylestradiol, originating from domestic wastewaters. As analytical chemistry has advanced, more compounds derived from the human use of pharmaceuticals have been identified in the environment, and questions have arisen as to whether these additional pharmaceuticals may also contribute to sexual disruption in fish. Indeed, pharmaceutical anti-androgens have been shown to induce such effects under laboratory conditions. Consequently, predictive modelling was employed to determine the concentrations of two anti-androgenic human pharmaceuticals, bicalutamide and cyproterone acetate, in UK sewage effluents and river catchments, and their combined impacts on sexual disruption were then assessed in two fish models. Crucially, fish were also exposed to the anti-androgens in combination with steroid oestrogens to determine whether they had any additional impact on oestrogen-induced feminisation. Modelling predicted that the anti-androgenic pharmaceuticals are likely to be widespread in UK river catchments. However, environmentally relevant mixtures of oestrone, 17β-oestradiol and 17α-ethinylestradiol did induce vitellogenin and intersex, supporting their role in sexual disruption in wild fish populations. Unexpectedly, a male-dominated sex ratio (100% in controls) was induced in medaka; the potential cause and implications are briefly discussed, highlighting the potential of non-chemical modes of action on this endpoint. |
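Exposure predictions of this kind typically follow a simple mass balance: the per-capita drug load, corrected for excretion and treatment removal, diluted into the receiving water. The sketch below shows that generic predicted-environmental-concentration (PEC) arithmetic; every parameter value is illustrative and not taken from the study's hydrological model.

```python
def pec_ng_per_l(consumption_mg_per_cap_day, f_excreted, f_stw_removed,
                 wastewater_l_per_cap_day=200.0, dilution=10.0):
    """Generic surface-water PEC: the drug load surviving excretion and
    sewage treatment, divided by per-capita wastewater flow and dilution."""
    load_ng = consumption_mg_per_cap_day * 1e6 * f_excreted * (1 - f_stw_removed)
    return load_ng / (wastewater_l_per_cap_day * dilution)

# Illustrative numbers only: 0.05 mg/cap/day used, 50% excreted, 20% removed.
print(f"PEC ~ {pec_ng_per_l(0.05, 0.5, 0.2):.1f} ng/L")
```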
dramatic changes are seen in La3Ni2-xCuxNbO9. La3Ni2NbO9 is a spin glass with Tg ∼ 29 K. The introduction of 12.5% Cu2+ causes the onset of long-range G-type magnetic ordering and increases the magnetic transition temperature to 96 K. In order to eliminate the spin-glass behaviour it is necessary to eliminate or reduce the frustration that causes it. We previously discussed the source of the frustration in La3Ni2NbO9 and concluded that it most likely arises from the presence of 180° Ni–O–Nb–O–Ni interactions that compete with Ni–O–Ni interactions. When x = 0.25, the presence of Cu2+ introduces 180° Cu–O–Ni, Cu–O–Nb–O–Ni and Cu–O–Nb–O–Cu interactions; further interactions are introduced when Cu2+ cations also partially occupy the 2d sites, as in La3Ni1.5Cu0.5NbO9. As a consequence of the different electron configurations of Ni2+ and Cu2+, the introduction of the latter cation will lead to changes in the magnitude and, sometimes, the sign of the relevant exchange constants. Any local variations in the bond lengths around the Jahn-Teller-active Cu2+ cations will also have consequences for the magnetic interactions. It is thus easy to understand why the magnetic behaviour might change but, with the data available, very difficult to predict or rationalise the nature of the changes in these heavily disordered compounds. Vasala et al. have previously described the consequences of a Jahn-Teller distortion for the magnetic properties of double perovskites containing Cu2+, and Mustonen et al. and Katukuri et al. have discussed the changes that occur in those materials when a d0 diamagnetic cation is replaced by a d10 cation. The compounds described in this paper are subject to both these factors, with cation disorder present as an additional complication. The introduction of Cu2+ cations into La3Ni2B'O9 has a marked effect on the magnetic properties. In the case where B' = Sb, the substituted composition is similar to the parent material in that, at low temperatures, it contains regions that show long-range magnetic order and regions that appear to show relaxor behaviour. The evidence for this comes from magnetometry and neutron diffraction, and includes the observation that the ordered moment measured by neutron diffraction is very low compared to the magnetisation measured by magnetometry. The most striking consequence of the substitution in this case is thus that TC increases by ∼30 K. La3Ni2TaO9 is a relaxor ferromagnet, the low-temperature neutron diffraction pattern of which shows no evidence of long-range magnetic order. Following the introduction of 12.5% Cu2+, La3Ni1.75Cu0.25TaO9 shows long-range magnetic order at a temperature ∼30 K higher than that at which the copper-free composition begins to behave as a relaxor ferromagnet. However, as in the case B' = Sb, the experimental evidence suggests that ordered and relaxor regions coexist below the transition temperature. The most dramatic changes are seen when B' = Nb, in which case the introduction of Cu2+ transforms the spin-glass parent composition into a mixed magnetically-ordered/relaxor phase. Although the origins of all these changes remain to be elucidated, our data demonstrate again the sensitivity of the magnetic properties of these perovskites to chemical composition.
| La3Ni2-xCuxB'O9 (x = 0.25, B' = Sb, Ta, Nb; x = 0.5, B' = Nb) have been synthesized and characterised by transmission electron microscopy, neutron diffraction and magnetometry. Each adopts a perovskite-like structure (space group P21/n) with two crystallographically-distinct six-coordinate sites, one occupied by a disordered arrangement of Ni2+ and Cu2+ and the other by a disordered ∼1:2 distribution of Ni2+ and B′5+, although some Cu2+ is found on the latter site when x = 0.5. Each composition undergoes a magnetic transition in the range 90 ≤ T/K ≤ 130 and shows a spontaneous magnetisation at 5 K; the transition temperature always exceeds that of the x = 0 composition by ≥ 30 K. A long-range-ordered G-type ferrimagnetic structure is present in each composition, but small relaxor domains are also present. This contrasts with the pure relaxor and spin-glass behaviour of the x = 0, B' = Ta and Nb compositions, respectively. |
inequalities in smoking are partly due to more smoking in higher socioeconomic groups than seen in other European countries, as a result of a delayed smoking epidemic. Although smaller inequalities in mortality in these countries thus seem to be a historical coincidence rather than the outcome of deliberate policies, the Spanish and Italian examples suggest that large inequalities in total mortality are not inevitable. This exploratory study suggests that both behavioral and structural factors contribute to between-country variations in the magnitude of socioeconomic inequalities in mortality. More detailed studies of these variations, preferably combining individual- and aggregate-level data, are likely to provide important clues for how to reduce inequalities in mortality. Supported by a grant from the European Commission Research and Innovation Directorate General, as part of the "Developing methodologies to reduce inequalities in the determinants of health" project. The sponsor had no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. | The magnitude of socioeconomic inequalities in mortality differs importantly between countries, but these variations have not been satisfactorily explained. We explored the role of behavioral and structural determinants of these variations, by using a dataset covering 17 European countries in the period 1970–2010 and by conducting multilevel multivariate regression analyses. Our results suggest that between-country variations in inequalities in current mortality can partly be understood from variations in inequalities in smoking, excessive alcohol consumption, and poverty. Also, countries with higher national income, higher quality of government, higher social transfers, higher health care expenditure and more self-expression values have smaller inequalities in mortality. Finally, trends in behavioral risk factors, particularly smoking and excessive alcohol consumption, appear to partly explain variations in trends in inequalities in mortality. This study shows that analyses of variations in health inequalities between countries can help to identify entry-points for policy. |
The emergence of language in multi-agent settings is a promising research direction towards grounding natural language in simulated agents. If an AI were able to understand the meaning of language through using it, it could also transfer that understanding to other situations flexibly. This is seen as an important step towards achieving general AI. The scope of emergent communication is so far, however, still limited. It is necessary to enhance the learning possibilities for skills associated with communication in order to increase the complexity that can emerge. We took an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches to referential games with an auxiliary task for the speaker, predicting the listener's mind change, which improves the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup. | An auxiliary prediction task can speed up learning in language emergence setups. |
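As an illustration of the auxiliary-task idea, here is a minimal PyTorch sketch of a one-symbol referential-game speaker whose extra head predicts the listener's behaviour. The network sizes, the REINFORCE objective, the use of the listener's actual choice as the auxiliary target, and the weighting lam are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Speaker(nn.Module):
    """Speaker with an auxiliary head predicting the listener's response."""
    def __init__(self, n_objects, vocab, hidden=64):
        super().__init__()
        self.encode = nn.Linear(n_objects, hidden)
        self.msg_head = nn.Linear(hidden, vocab)      # emits a one-symbol message
        self.aux_head = nn.Linear(hidden, n_objects)  # predicts listener's pick

    def forward(self, target_onehot):
        h = torch.relu(self.encode(target_onehot))
        return self.msg_head(h), self.aux_head(h)

def speaker_loss(msg_logits, msg_sample, reward, aux_logits, listener_choice,
                 lam=0.5):
    # REINFORCE term for the referential-game reward ...
    logp = torch.distributions.Categorical(logits=msg_logits).log_prob(msg_sample)
    rl = -(reward * logp).mean()
    # ... plus the 'empathy' auxiliary task: cross-entropy against the
    # listener's observed choice, standing in for its predicted mind change.
    aux = nn.functional.cross_entropy(aux_logits, listener_choice)
    return rl + lam * aux
```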
256 × 256, and the FoV was 256 mm. The set consisted of 160 contiguous sagittal images covering the whole brain. The ALE results showed that chronic pain is associated with a common core set of gray matter decreases in the bilateral medial frontal gyri, bilateral superior frontal gyri, right pre- and post-central gyri, bilateral insula, right cingulate cortex, basal ganglia, thalamus and periaqueductal gray. Increased gray matter was found in the bilateral post-central gyrus, left inferior parietal lobule, right pre-central gyrus, right post-central gyrus, the dorsal prefrontal areas, the caudate, thalamus, cerebellum, and pons. The jackknife analysis showed all the areas found in the ALE maps to have very high reliability. Areas with lower reliability were found in the thalamic and ventral prefrontal cortices. Overall, the reliability of gray matter increases was 20% lower than that of decreases, probably due to the paucity of papers reporting gray matter increases. With the advent of models allowing the investigation of functional integration rather than mere functional segregation, researchers have begun to use a 'network approach' and study the co-variation of brain activity in response to noxious stimuli in order to unveil the functional significance of brain responses to those stimuli. This approach is able to provide new insights into the study of the complex mechanisms underpinning the emergence of pain. To investigate which of the large-scale brain networks show GM alterations, we compared the number of increased/reduced voxels in each of the brain networks described by Biswal et al. As shown in the upper left panel of Fig. 4, each type of chronic pain shows a different involvement of each large-scale network. To obtain this graph the data were centered; as a result, lower values represent a small deviation from the mean. The maximal variability was expressed by the thalamic–basal ganglia network, followed by the DMN and the premotor and somatosensory networks. Trigeminal pain showed a reduced involvement of the Th–Ba and DMN networks, whereas complex regional pain syndrome, blepharospasm, chronic fatigue syndrome and back pain were characterized by reduced involvement of the somatosensory and premotor networks. Interestingly, the salience and attentional networks were damaged in a very similar way by the different chronic pain pathologies. The lower panel of Fig. 4 shows the mean number of voxels involved for each network. The DMN, Th–Ba, attentional and salience networks are the areas altered most. The similarity representation analysis showed that, with the exception of the thalamus–basal ganglia network, the networks could be clustered into three groups showing a similar involvement in chronic pain pathologies: a first group composed of the DMN and the cerebellar and motor networks; a second group of the salience and attentional networks; and a third group of the OFC, premotor, sensorimotor, auditory and visual networks. The optimal number of clusters was calculated by minimizing the Akaike information criterion (AIC). This method showed an optimal cluster number of three for gray matter decreases and two for gray matter increases. The left panel of Fig. 6 shows the three clusters into which the gray matter decreases were decomposed. The first cluster was found to involve the fronto-parietal and medial wall areas, the second included the operculo-insular, cingulate, posterior thalamic and medial prefrontal areas, and the third comprised the temporal and pontine areas. The right panel of Fig.
6 shows the two clusters into which the gray matter increases were decomposed. The first included the somatomotor, somatosensory, premotor and parietal areas, and the second the operculo-insular, basal ganglia, pontine and dorso-ventral prefrontal areas. Interestingly, both decreases and increases were observed in the anterior insula, but in different portions of the region. We performed a resting-state connectivity experiment to explore whether the two anterior insular clusters derived from the decomposition of gray matter increases and decreases belong to different networks. As shown in Fig. 7, the two insular clusters presented very different functional connectivity patterns. The increase cluster had strong connectivity to the saliency detection network, with anticorrelations with a series of medial, dorsal prefrontal and parietal areas of the default mode network. The decrease cluster showed connectivity with areas anticorrelated with the increase cluster; these areas mainly constituted the anterior part of the DMN. The decrease cluster showed anticorrelation with the salience detection network. These results suggest that the functional connectivity profiles of the two ROIs are coherent with the gray matter modifications. The opposite alterations were mirrored in the opposite functional connectivity of the two ROIs: while the "increase" ROI showed a correlation with the salience network and an anticorrelation with the DMN, the "decrease" ROI presented an anticorrelation with the salience network and a correlation with the DMN. This study was designed to: i) verify the presence of a core set of brain areas commonly modified by chronic pain; ii) investigate the involvement of these areas in a large-scale network perspective; iii) study the relationships between altered networks; and iv) find out whether chronic pain targets clusters of areas. Our results show that: i) gray matter alterations in single areas are better conceived within a framework of large-scale brain networks, and ii) large-scale brain networks present both pathology-unspecific and pathology-specific involvement. It has been demonstrated that the presence of long-lasting ongoing pain can modify the structure of the brain, inducing local morphological changes in the brain parenchyma. Indeed, in the last decade, the application of neuroimaging techniques, such as voxel-based morphometry, has provided considerable insight into structural brain reorganization in subjects suffering from chronic pain syndromes. The results of previous VBM studies have converged in concluding that chronic pain induces gray matter structural changes, often related to the | Representational similarity techniques, network decomposition and model-based clustering were employed: i) to verify the presence of a core set of brain areas commonly modified by chronic pain; ii) to investigate the involvement of these areas in a large-scale network perspective; iii) to study the relationships between altered networks; and iv) to find out whether chronic pain targets clusters of areas. |
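The "model-based clustering with AIC minimisation" step described above can be illustrated with Gaussian mixtures. The sketch below, using scikit-learn (an assumed tool, not necessarily the authors'), fits mixtures for k = 1…k_max and keeps the k with the lowest AIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_k_by_aic(X, k_max=6, seed=0):
    """Fit Gaussian mixtures for k = 1..k_max; return the k minimising AIC.
    X is an (n_regions, n_features) matrix of alteration profiles."""
    aics = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        aics.append(gm.aic(X))
    return int(np.argmin(aics)) + 1, aics

# Toy data: three well-separated groups should yield an optimal k of 3.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0.0, 3.0, 6.0)])
print(best_k_by_aic(X)[0])
```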
Calcium is a ubiquitous signaling molecule and acts in several physiological processes, from mammalian cells to parasites. To monitor calcium fluctuations in Plasmodium falciparum, invasive loading protocols are used, which do not allow discrimination between signals from the host cell and from the intracellular parasites. The generation of transgenic Plasmodium falciparum expressing GECIs represents an innovation, allowing calcium fluctuations in the cytosol of P. falciparum to be monitored without invasive loading protocols or interference from the host cell. Construction of transgenic Plasmodium falciparum requires the following steps: cloning of the GCaMP3 gene into the P. falciparum expression vector pDC, transfection of P. falciparum with the pDC/GCaMP3 construct, selection of the transfected population, and analysis of PfGCaMP3 calcium responses. Step 1: Cloning of the GCaMP3 gene into the P. falciparum expression vector pDC. The ORF encoding the GCaMP3 gene was amplified from the original mammalian expression plasmid using the specific primers 5′ GGATCCATGGGTTCTCATCATCATCATC 3′ and 5′ GGATCCTTACTTCGCTGTCATCATTTGTAC 3′. A BamH I cleavage site was added at the 5′ end of both primers. Approximately 100 ng of plasmid was used in a PCR reaction under the following conditions: 94 °C/5′, 34 cycles of 95 °C/55″, 50 °C/60″ and 68 °C/180″, and a final step of 74 °C/5′. The amplicons, approximately 1.3 kb, were purified from agarose gel using the PureLink™ Quick Gel Extraction Kit according to the manufacturer's protocols. The purified amplicons were then cloned into the bacterial propagation vector pJET and used to transform One Shot® TOP10 Chemically Competent E. coli. Transformant selection was carried out on LB agar plates at 37 °C for 20 h. Single colonies were grown in 5 mL of LB/ampicillin at 37 °C for 20 h at 180 rpm. Plasmid DNA was extracted with the Wizard® Plus SV Minipreps DNA Purification System. To test for the presence of the GCaMP3 gene, plasmid DNA was subjected to restriction analysis with the BamH I enzyme. The reaction was performed with approximately 10 U of enzyme at 37 °C for 2 h. All tested colonies carried the GCaMP3 gene. The cloning was also confirmed by a sequencing reaction: the tested colonies contained a gene identical to the synthetic construct GCaMP3 gene found via the Basic Local Alignment Search Tool (BLAST) website. The GCaMP3 gene was then transferred to the P. falciparum transfection plasmid pDC. For this purpose, the pJET/GCaMP3 construct and the pDC vector were subjected to a new restriction reaction with the BamH I enzyme, and the fragments of approximately 1.3 and 6 kb, corresponding to GCaMP3 and pDC respectively, were purified from the agarose gel as described above. The ligation reaction was carried out at 16 °C overnight and used to transform chemically competent E. coli using the heat-shock method. Of the 12 colonies obtained, colonies 8 and 11 were positive for GCaMP3. The correct insertion of the gene was confirmed by restriction analysis with Xho I, since digestion with this enzyme results in fragments of different sizes depending on the orientation of the insert. Colonies 8 and 11 showed the GCaMP3 gene inserted in the correct orientation. We selected colony 11 for sequencing, and the gene was identical to the synthetic construct GCaMP3 gene found via the BLAST website. Step 2: Transfection of P. falciparum with the pDC/GCaMP3 construct. A synchronized ring culture of P.
falciparum with a parasitemia of approximately 10% was subjected to electroporation with 50 μg of the pDC/GCaMP3 construct. Briefly, 200 μL of cells were resuspended in 500 μL of Cytomix, the plasmid was added, and the mixture was transferred into a 0.4-cm cuvette. The electroporation conditions were as follows: 2.5 kV, 25 μF and 200 Ω. The cuvette was placed on ice for 5 min. The cells were then transferred to a culture flask with 10 mL of culture medium and placed in an incubator. After 48 h, 5 nM WR99210 was added to the culture medium to select for transformants. The transfected parasites started to appear after approximately 2 weeks, and the fluorescent parasites could be observed under a fluorescence microscope. Step 3: Selection and enrichment of the transfected population. The transfected population was subjected to cell sorting by flow cytometry to select those parasites with increased fluorescence intensity at 488 nm. The experiment was carried out in a FACSAria II™ cell sorter. The selected parasites were maintained for 1 week and then cloned by limiting dilution in 96-well plates. The parasites could be detected approximately 2 weeks after the limiting dilution by measuring the activity of the LDH enzyme. LDH activity was measured with Malstat reagent (pH 9, in a final volume of 200 mL) and NBT/PES solution. Fifteen μL of culture was taken from each well and added to a plate containing 100 μL of Malstat reagent and 25 μL of NBT/PES solution, and the color development of the LDH plate was monitored colorimetrically at 620 nm. From the total clones obtained, we selected one with an increased percentage of fluorescent parasites for further studies. Step 4: Analysis of the PfGCaMP3 calcium response. To test the calcium response in the presence of the calcium ionophore ionomycin, we selected one clone from the 96-well plate, since this clone had a higher percentage of fluorescent parasites. Infected erythrocytes at the trophozoite stage were centrifuged, and the pellet was washed twice and resuspended in 1 mL of buffer A. Two μM ionomycin was added and the cells were incubated for 1 min. The calcium response was determined from dot plots of 10^5 cells acquired on a FACSCalibur flow cytometer using CELLQUEST software. GCaMP3 was excited with a 488 nm argon laser and the fluorescence emission was collected at 520–530 nm. | Calcium (Ca2+) signaling pathways are vital for all eukaryotic cells. It is well established that changes in Ca2+ concentration can modulate several physiological processes such as muscle contraction, neurotransmitter secretion and metabolic regulation (Giacomello et al. (2007) [1]; Rizzuto and Pozzan (2003) [2]). In the complex life cycle of Plasmodium falciparum, the causative agent of human malaria, Ca2+ is involved in the processes of protein secretion, motility, cell invasion, cell progression and parasite egress from red blood cells (RBCs) (Koyama et al. (2009) [3]). The generation of P. falciparum expressing genetically encoded calcium indicators (GECIs) represents an innovation in the study of calcium signaling. This development will provide new insight into calcium homeostasis and signaling in P.
falciparum. In addition, these novel transgenic parasites, PfGCaMP3, are a useful tool for screening and identifying new classes of compounds with anti-malarial activity. This opens the possibility of interfering with signaling pathways controlling parasite growth and development. Our new method differs from previous loading protocols (Garcia et al. (1996) [4]; Beraldo et al. (2007) [5]) since: It provides a novel method for imaging calcium fluctuations in the cytosol of P. falciparum, without signal interference from the host cell or invasive loading protocols. This technique could also be expanded for imaging calcium in different subcellular compartments. It will be helpful in the development of novel antimalarials capable of disrupting calcium homeostasis during the intraerythrocytic cycle of P. falciparum. Method: Imaging calcium dynamics in Plasmodium falciparum. |
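Since the primer sequences are given in Step 1 above, a few lines of Python can sanity-check the design, for example that both primers carry the 5′ BamH I recognition site (GGATCC) used for the cloning. This is only an illustrative check, not part of the published protocol.

```python
BAMHI_SITE = "GGATCC"

# Primer sequences exactly as given in Step 1 of the protocol.
primers = {
    "forward": "GGATCCATGGGTTCTCATCATCATCATC",
    "reverse": "GGATCCTTACTTCGCTGTCATCATTTGTAC",
}

for name, seq in primers.items():
    has_site = seq.startswith(BAMHI_SITE)  # BamH I site added at the 5' end
    gc = 100 * (seq.count("G") + seq.count("C")) / len(seq)
    print(f"{name}: 5' BamHI site: {has_site}, {len(seq)} nt, GC {gc:.0f}%")
```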
Apoptosis is a highly organized pathway and an essential process for maintaining the physiological balance between death and cell growth. Disruption of this process is involved in many pathological conditions such as Alzheimer's disease, ischemia and autoimmune disorders; at the same time, the process is needed for embryonic development and immune system function. Apaf-1 is an adaptor molecule in the formation of the heptameric apoptosome complex. When cytochrome c is released from mitochondria, it binds to Apaf-1 and nucleotide exchange occurs; this leads to Apaf-1 activation and oligomerization. Caspase-9, a key caspase in the mitochondrial cell death pathway, is then activated by the oligomeric Apaf-1. Apaf-1 is a multi-domain protein, comprising a caspase recruitment domain (CARD) that interacts with procaspase-9, a central domain involved in nucleotide binding and oligomerization, and C-terminal multiple WD-40 repeats that are suggested to play a regulatory role in Apaf-1 function. The released cytochrome c binds to the WD-40 repeats of Apaf-1, which is in a locked autoinhibited form, and exerts structural changes in Apaf-1, leading to exposure of the nucleotide binding sites to dATP/ATP. Through hydrolysis of the bound ATP and ADP exchange, Apaf-1 oligomerization occurs and the wheel-like signaling complex forms. So far, two models have been proposed to explain the Apaf-1:Apaf-1 interactions. CARD:CARD interactions provide a platform for caspase-9 binding, but other interactions are required, and this binding triggers caspase-9 activation. The presence of caspase-9 on the apoptosome complex causes more efficient cleavage of procaspase-3, which causes apoptosis. According to previous reports, removal of the C-terminal WD-40 domain leads to the formation of a mini-apoptosome even in the absence of cytochrome c. This apoptosome can activate caspase-9 but is unable to activate caspase-3. These results indicate that the WD-40 subdomains maintain their regulatory role in the apoptosome even after Apaf-1 oligomerization. In this study, we used a new split luciferase complementation assay to investigate Apaf-1:Apaf-1 interactions in cell-based and cell-free systems. The split luciferase complementation assay has been used to study apoptosome formation during the differentiation of mouse embryonic stem cells into cardiomyocytes. A similar strategy has been used to monitor α-synuclein aggregation into amyloid fibrils, a crucial factor leading to the pathogenesis of Parkinson's disease. Herein, Apaf-1 is fused through its N-terminus to either the N-terminal fragment (N-luc) or the C-terminal fragment (C-luc) of P. pyralis luciferase. It is suggested that Apaf-1:Apaf-1 interactions bring the Apaf-1 CARDs into close proximity, which in turn allows the N-luc and C-luc fragments to come into close proximity and reconstitute luciferase activity. It should be pointed out that apoptosome formation was studied in two different systems. In the cell-based system, after induction of apoptosis, apoptosome formation is assessed in extracts of the cells expressing Apaf-1 fused to the complementary fragments of luciferase, while in the cell-free system, apoptosome formation is induced by addition of dATP and cytochrome c to extracts of the cells that produce the aforementioned Apaf-1.
Most importantly, we found that full-length Apaf-1 generated higher levels of luciferase activity in the cell-based system than in extracts of the cell-free system. Furthermore, we have shown that truncated Apaf-1 lacking both WD-40 subdomains is able to interact with endogenous Apaf-1 and make apoptosome complexes with a variety of molecular weights. Overexpression of ΔApaf-1 brought about caspase-9 cleavage without caspase-3 activation. For cloning of Apaf-1 we used the pcDNA3.1 vector. The Apaf-1 sequence was PCR-amplified using high-fidelity PrimeSTAR GXL DNA Polymerase from a FastBac vector containing the His-tagged Apaf-1. PCR-amplified N-terminal and C-terminal luciferase fragments were derived from the pGL3 plasmid encoding Photinus pyralis luciferase and used for constructing the split reporter. A flexible Gly-Ser peptide linker was used to fuse either the N-luc or the C-luc luciferase fragment to Apaf-1. Fusion of the N-luc and C-luc fragments of luciferase to Apaf-1 was performed with an In-Fusion system. To construct the truncated mutant, a stop codon was introduced. Finally, the nucleotide sequences of the plasmids were confirmed by DNA sequencing. Human embryonic kidney (HEK) cells were transfected with the constructs using polyethyleneimine (PEI). A day before transfection, 8 × 10^6 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin solution in 150-mm-diameter dishes, and for each dish 40 μg DNA was used for transfection. Cells were then incubated for 24 h at 37 °C, 5% CO2. Briefly, to prepare S-100 extracts, the cells were harvested by trypsinization and resuspended in extraction buffer. Afterwards, the cells were lysed through three cycles of freezing and thawing in liquid nitrogen. Finally, the lysates were centrifuged at 100000 × g. A Bradford assay was carried out to determine the protein concentration of the lysates. Before transfection, sterile high-quality DNA plasmids were prepared with a midi-prep kit. For transfection, 22 × 10^4 cells per well were cultured in 6-well plates and incubated at 37 °C, 5% CO2 for 24 h to reach 60–70% confluency. Starvation was carried out 4 h before transfection by incubating the cells with serum-free DMEM. Mixtures of PEI with 2 μg of the N-luc Apaf-1 and 2 μg of the C-luc Apaf-1 constructs were prepared and incubated at room temperature for 30 min. The N/P ratio indicates the ratio of PEI to plasmid DNA. Finally, 50 μl of the transfection complexes were added to each well; the cells were incubated for 4 h, the medium was then replaced with 2 ml of DMEM supplemented with 10% FBS in each well, and the cells were incubated for 24 h. Afterwards, the culture media were replaced with fresh media and apoptosis was induced in the co-transfected cells with different concentrations of doxorubicin. Finally, the cell lysates were prepared 12, 24, 28 and 36 h after cell death induction and split-luciferase assays were carried out. The media were | This assay uses Apaf-1 tagged with either the N-terminal fragment or the C-terminal fragment of P. pyralis luciferase. In the cell-based system, apoptosome formation is induced inside the cells, which express Apaf-1 tagged with complementary fragments of luciferase, while in the cell-free system, apoptosome formation is induced in extracts of the cells. However, luciferase activity due to apoptosome formation was much higher in the cell-based system compared to the cell-free system. |
It should be noted that, upon addition of cytochrome c and dATP, the monomers of N-luc and C-luc full-length Apaf-1 shift to fractions of higher molecular weight, which indicates apoptosome formation, as reported earlier. Immunoblotting of truncated Apaf-1 indicates that its oligomerization is independent of cytochrome c, confirming previous results regarding the spontaneous oligomerization of truncated Apaf-1 lacking both WD-40 subdomains. Interestingly, even in the absence of dATP/cytochrome c, complexes of various sizes and forms were observed, which suggests that preformed complexes already existed due to the endogenous dATP and cytochrome c in the S-100 extract. It seems that ΔApaf-1 can form apoptosome complexes with a range of complexity through interaction with endogenous Apaf-1. Moreover, endogenous Apaf-1 and N-luc or C-luc truncated Apaf-1 were not observed in fraction 14 or lower, indicating the lack of a ∼1.4-MDa apoptosome complex. On the other hand, binding of N-luc Apaf-1/C-luc Apaf-1, C-luc Apaf-1/N-luc ΔApaf-1 and N-luc Apaf-1/C-luc ΔApaf-1 mixtures to the affinity column was examined. It should be noted that binding of the N-luc Apaf-1/C-luc ΔApaf-1 mixture to the resin indicates a role of non-specific interactions in binding. Washing the bound proteins with 20 mM imidazole did not elute protein from the resin, while washing the resin with 165 mM imidazole eluted only the N-luc Apaf-1/C-luc Apaf-1 mixture. It may be concluded that, although the N-luc Apaf-1/C-luc Apaf-1 mixture has only one His-tag, its binding takes place through specific interaction. Elution of C-luc Apaf-1 with 165 mM imidazole brought about elution of N-luc Apaf-1, which did not happen for the C-luc Apaf-1/N-luc ΔApaf-1 mixture despite the presence of one His-tag in C-luc Apaf-1. Therefore, it may be suggested that only in the N-luc Apaf-1/C-luc Apaf-1 mixture does the right juxtaposition of the CARD domains of the Apaf-1 molecules form a real apoptosome structure that can bind to the resin, whereas in the other mixtures, despite the intact CARD-domain geometry of the Apaf-1 oligomers, the His-tag is not in the right orientation for binding to the Ni-NTA resin, and they were bound through non-specific interactions. However, we could not rule out the possibility that the CARDs of truncated Apaf-1 might have been disordered when exposed to full-length Apaf-1. To test this idea, we showed that N-luc Apaf-1 is released from apoptosomes assembled with C-luc Apaf-1 when they are eluted with 165 mM imidazole. The lack of eluted protein even at high imidazole concentrations for the truncated mutant indicated the predominant role of non-specific interactions in their binding. The effect of overexpression of the truncated Apaf-1 mutant on the activation of caspase-9 and caspase-3 indicates a dominant negative effect of mutant Apaf-1 on endogenous Apaf-1. Autoprocessing of caspase-9 without caspase-3 activation upon overexpression of ΔApaf-1 is similar to the results obtained with a mini-apoptosome structure formed from Apaf-1 without WD-40 repeats. In conclusion, according to the results in this manuscript, the development of a luciferase complementation assay for apoptosome formation may shed light on additional required constituents of apoptosome formation and the effective ionic bonds in the latent form of Apaf-1. The authors report no conflict of interest.
| Apaf-1 is a cytosolic multi-domain protein in the apoptosis regulatory network. When cytochrome c is released from mitochondria, it binds to the WD-40 repeats of the Apaf-1 molecule and induces oligomerization of Apaf-1. In the cell-free system, cytochrome c-dependent luciferase activity was observed with full-length Apaf-1. The truncated Apaf-1 lacking the WD-40 repeats (ΔApaf-1) interacted with endogenous Apaf-1 in a different fashion compared to the native form, as confirmed by the different retention times of the eluate in gel filtration and binding to the affinity column. The interaction between endogenous Apaf-1 and ΔApaf-1 is stronger than its interaction with native exogenous Apaf-1, as indicated by the dominant negative effect of ΔApaf-1 on caspase-3 processing. |
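The N/P ratio mentioned in the transfection step is conventionally computed from the PEI nitrogen content (~43.1 g/mol per protonatable repeat unit) and the DNA phosphate content (~330 g/mol per nucleotide). A small sketch of that standard arithmetic follows, using the 4 μg total DNA per well from the protocol; the target N/P ratio of 10 is illustrative, not the study's value.

```python
PEI_N_MW = 43.1   # g/mol per nitrogen-bearing PEI repeat unit
DNA_P_MW = 330.0  # g/mol per nucleotide (one phosphate each)

def pei_ug_for_np(dna_ug, np_ratio):
    """Mass of PEI giving the requested N/P ratio for a given DNA mass."""
    moles_phosphate = dna_ug / DNA_P_MW
    return np_ratio * moles_phosphate * PEI_N_MW

# 2 ug N-luc Apaf-1 + 2 ug C-luc Apaf-1 per well, illustrative N/P of 10.
print(f"PEI needed: {pei_ug_for_np(4.0, 10):.2f} ug per well")
```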
licensed purchase websites would have been 2% lower in the absence of unlicensed downloading websites. Country-specific estimates show that for Spain and Italy the elasticity is zero, while it is close to 0.04 for France and Germany and close to 0.03 for the UK. We also find evidence of heterogeneity in these effects according to individuals' characteristics. In particular, our results suggest that consumers with a higher interest in music present higher degrees of complementarity between these two consumption channels. These results must be interpreted in the context of a still-evolving music industry. It is in particular important to note that music consumption in physical format has until recently accounted for the lion's share of total music revenues. If piracy leads to substantial sales displacement of music in physical format - as documented by existing research - then its effect on overall music industry revenues may well still be negative. Second, our elasticity estimates show somewhat larger figures for the effect of online music streaming on licensed purchases of digital music. Controlling for individual fixed effects leads to an elasticity of around 0.05, suggesting a complementarity between streaming services and purchases of licensed digital music. Again, country differences show that this effect is larger for France and Germany and smaller for Spain and Italy. Our results also suggest that consumers with a higher interest in music consider licensed music streaming to be a complement to licensed digital purchases to a larger extent. | We use clickstream data on a panel of more than 16,500 European consumers to analyze the relationship between different online music consumption channels. In particular, we revisit the question of sales displacement in the digital era and analyze how licensed online music streaming affects digital music purchasing behavior. Our results show no evidence of digital music sales displacement by unlicensed downloading and present, for some countries in our sample, a rather small but positive elasticity of up to 0.04 between these two channels. We also find a positive relationship between the use of licensed streaming websites and licensed websites selling digital music, suggesting a stimulating effect of music streaming on digital music sales. Our results present important cross-country differences in these effects, with elasticities ranging between 0.09 and 0.01. Finally, we find heterogeneous effects according to individuals' profiles. For both unlicensed downloading and licensed streaming alike, our results suggest that consumers with a higher interest in music view these channels as complements to licensed digital purchases to a larger extent. |
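The fixed-effects elasticities above come from panel regressions on clickstream data. Below is a minimal sketch of the within (individual-demeaning) estimator on a log-log specification; the column names and the log1p transform are assumptions about the panel, not the study's exact specification.

```python
import numpy as np
import pandas as pd

def within_elasticity(df):
    """Within estimator: demean log consumption by consumer, then regress
    licensed purchase activity on unlicensed downloading activity."""
    d = df.copy()
    d["y"] = np.log1p(d["licensed_purchase_clicks"])    # illustrative column
    d["x"] = np.log1p(d["unlicensed_download_clicks"])  # illustrative column
    # Individual fixed effects: subtract each consumer's mean (within transform).
    for c in ("y", "x"):
        d[c] -= d.groupby("consumer_id")[c].transform("mean")
    beta, *_ = np.linalg.lstsq(d[["x"]].values, d["y"].values, rcond=None)
    return float(beta[0])  # read as the elasticity between the two channels
```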
RNA interference (RNAi) is a post-transcriptional inhibition of gene expression by a homology-dependent mRNA degradation mechanism that evolved in many eukaryotes as a defence barrier against exogenous genetic material such as transposons and viral genomes. RNAi utilises short, double-stranded RNAs termed small interfering RNAs (siRNAs) to downregulate the expression of genes that contain complementary sequences. This mechanism was first discovered in Caenorhabditis elegans and subsequently in plants. The RNAi pathway is initiated by an enzyme called Dicer, which binds and cleaves long dsRNA molecules into short dsRNA fragments of approximately 20 bp in length. One strand of the siRNA duplex binds to Argonaute, an RNase H-like protein which represses translation of an mRNA on the basis of sequence complementarity. In eukaryotes, endogenous siRNA pathways have many roles, including repressing repetitive and transposable genomic elements and defending the host against infection by RNA viruses. The RNAi pathway in insects includes several branches that function to silence the expression of both endogenous genes of the host and those of parasite and pathogen invaders. The exploitation of this pathway to block the expression of specific gene targets holds considerable promise for the development of novel RNAi-based insect management strategies. In addition, there are a wide number of potential future applications of RNAi to control agricultural insect pests, as well as its use for the prevention of diseases in beneficial insects. The induction of RNAi by exogenous supply of dsRNA has been successful in a number of different organisms. Oral administration of dsRNA in pest insects has been demonstrated to induce RNAi and has significant potential utility for crop protection approaches. The size of the dsRNA product is also an important issue, as short dsRNAs have been reported to be limited in penetrance and expressivity. Initial studies using E. coli BL21 to generate dsRNA resulted in products that were degraded by RNase III to short, partially digested dsRNA. These short dsRNA molecules of about 12–15 bp in length are incapable of triggering an RNA interference response in mammalian cells. An RNase III-deficient E. coli strain, HT115, was engineered to produce intact specific dsRNA, which triggered strong interference when fed to C. elegans. Synthetic siRNA oligonucleotide duplexes or hairpin RNAs have also been widely employed for RNAi studies. However, a number of caveats are associated with these approaches, including the costs of oligonucleotide synthesis and the time-consuming procedures required for RNAi, including transfection and drug selection. The use of in-vitro transcription or bacteria to generate siRNAs can potentially reduce the cost of siRNAs and can provide a means of delivering siRNAs into cells. Large quantities of long dsRNAs can be produced by in-vitro transcription or in E.
coli cells that lack RNase III, using inducible T7 polymerase or φ6 RNA-dependent RNA polymerase overexpression systems. More recently, alternative approaches using a plant viral siRNA-binding protein have been used to generate and purify siRNAs produced in bacteria. However, for laboratory purposes we found it more practical to synthesise dsRNA using an inducible T7 polymerase overexpression system. A variety of methods have been developed and utilised for the selective purification of dsRNA, including strategies utilising a recombinant dsRNA-binding protein, phenol extraction in conjunction with cellulose affinity chromatography, and the use of the differential solubility of nucleic acids in LiCl to extract total dsRNA from virus-infested tissues or cells. A method based on anion exchange chromatography using convective interaction media (CIM) monolithic columns has also been used to separate dsRNA from ssRNA. The resulting purified dsRNA is not readily amenable to conventional or next-generation sequencing techniques, nor directly compatible with downstream mass spectrometry analysis. NGS approaches generate only short reads and therefore do not readily provide important quantitative characterisation of large dsRNA. The analytical platform developed in this study enables the efficient and effective purification of dsRNA from bacterial cells prior to downstream RNAi applications. Furthermore, this approach enables relative quantification and characterisation of the dsRNA product, providing high-throughput analysis of the dsRNA, validation of the duplex nature of the dsRNA, and characterisation of the dsRNA using mass spectrometry in conjunction with RNase mass mapping. Q5® High-Fidelity DNA Polymerase, dNTPs, NTPs and primers designed by MWG Eurofins were used for PCR; in vitro transcription was performed using the HiScribe™ T7 High Yield RNA Synthesis Kit, with a synthetic gene from GeneArt® Gene Synthesis. The E.
coli strain HT115 was obtained from Cold Spring Harbor Laboratory, NY, USA. Ampicillin sodium salt, tetracycline hydrochloride and isopropyl β-d-1-thiogalactopyranoside (IPTG, ≥99%) were also used. The TRIzol® Max™ Bacterial RNA Isolation Kit with TRIzol®, Max Bacterial Enhancement Reagent and the Ribopure™ bacterial RNA extraction kit were used for RNA extractions. The Transgenomic WAVE HPLC system, a ProSwift RP-1S monolith column and buffers prepared at pH 7.0 with acetonitrile and HPLC-grade water were used for nucleic acid analyses. RNase T1, RNase A and DNase I were used for purification and mass mapping of dsRNA. An Accucore C18 column, a U3000 HPLC system and a maXis Ultra High Resolution Time of Flight instrument were used for oligonucleotide analyses. DNA was amplified from the plasmid pCOIV, which contains a 765 bp sequence flanked on both sides by T7 promoter sequences and optimised synthetic T7 terminator sequences. PCR was performed using primers flanking the dsRNA gene under the following conditions: 0.02 U/μl Q5® High-Fidelity DNA Polymerase, 200 μM dNTPs, 0.5 μM each of forward and reverse primer and 10 ng DNA template. The following PCR parameters were used: an initial denaturation of 1 cycle of 30 s at 98 °C, then 30 cycles of 30 s at 98 °C, 30 s at 68 °C, and 30 s at 72 °C | The exploitation of this pathway to block the expression of specific gene targets holds considerable promise for the development of novel RNAi-based insect management strategies. In addition, there are a number of potential future applications of RNAi to control agricultural insect pests, as well as its use for the prevention of diseases in beneficial insects. The potential to synthesise large quantities of dsRNA by in-vitro transcription or in bacterial systems for RNA interference applications has generated significant demand for the development and application of high-throughput analytical tools for the rapid extraction, purification and analysis of dsRNA. |
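To make the template design concrete, the following minimal Python sketch illustrates how a T7 promoter can be prepended to gene-specific primers so that both strands of the amplicon are transcribed, yielding dsRNA. The gene-specific primer sequences shown are hypothetical placeholders, not the primers used in this study; only the minimal T7 promoter sequence itself is the standard one.

```python
# Sketch: appending a T7 promoter to gene-specific primers so that both
# strands of the PCR product can be transcribed, yielding dsRNA.
# The primer bodies below are hypothetical placeholders; only the T7
# promoter itself (TAATACGACTCACTATAG) is the standard minimal sequence.

T7_PROMOTER = "TAATACGACTCACTATAG"

def t7_flanked(primer: str) -> str:
    """Prefix a gene-specific primer with the T7 promoter sequence."""
    return T7_PROMOTER + primer.upper()

forward = t7_flanked("ATGGCTAGCAAGGAGA")   # hypothetical target-specific part
reverse = t7_flanked("TTAGTCGACCTTGGCA")   # hypothetical target-specific part
print(forward)
print(reverse)
```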
in either the sense or antisense strands. However, a number of monoisotopic masses were identified that correspond to unique oligoribonucleotides, including the oligoribonucleotides AAGAUp and GAAGGUp in the sense and antisense strands respectively. The corresponding MS and tandem MS data are shown in supplementary Fig. S6/7, providing further verification of the unique oligoribonucleotides in the sense and antisense strands of the dsRNA. The RNase mass mapping identified all of the theoretical RNase A oligoribonucleotides generated from the 765 bp dsRNA. The RNase A mass mapping of the dsRNA, in conjunction with the tandem MS analysis of a number of unique oligoribonucleotides, provides further evidence for the identification of the corresponding dsRNA sequence. In an approach to improve the sequence coverage of dsRNA using RNase mass mapping, we developed a protocol for RNase T1 mass mapping of dsRNA. Although dsRNA is resistant to RNase T1, it was proposed that denaturing the dsRNA into ssRNA under conditions that retained RNase T1 activity would enable base-specific cleavage of the dsRNA. A range of denaturing reagents, including urea and guanidinium hydrochloride, were used in conjunction with RNase T1 without success. In contrast, the addition of RNase T1 following the heating of dsRNA in the presence of DMSO resulted in efficient cleavage of the dsRNA. Control experiments in the absence of RNase T1 demonstrated that under these conditions the dsRNA is effectively denatured to ssRNA. Following optimisation of the above method, an RNase T1 digest of the 765 bp dsRNA was performed with LC ESI MS analysis. Following LC ESI MS analysis, the oligoribonucleotides identified from the monoisotopic masses obtained were compared to the theoretical monoisotopic masses expected from an in silico RNase T1 digest of the dsRNA. Analysis of the RNase mass mapping data shows that RNase T1 specifically digests the dsRNA on the 3′ side of G. A map of all the identified oligoribonucleotides for both the sense and antisense strands of the dsRNA is shown in supplementary Fig. S8. For clarity, identified short oligoribonucleotides of ≤3 mers are not included. Following RNase mass mapping of the dsRNA using both RNase A and RNase T1, the combined RNase mass map is shown in Fig. 6c.
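As an illustration of the in silico digest described above, the short Python sketch below cleaves an RNA sequence 3′ of every G (RNase T1 specificity) and computes monoisotopic fragment masses, assuming linear products carrying a 5′-OH and a 3′-phosphate and using standard ribonucleotide residue masses. The example sequence is a hypothetical stand-in, not the study's 765 bp dsRNA.

```python
# Sketch of an in silico RNase T1 digest: cleave 3' of every G and compute
# monoisotopic masses for comparison with LC ESI MS data. Assumes linear
# products carrying a 5'-OH and a 3'-phosphate; mass = sum of residue
# masses + H2O. The input sequence is a hypothetical placeholder.
import re

RESIDUE = {"A": 329.05252, "C": 305.04129, "G": 345.04744, "U": 306.02530}
H2O = 18.010565

def rnase_t1_digest(seq: str) -> list[str]:
    """Split an RNA sequence after each G (RNase T1 specificity)."""
    return re.findall(r"[ACU]*G|[ACU]+$", seq)

def monoisotopic_mass(fragment: str) -> float:
    return sum(RESIDUE[base] for base in fragment) + H2O

for frag in rnase_t1_digest("AAGAUGGCUUAG"):
    print(f"{frag}p\t{monoisotopic_mass(frag):.4f}")
```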
Using this combined approach, over 82% and 77% sequence coverage was obtained based on the identification of oligoribonucleotides for the sense and antisense strands respectively. We have developed a range of analytical tools that enable the high-throughput purification and characterisation of dsRNA. In this study we have optimised standard, commercially available TRIzol extractions in conjunction with a single-step protocol to remove contaminating DNA and ssRNA during the purification procedure. In addition, we have utilised and developed IP RP HPLC for the rapid, high-resolution analysis of the dsRNA. This approach enables accurate sizing of the dsRNA and further verification of the purity of the duplex dsRNA product over contaminating rRNAs and the corresponding ssRNAs. These combined approaches enable the high-throughput purification and analysis of a wide range of dsRNAs generated either via bacterial expression systems or by in vitro transcription. In addition, we have developed and optimised RNase mass mapping approaches using RNase A, together with novel methods using RNase T1, to further characterise dsRNA by liquid chromatography interfaced with mass spectrometry analysis. The application of robust analytical methods to rapidly assess product quality following the purification of the dsRNA product from impurities, including contaminating RNAs, combined with methods to characterise and identify the dsRNA products, are important requirements prior to the downstream application of dsRNA for RNAi studies. The development of RNase mass mapping approaches to characterise dsRNA is also hugely important for RNAi applications, as dsRNA is not readily amenable to conventional or next-generation sequencing techniques. NGS approaches generate only short reads and therefore do not readily provide important quantitative characterisation of large dsRNA. | RNA interference has provided valuable insight into a wide range of biological systems and is a powerful tool for the analysis of gene function. Here we have developed analytical methods that enable the rapid purification of dsRNA from associated impurities from bacterial cells in conjunction with downstream analyses. We have optimised TRIzol extractions in conjunction with a single-step protocol to remove contaminating DNA and ssRNA, using RNase T1/DNase I digestion under high-salt conditions in combination with solid phase extraction to purify the dsRNA. In addition, we have utilised and developed IP RP HPLC for the rapid, high resolution analysis of the dsRNA. Furthermore, we have optimised base-specific cleavage of dsRNA by RNase A and developed a novel method utilising RNase T1 for RNase mass mapping approaches to further characterise the dsRNA using liquid chromatography interfaced with mass spectrometry. |
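The sequence coverage figures quoted above can be illustrated with a simple computation: map each identified oligoribonucleotide back to the strand sequence and report the fraction of positions covered at least once. The sketch below is illustrative only; the strand and fragment lists are hypothetical placeholders.

```python
# Sketch: estimating sequence coverage from identified oligoribonucleotides,
# in the spirit of the 82%/77% figures quoted above. Each identified fragment
# marks the positions it spans in the strand; coverage is the fraction of
# positions hit at least once. Sequence and fragments are hypothetical.

def coverage(strand: str, fragments: list[str]) -> float:
    covered = [False] * len(strand)
    for frag in fragments:
        start = strand.find(frag)
        while start != -1:
            for i in range(start, start + len(frag)):
                covered[i] = True
            start = strand.find(frag, start + 1)
    return sum(covered) / len(strand)

strand = "AAGAUGGCUUAGCCGAUAAGGU"
identified = ["AAGAUG", "CUUAG", "AAGGU"]
print(f"coverage = {coverage(strand, identified):.0%}")
```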
Feline calicivirus (FCV) is a common pathogen of cats causing oral and upper respiratory tract disease. It has a single-stranded, positive-sense RNA genome, the plasticity of which is important for antigenic evolution, viral persistence, recombination, and the sporadic outbreaks of highly virulent FCV strains causing severe disease. Despite high levels of variability, FCV strains are generally considered to comprise one diverse genogroup with a radial phylogeny and little evidence for sub-species clustering. This diverse genogroup is mirrored by a single diverse serotype; although individual strains are distinguishable antigenically, they generally show some cross-reactivity, allowing the development of several FCV vaccines based on different antigens. Whilst vaccines reduce clinical signs, none are licensed to reduce virus shedding post-challenge, and FCV infection remains highly prevalent in both vaccinated and unvaccinated populations. Most live vaccines include FCV-F9, whereas inactivated vaccines commonly include strain FCV-255, or a combination of FCV-431 and FCV-G1. These vaccine antigens are chosen based on their ability to induce broadly cross-reactive antisera against contemporary isolates circulating at the time of vaccine development. The widespread use of such vaccines, together with the high adaptability of FCV, raises the theoretical possibility that vaccine-resistant strains may evolve over time. Whilst some studies have supported this hypothesis, others have not. Here we describe the antigenic and genetic relationships between FCV-F9 and a representative panel of currently circulating FCV strains, obtained from randomly selected veterinary practices across six European countries. Ethical approval was from the Veterinary Research Ethics Committee, University of Liverpool. Informed consent was obtained from participating owners. Samples were collected between October 2013 and May 2014 from cats attending veterinary practices in the UK, France, Italy, the Netherlands, Sweden and Germany. In the UK, three Unitary Authorities were randomly chosen from each of the nine regions of England, as well as from Wales, Scotland and Northern Ireland. Geographically remote islands were also selected based on convenience. From each of these 44 regions, a small animal practice was randomly selected from the Royal College of Veterinary Surgeons database. The remaining five countries were chosen based on convenience, divided into five regions based on official divisions and/or local geography, and a single practice randomly selected from each. If chosen practices declined to participate, a further practice was randomly selected. This process was repeated up to three times until a practice in each region agreed to take part. There is much debate regarding the most appropriate FCV isolates to use for assessment of in vitro neutralisation. Several studies have used isolates obtained by convenience from diagnostic laboratories to represent pathogenic viruses; the lack of random sampling means such results may not be generalizable to the wider population. Here we sampled sick and healthy cats randomly to ensure our results are representative of the sampled population. The occasional description of non-pathogenic FCV strains requires us to justify the inclusion of isolates from healthy animals. In this regard, it should be noted that FCV isolates from healthy cats can still be pathogenic: virulent FCV continues to be shed from cats recovered from acute disease, and seropositive cats previously exposed to vaccine or field
virus may shed virus in the absence of clinical signs when subsequently challenged with virulent virus. Indeed, experimental challenge has confirmed that FCV from healthy cats can recreate typical disease. In each practice, veterinary surgeons were asked to collect oropharyngeal swabs from the next 30 or 40 cats presented at their surgery, regardless of the reason for presentation. Random recruitment of practices and random sampling of cats based on attendance at these practices were used to ensure results could be generalised to the sampled population, in contrast to an earlier study by the authors where sampling was by convenience. Swabs were collected into virus transport medium and stored at −20 °C before shipping to the laboratory. The veterinary surgeon and owner were asked to complete a short questionnaire capturing demographic data, vaccination history and information about current respiratory disease, mouth ulcers and chronic gingivostomatitis (CGS). Feline calicivirus was isolated using standard techniques based on the presence of typical cytopathic effect. Samples were only considered negative after two passages. Viral RNA was extracted from positive cell cultures. One negative control was included for every three samples. Reverse transcription was performed using 200 ng random hexamers. A 529-nucleotide region of the capsid gene, equivalent to residues 6406–6934 of FCV-F9 and incorporating immunodominant regions C and E, was amplified according to the manufacturer's guidelines and published protocols using 25 pmol of each primer per 50 µl reaction. In addition, 486 nucleotides from the 3′ end of the FCV polymerase gene were also sequenced as previously described. Amplicons were purified, quantified and sequenced. Forward and reverse sequences were aligned, and pairwise p-distances and neighbour-joining trees calculated using MEGA7. A threshold of 20% uncorrected nucleotide distance was used to define distinct strains. Prevalence estimates with 95% confidence intervals were determined based on the results of virus isolation (VI). Data from questionnaires were used to examine risk factors and associations with FCV carriage. Univariable and multivariable multilevel logistic regression allowing for clustering within practice was conducted using MLwiN. Potential risk factors included country, the cat's age, gender, breed, lifestyle, vaccination status, vaccine strain, neutering status, presence of mouth ulcers, URTD signs, CGS and the number of cats in the household. Variables with P-values <0.25 in initial univariable analysis were considered in the multivariable model, retaining variables with Wald P-values <0.05. Isolates for virus neutralisation (VN) testing were randomly selected with stratification; approximately half were from the UK, the remainder from the other participating countries. There is no approved standard for producing immune reagents for FCV neutralisation studies. Conventional FCV | Background Feline calicivirus (FCV) is an important pathogen of cats for which vaccination is regularly practised. Long-term use of established vaccine antigens raises the theoretical possibility that field viruses could become resistant. Oropharyngeal swabs were requested from 30 (UK) and 40 (other countries) cats attending each practice. Presence of FCV was determined by virus isolation, and risk factors for FCV shedding assessed by multivariable logistic regression. Despite being first isolated in the 1950s, FCV-F9 clustered with contemporary field isolates. The scale and random nature of sampling used gives confidence that the FCV isolates used are broadly representative of
FCVs that cats are exposed to in these countries. |
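To illustrate the strain definition described above, the following sketch computes pairwise uncorrected p-distances and groups isolates into strains at the 20% threshold. Single-linkage clustering (via union-find) is one reasonable reading of the threshold rule, and the sequences are short hypothetical stand-ins for the 529-nucleotide capsid alignments.

```python
# Sketch: pairwise uncorrected p-distance and grouping of isolates into
# strains at the 20% threshold, using single-linkage clustering via
# union-find. Sequences are hypothetical placeholders.

def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def strains(seqs: dict[str, str], threshold: float = 0.20) -> dict[str, int]:
    ids = list(seqs)
    parent = {i: i for i in ids}

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for i in ids:
        for j in ids:
            if i < j and p_distance(seqs[i], seqs[j]) < threshold:
                parent[find(j)] = find(i)   # merge clusters
    roots = sorted({find(i) for i in ids})
    label = {r: n for n, r in enumerate(roots)}
    return {i: label[find(i)] for i in ids}

seqs = {"iso1": "ACGTACGTAC", "iso2": "ACGTACGTAA", "iso3": "TTTTGGGGCC"}
print(strains(seqs))  # iso1 and iso2 share a strain; iso3 is distinct
```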
vaccination induces insufficient neutralisation titres, such that previous studies have used infection with vaccine viruses to produce test sera. This will likely impact on both the quantity and range of any measured immune response compared to vaccination, especially when the tested vaccines often contain inactivated antigens. The plasma used in this study was collected from animals used in a standard vaccine safety study conducted by the funders. Four specific pathogen free cats were vaccinated subcutaneously with 10 commercial doses of Nobivac® TricatTrio at 8–9 weeks of age, and again four weeks later. Blood samples were taken three weeks after the second vaccination. Whilst such a challenge regime will induce a quantitatively higher response than routine vaccination, the antigenic targets of the response should be broadly similar to those of routine vaccination. The plasma from all four cats was used as a pool for all tested isolates, and also separately for 10 randomly selected isolates. Virus neutralisation tests were performed using a constant virus, varying plasma method. Briefly, duplicate serial twofold dilutions of plasma were incubated with 32–320 TCID50 of virus at 37 °C for 1 h before addition to FEA cells, which had been plated 24 hours previously at approximately 1 × 10⁴ cells/well of a 96-well plate. Plates were observed for CPE at 48 h and 120 h. Antibody titres were expressed as 50% end points. An internal FCV-F9 homologous control was included in each experiment. As homologous antibody titres can vary between experiments, between serum from different cats, and depending on the method of challenge, antibody units (AUs) for each isolate were calculated using the titre of this internal control. One antibody unit is the highest plasma dilution neutralizing 100 TCID50 of homologous virus in 50% of cultures. AUs were also calculated using the mean FCV-F9 titre of all experiments, excluding those in which the internal homologous FCV-F9 titre was >2-fold either side of the mean FCV-F9 titre for all experiments. Fifty of the 64 recruited practices returned samples. Of the 2140 samples requested, 1521 were received. A total of 140 of 1521 samples tested positive for FCV (9.2%; 95% CI 7.8, 10.8), ranging from 5.4% in Italy to 16.2% in the Netherlands. Questionnaires were not received for 1.2% of samples and therefore analysis was performed using 1502 questionnaire–sample matches. Nine of twelve predictor variables were significantly associated with FCV isolation in univariable analysis. Of these, five remained significant on multivariable analysis. Cats sampled in France, Italy and the UK were at a lower risk of shedding FCV than those from the Netherlands. Entire cats were 1.7 times more likely to shed FCV than neutered cats, regardless of gender. Cats in multi-cat households were 1.7 and 2.8 times more likely to shed FCV than cats living alone. Cats with CGS were 8.3 times more likely to shed FCV than those without. Finally, each additional year of a cat's age reduced the likelihood of FCV shedding by 12%. Vaccination was not significantly associated with risk of FCV infection in the final model. A total of 128 partial capsid consensus sequences were obtained from the 140 FCV isolates. The failure to amplify some isolates, typical of such experiments, is presumed to be caused by primer mismatches. In total, 110 strains were observed, ranging from 10 to 48. Of these strains, only 10 were represented by more than one isolate. The largest cluster included FCV-F9-like isolates from the UK and Sweden. All other strains with more than one variant
were restricted to individual practices, with no evidence for widespread or international transmission. Similar phylogenetic results were obtained for the polymerase gene. The reproducibility of the VN assays was assessed in two ways. Firstly, 10 field isolates were randomly repeated, giving an average difference in neutralisation titres between repeats of 2.08, comparable to previous studies. In addition, the mean homologous titre for the internal FCV-F9 control across 19 experiments was 1 in 1658 (± 345 standard error). Viral neutralisation was attempted for 121 of the 140 FCV isolates. In total, 98 VN tests were successfully completed; the remaining 23 failed due to an inability to regrow in cell culture, titration failure, or bacterial contamination. Of these 98 FCV isolates, 95 were neutralized at titres ranging from 1:4 to 1:5792. Whilst group sizes precluded statistical analysis, the pattern of neutralisation appeared to be broadly similar when isolates from different clinical presentations were compared. The VN results for the different countries are shown in Fig. 3b. When titres were standardised to homologous FCV-F9 titres derived within individual experiments, 26.5%, 35.7% and 50% of isolates were neutralized by 5, 10 and 20 AUs respectively. When using the same method as described previously, using only those experiments where the titre for the internal FCV-F9 control was within 2-fold of the mean FCV-F9 titre across all experiments, 0%, 20% and 32% of 25 isolates were neutralized by 5, 10 and 20 AUs respectively. In order to analyse the variability of the plasma from the four cats, viral neutralizations with single-cat plasma were undertaken for 10 random field isolates and FCV-F9. The plasma from each cat had demonstrable neutralizing ability against each isolate. However, there was variation in the order of individual cat responses, with some cats' plasma seemingly neutralising some viruses particularly well, and others less well. Widespread use of individual vaccines is associated with a theoretical risk of the emergence of vaccine-resistant strains, particularly for RNA viruses. Here we have undertaken the first multinational European study to assess the current in vitro cross-reactivity of FCV-F9, first isolated over 40 years ago and still one of the most frequently used vaccine antigens. In order to maximise the generalizability of our findings to the European cat | In vitro virus neutralisation assays were performed to evaluate FCV-F9 cross-reactivity using plasma from four vaccinated cats. Risk factors positively associated with FCV shedding included multi-cat households, chronic gingivostomatitis, younger age, not being neutered, as well as residing in certain countries. Plasma raised to FCV-F9 neutralized 97% of tested isolates (titres 1:4 to 1:5792), with 26.5%, 35.7% and 50% of isolates being neutralized by 5, 10 and 20 antibody units respectively. |
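The antibody-unit standardisation described above can be sketched as a small calculation. The sketch below assumes one common convention: given that 1 AU is the highest plasma dilution neutralising 100 TCID50 of homologous virus in 50% of cultures, the number of AUs needed to neutralise a field isolate is the ratio of the homologous (FCV-F9) titre to the heterologous titre. The study's exact convention may differ, and the isolate titres shown are hypothetical; only the mean control titre of 1658 is taken from the text.

```python
# Sketch of antibody-unit (AU) standardisation against an internal FCV-F9
# control. Assumes AUs needed = homologous titre / heterologous titre,
# which is one common convention; isolate titres are hypothetical.

def au_required(homologous_titre: float, heterologous_titre: float) -> float:
    """AUs of FCV-F9 plasma needed to neutralise a heterologous isolate."""
    return homologous_titre / heterologous_titre

F9_CONTROL = 1658  # mean internal FCV-F9 homologous titre reported above

for name, titre in {"isolate_A": 512, "isolate_B": 128, "isolate_C": 64}.items():
    au = au_required(F9_CONTROL, titre)
    status = "within 20 AU" if au <= 20 else "outside 20 AU"
    print(f"{name}: {au:.1f} AU needed ({status})")
```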
population, a cross-sectional survey sampling cats from randomly recruited veterinary practices was undertaken. This approach also provided an opportunity to assess the epidemiology and molecular epidemiology of FCV infection. Consistent with previous studies, cats in multi-cat households, those with CGS, and younger cats were more likely to shed FCV. Chronic gingivostomatitis affects 0.7% of the population, with most affected cats testing FCV positive. Previous studies have shown that FCV prevalence increases from around 10% in single-cat households to over 50% in some larger colonies. These large colonies are believed to drive antigenic diversity as strain variants evolve under positive selection within a variable population immunity. In addition, neutered cats were less likely to test positive for FCV regardless of age. This suggests behavioural changes associated with neutering, such as becoming less territorial, may lower FCV risk, as has also been shown for feline immunodeficiency virus. We also found that cats in some countries had a higher prevalence of FCV infection than those from others. Whether this represents true population differences, or reflects the relatively small sample sizes in some countries, will need to be assessed further. The phylogenetic analysis is broadly in agreement with previous national and international studies, highlighting a radial phylogeny with little evidence for sub-species clustering except for viruses sharing immediate temporal or spatial links. As previously reported, FCV-F9 variants were found in this population, five from the UK and two from Sweden; of the corresponding cats, four had been vaccinated with FCV-F9 attenuated vaccines <25 days prior to sampling, one was unvaccinated for at least three years, and one was a rescue cat that was presumed unvaccinated. The only time such vaccine-derived viruses are not observed is when recently vaccinated cats are excluded from the sampled population. Our findings are consistent with experimental studies showing occasional shedding of vaccine virus following live-FCV vaccination. Looking at the diversity within this FCV-F9 clade, six of the seven strains were <3.6% distant from the published FCV-F9 sequence, suggesting they had not been replicating for long in the cat, consistent with the recent vaccination history of most of these cats. In contrast, a Swedish isolate from a vaccinated cat was 16.9% different to FCV-F9, possibly representing a rare persisting and evolving strain of FCV-F9 or an unrelated strain. Taken together, this confirms that whilst live vaccine viruses are occasionally shed following vaccination, they only seem to have a limited potential to persist in the general cat population. The balance between antibody- and cell-mediated immunity in FCV protection is somewhat uncertain. Some cats exposed to previous FCV antigens show protection to heterologous challenge, even when there are no demonstrable in vitro antibodies to the new challenge, suggesting that other factors, including cellular immunity, contribute to protection. That said, it is still believed that there is sufficient correlation between antibody levels and protection for in vitro virus neutralisation tests to remain the accepted method of assessing cross-reactivity. Therefore, we have used a pool of plasma raised to 10 doses of FCV-F9 vaccine, and demonstrated neutralising activity against the majority of this cross-sectional European panel of contemporary FCV isolates. These results are broadly similar to those observed in a similar cross-sectional study of FCV-F9 strain diversity
in the UK in 2001. When results are expressed as antibody units to try to control for variations in sera production and the between-cat variation, the percentage of isolates neutralized by 20, 10 and 5 AUs was similar to, or higher than, that from the earlier study in 2001. Taken together, this suggests antisera against FCV-F9 remain broadly cross-reactive against recently circulating FCV strain diversity. This is consistent with our observation that, despite its age, FCV-F9 remains an integral part of this contemporary phylogeny, suggesting that FCV may not evolve in a linear fashion, as is typical for other rapidly evolving viruses. These conclusions are in contrast with other studies suggesting the levels of FCV-F9 cross-reactivity have reduced over time. However, two important methodological differences between studies make direct comparisons impossible. Firstly, previous studies used isolates collected by convenience from diagnostic laboratories; these should not be considered representative of those in the general population. Secondly, previous studies have used infection rather than vaccination to produce antisera of sufficient titre for testing; differences in viral replication and antigen presentation between virus replicating locally in the target tissues of the upper respiratory tract, as opposed to the subcutaneous tissues at the site of vaccination, are likely to impact, in as yet unknown ways, on the nature of the ensuing immune response, and this impact is likely to be greatest for viral antigens from inactivated vaccines. Here, for the first time, we used subcutaneous vaccination with a live vaccine, together with a cross-sectional sample of contemporary FCV isolates, to maximise the generalizability of our results. Clearly, these in vitro results cannot be used to infer the rate of cross-protection in the field. To facilitate better comparison between these studies in the future, we recommend the development of an internationally agreed study protocol, as exists for some other viral vaccines. This project was funded by MSD Animal Health, who market a live attenuated FCV vaccine containing the FCV-F9 antigen. | This study aimed to assess the current ability of the FCV-F9 vaccine strain to neutralise a randomly collected contemporary panel of FCV field strains collected prospectively in six European countries. Methods Veterinary practices (64) were randomly selected from six countries (UK, Sweden, Netherlands, Germany, France and Italy). Phylogenetic analyses were used to describe the FCV population structure. Results The overall prevalence of FCV was 9.2%. Phylogenetic analysis showed extensive variability and no countrywide clusters. Conclusions This study represents the largest prospective analysis of FCV diversity and antigenic cross-reactivity at a European level. The in vitro neutralisation results suggest that antibodies raised to FCV-F9 remain broadly cross-reactive to contemporary FCV isolates across the European countries sampled. |
Superficial wounds heal well without the need for surgical interventions. However, in the setting of a deep insult to the skin that destroys most of the dermis, patients do not have the capacity to heal with functional tissue. Thus, the current standard of care following full-thickness (FT) skin injury, such as in severe burns, trauma, complicated soft tissue infections, or surgical resections, is split-thickness skin grafting (STSG). In this procedure, the epidermis and a thin portion of the superficial dermis are harvested from an uninjured area of the patient's body and applied to the open wound after it has been adequately prepared through the removal of necrotic, neoplastic, or infected tissue. This provides effective wound closure, restoration of barrier function, and improved mortality rates among critically injured patients. However, the grafted skin typically remains chronically dysfunctional. These patients experience chronic pruritus, altered sensation, fragility, hypertrophic scarring, and contractures that may lead to reduced range of motion and ultimately to impairments and disability. This scar formation and consequent dysfunction are due to the absence of the complete functional dermal tissue that is not harvested from the donor site in STSG. In healthy intact skin, the dermis provides critical structural and trophic support to the epidermis, as well as housing components of functional cutaneous appendages, such as hair follicles, nerves, blood vessels, and glands. Although various forms of skin grafting techniques have evolved over time, the current surgical procedures have not changed significantly since the adoption of tangential excision and grafting in the 1970s. Attempts have been made to improve the appearance of the resultant scar following STSG through the application of an acellular dermal matrix under an STSG in human patients. Engineered bilayer skin "substitutes", in which cultured sheets of human epithelial cells are combined with dermal fibroblasts, have also shown similarly improved outcomes, highlighting the importance of a neodermis to the restoration of skin function. Despite these advances, the function of the grafted area remains a concern for patients. Dermal fibroblasts are functionally diverse and each population lends distinctly different contributions to the wound healing process. Fibroblasts isolated from the upper dermis consistently display evidence of an anti-inflammatory, pro-regenerative phenotype and the ability to support the epidermis in co-culture, whereas fibroblasts residing in deep layers appear preferentially fibrotic, suggesting the anatomical location of various fibroblast populations might also imply divergent functional and regenerative potential. Our work has demonstrated the existence of a specialized fibroblast, or self-renewing dermal progenitor cell (DPC), residing at the base of the hair follicle that functions to continuously repopulate the mesenchyme and thus enable hair follicle regeneration. These cells can be isolated and expanded in vitro as self-renewing spherical colonies. When transplanted to FT skin wounds, rodent DPCs proliferate to fill the wound with neodermal tissue and, when combined with competent epithelial keratinocytes, are able to induce de novo hair follicles. Based on their unique inherent regenerative capacity in the absence of fibrosis, we hypothesize that autologous human DPCs could be isolated from a small biopsy of intact skin, expanded in culture, then transplanted underneath an autologous STSG in an effort to improve neodermal regeneration and
functional outcomes following STSG in human patients requiring treatment of severe skin injury. To begin to investigate this concept, we have developed a human-to-mouse STSG xenograft model using homozygous nude mice. TdTomato-labeled human DPCs (hDPCs) harvested from adult human scalp were transplanted into a FT skin wound in nude mice and covered by a human STSG. Here, we describe the isolation and characterization of adult hDPCs, the refinement of a xenograft model, and the fate and impact of hDPC transplants within the STSG environment. At 3 months, donor hDPCs had successfully integrated into the grafted region and differentiated into various regionally specified phenotypes. Inclusion of a collagen scaffold greatly improved cell distribution and expansion within the graft. However, when an empty collagen scaffold was included, graft take was negatively affected. This reduction in graft viability was mitigated by the inclusion of hDPCs. Within the graft, transplanted hDPCs generated neodermis that resulted in increased elasticity and reduced itch. Interestingly, the addition of cultured interfollicular dermal fibroblasts (hFs) under an STSG also showed a modest benefit on itch. Adult DPCs were isolated from donor human scalp skin as described previously. Over 7–21 days, floating spherical colonies were observed and allowed to grow until approximately 200–300 μm in diameter before being dissociated and passaged. Following three to five passages, hDPCs expressed proteins consistent with a hair follicle mesenchyme origin, including the extracellular matrix proteoglycans versican and biglycan, and the transcription factors RUNX, SOX2, and PAX1, all of which are enriched in the rodent hair follicle dermal papilla. Although both pro-collagen I and pro-collagen III peptides were expressed by hDPCs in culture, only full-length collagen III was present in spheres. The dermal fibroblast markers fibronectin, α-SMA, FSP-1, and PDGFR-α were also present. Proteins involved in both Wnt and Notch signaling, including NUMB, DLL4, and RSPONDIN 2, were similarly expressed by hDPCs in culture. NESTIN, an intermediate filament protein enriched in mesenchymal cells within the connective tissue sheath that harbors hDPCs in rodents, was similarly identified in isolated hDPCs. Fresh, FT abdominal surgical waste tissue from white female patients aged 34–56 years was collected and STSGs were harvested using an electric Padgett dermatome. These STSGs were then grafted onto 12 mm diameter, FT excisional back skin wounds on immune-deficient adult nude mice, either alone or atop a US Food and Drug Administration-approved collagen III scaffold, and with or without TdT+ hDPCs or TdT+ hFs. Grafts were harvested at 1, 2, and 3 months following transplant for immunohistochemical analysis. The cell-sorting strategy and appearance of TdT+ hDPCs following lentiviral transduction can be observed in Figure S2. Transplanted human DPCs | Following full-thickness skin injuries, epithelialization of the wound is essential. The standard of care to achieve this wound "closure" in patients is autologous split-thickness skin grafting (STSG). However, patients living with STSGs report significant chronic impairments leading to functional deficiencies such as itch, altered sensation, fragility, hypertrophic scarring, and contractures. These features are attributable to the absence of functional dermis combined with the formation of disorganized fibrotic extracellular matrix. Recent work has demonstrated the existence of dermal progenitor cells (DPCs) residing within hair follicles that function to
continuously regenerate mesenchymal tissue. At 3 months, human DPCs (hDPCs) had successfully integrated into the xenograft and differentiated into various regionally specified phenotypes, improving the viscoelastic properties of the graft and mitigating pruritus. |
of toluene might change, affecting the radical chemistry of the attachment. However, as shown above, particle attachment was similar in both the 'scanned' and 'non-scanned' areas, indicating that oxygen had no visible impact. To further verify the obtained results, the attachment of Fe-NPs was also performed ex-situ at lab scale under conditions similar to those in the LP-TEM experiments. A batch of 9 nm Fe-NPs dispersed in toluene was used. As can be seen in Fig. 5C and D, the lab-scale prepared samples also showed the increased affinity of Fe-NPs towards CNF-Ox, both with and without continuous electron beam scanning. This result was also supported by ICP-AES measurements, which were used to determine the iron weight loading on both supports, confirming an iron weight loading of 2.7 wt.% on CNF-Ox and 1.3 wt.% on CNF. As is evident from the extensive literature, the attachment of colloidal particles onto surfaces and their interaction is a very complex process and depends on a number of parameters, including the surface chemistry, the size of the colloidal particles, the presence of ligands and the type of dispersant. The observed differences in the attachment to CNT/CNT-Ox and CNF/CNF-Ox also point to temperature as an important factor, as Fe-NP attachment to CNT/CNT-Ox was previously performed at high temperatures, while in this study we performed the attachment at room temperature. Some studies argued that the attachment of nanoparticles to a carbon support is similar to a ligand exchange process, in which the oleic acid and oleylamine ligands of the nanoparticles compete with the carbon, acting as a new ligand. Other studies ascribed the driving force for attachment of colloidal particles to supports to electrostatic interaction, covalent bonding or Van der Waals interactions between the colloids and the support. While electrostatic interaction is less likely to take place in our system, where a non-polar solvent was used, it cannot be fully discarded, since the toluene was not dried and the presence of water could have altered the surface charge of the CNFs. However, based on previous research, it is more likely that the particles interacted directly with the CNF surface via the partial electron-donor effect suggested by Ritz et al. The exact impact of CNF surface oxidation is hard to discern at this stage. Our study highlights the importance of the surface chemistry of the support to the rate of attachment as well as to the reversibility and the extent of attachment. The presented methodology shows great potential to unravel the effects of temperature, colloidal particle concentration, surface area and the surface structure of other carbon-based supports on colloidal particle attachment. The possibility of imaging large CNTs in liquid by LP-TEM has already been demonstrated. CNTs and CNT-Ox with a diameter and wall thickness of only 13 and 5 nm, for which the opposite affinity in Fe attachment compared to our study was reported, present a greater challenge for LP-TEM experiments. Nonetheless, our preliminary LP-TEM studies showed that by lowering the electron dose rate down to 4.3 e⁻ nm⁻² s⁻¹, selecting the liquid medium in which the CNTs are most stable and minimizing the liquid layer thickness, imaging of these thin CNTs is possible. Even though the inner diameter of these small CNTs could not be resolved, the tubes were clearly visible in LP-TEM despite their size, structure and composition. This is a promising step for further particle attachment studies, and also towards enabling this technique for studying other nanometer-sized light
element-based materials. In conclusion, the attachment of colloidal iron oxide nanoparticles to carbon supports was investigated using liquid phase transmission electron microscopy. First, the stability of both supports was investigated to find optimal imaging conditions that did not cause beam-induced damage during scanning. It was found that in water, probably due to chemically induced radicals, both CNF and CNF-Ox were affected, and the damage was faster with CNF-Ox, likely due to defects caused by the surface-oxidation treatment. Both supports were stable in ethanol and toluene. Finally, CNF, CNF-Ox and the Fe-NPs were all imaged in toluene, in which all components were stable for the designated imaging time and conditions. Using LP-TEM, the dynamics of the Fe-NP attachment to both CNF and CNF-Ox supports were studied in real time. In the in-situ experiments, both under electron beam radiation and without radiation, as well as in the lab-scale experiments, higher loadings of Fe-NPs were observed on CNF-Ox compared to CNF at room temperature in toluene. Furthermore, the attachment to the CNF-Ox material was irreversible, pointing to a stronger interaction and a different mechanism. We were able for the first time to image the attachment of catalyst particles onto a support, and future research will be carried out to investigate the influence of temperature, concentration and the kinetics of particle attachment. LP-TEM investigations of nanoparticle attachment can significantly enhance the understanding of the interactions between nanoparticles and supports. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. | By using liquid phase transmission electron microscopy (LP-TEM), the dynamics of iron oxide nanoparticle (Fe-NP) attachment to carbon nanofibers (CNFs) and oxygen functionalized CNFs (CNF-Ox) were studied in-situ. The beam effect on the stability of the sample in various liquids was examined, and it was found that toluene provided the highest stability and resolution to image both CNF supports and Fe-NPs. Flowing particles dispersed in toluene through the liquid cell allowed direct monitoring of the attachment process at ambient temperature. Using CNF-Ox as a support led to a large extent and irreversible attachment of iron nanoparticles compared to a lower extent and reversible attachment of Fe-NPs to pristine CNF, indicating the influence of surface functionalization on colloidal particle attachment. The results were confirmed by lab-scale experiments as well as experiments performed with the electron beam switched off, verifying the notion that beam effects did not affect the attachment. This study revealed previously unknown phenomena in colloidal particle–support interactions and demonstrates the power of the LP-TEM technique for studying such nanoscale processes. |
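Since beam-induced damage scales with accumulated dose, the dose budgeting implicit in the low-dose imaging described above can be sketched as a simple calculation. The 4.3 e⁻ nm⁻² s⁻¹ dose rate is the value quoted above; the damage threshold used in this sketch is a hypothetical placeholder, not a measured value from the study.

```python
# Sketch: budgeting cumulative electron dose during LP-TEM imaging.
# Dose accumulates linearly as dose_rate * time. The dose rate is the
# value quoted above for imaging thin CNTs; the damage threshold is a
# hypothetical placeholder.

DOSE_RATE = 4.3            # e- nm^-2 s^-1 (from the text)
DAMAGE_THRESHOLD = 5.0e3   # e- nm^-2, hypothetical support-damage limit

def cumulative_dose(seconds: float, rate: float = DOSE_RATE) -> float:
    return rate * seconds

def max_imaging_time(threshold: float = DAMAGE_THRESHOLD,
                     rate: float = DOSE_RATE) -> float:
    """Longest continuous scan before the hypothetical threshold is reached."""
    return threshold / rate

print(f"dose after 10 min: {cumulative_dose(600):.0f} e-/nm^2")
print(f"time to threshold: {max_imaging_time() / 60:.1f} min")
```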
charged NPs show a small secondary energy minimum at around 4 nm, and a repulsive energy barrier at around 2 nm. The energy barrier would hinder attachment to the calcite surface, and any NPs sitting in the secondary energy minimum would be more weakly bound. Again, this is consistent with the poorer capture of positively charged NPs by calcite. While the DLVO results supported the experimental data, XDLVO did not appear to provide a suitable interaction scenario. The addition of Lewis acid-base properties overwhelmed any influence from van der Waals or electrostatic forces, resulting in a significant repulsive force for both negative and positive NPs. Moreover, the XDLVO profiles were identical for both positive and negative nanoparticles and thus unable to discriminate between potential scenarios explaining the differences in capture efficiency. It is likely that the Lewis acid-base properties used for the dextran were not representative of the particular dextran on the NP surface, and thus experimental verification of the Lewis acid-base properties of these specific nanoparticles would be required. Interaction energies were also compared for the large and small negative NPs. The DLVO model shows that attractive forces are greater for the larger NP compared to the small NP. Despite the slightly smaller zeta potential of the larger NP, its larger size facilitated greater attraction to the calcite in the DLVO calculations. Moreover, the larger particle should also generate more viscous drag as it is pushed forward by the growing calcite particle, adding an additional attractive force facilitating uptake of the large NP. However, experimental results showed that slightly fewer large NPs were captured compared to small NPs. Evidently, DLVO-based interaction energies and viscous drag are not sufficient to explain the experimentally observed differences between occlusion of the large and small nanoparticles. It is speculated here that the energy requirements for crystal growth around the larger NP, generating greater crystal distortion and strain, may reduce large NP occlusion. Occluding particles can generate strain, which reduces the driving force for crystallization and causes difficulty in growing the mineral back over the occluded particle. The influence of NP size on crystal growth may thus play a key role in controlling capture efficiency. The bacterium Sporosarcina pasteurii was used in this study as it is the commonly used model organism for investigations of ureolytic calcite precipitation. However, it is notable that ureolytic capability is common in many groundwater and soil organisms, and thus there is a diverse range of organisms that can precipitate calcite via this pathway. The capture of nanoparticles by ureolytic calcite precipitation is thus likely achievable by a diverse range of bacteria. Different species, however, can produce different rates of ureolysis and thus different rates of calcite precipitation. The impact this has on nanoparticle capture is worthy of further investigation. For example, slower-growing calcite crystals might generate less viscous drag on the NP, possibly reducing capture efficiency. Interestingly, different bacteria have been shown to generate calcite precipitates with different morphologies. However, whether these morphological differences might influence NP capture is unknown at this stage. Microbially driven precipitation of calcite is common in the environment. While this study explored ureolytic calcite precipitation, there are numerous other pathways which can precipitate carbonates,
including photosynthesis, denitrification, and fungal metabolic processes. One would expect that calcite precipitation driven by these processes would also have the potential to immobilize NPs through occlusion. While this study has focused upon bacterially driven calcite precipitation, there are numerous other microbially driven mineral precipitation systems which have the potential for occlusion of NPs, such as the oxidation of Mn, the oxidation and reduction of Fe, or the enzymatic precipitation of phosphate minerals. Critically, the variety of minerals that can be precipitated by bacteria display a range of surface charges, enabling them to interact with a wide variety of NP surface charges. Evidently, there is significant potential for the capture of nanoparticles by microbial mineral precipitation, whether occurring naturally or through stimulated approaches for the remediation of nanoparticulate pollutants. The successful identification of nanoparticles trapped in calcite by TEM of FIB-milled samples indicates such an approach could be used to explore nanoparticles trapped in calcite, or indeed any other mineral, from natural settings. Calcium carbonate precipitation is common in a diverse range of environments, including aquifers, diagenetic pore waters, hot springs and the marine environment. Such systems may occlude nanoparticles, and thus might record past natural processes or pollution events. Indeed, the FIB-TEM approach has recently recorded uranium nanoparticles embedded within calcite from a deep aquifer. Moreover, the calcite was proposed to have been microbially precipitated by anaerobic oxidation of organic matter. The implications of nanoparticle/mineral surface charge, DLVO energies and viscous drag explored in this study could be applied to explore nanoparticle entrapment mechanisms for this and other naturally occurring nanoparticle/mineral systems. The results presented here demonstrate that microbially mediated calcite precipitation captured negatively charged NPs, facilitating their occlusion, while positively charged NPs were captured much less successfully. This was likely due to electrostatic attraction between the negative NPs and the positive calcite surface, as supported by DLVO calculations showing stronger attraction to the calcite for negative NPs, but weaker attraction for positive NPs. Analysis by TEM confirmed the NPs had been occluded inside the calcite minerals. kp and Scrit values were broadly similar for all NP types tested, suggesting the NPs were not acting as significant nucleation sites for calcium carbonate and that NP type did not impact the precipitation rate. Overall, these results illustrate the potential of biogenic mineral precipitation to capture NPs in natural systems | Binding of nanoparticles (NPs) to mineral surfaces influences their transport through the environment. The potential, however, for growing minerals to immobilize NPs via occlusion (the process of trapping particles inside the growing mineral) has yet to be explored in environmentally relevant systems. As calcite crystals grew, the nanoparticles in the solution became trapped inside these crystals. Thermodynamic and kinetic analysis, however, did not reveal a significant difference in kp (calcite precipitation rate constant) or the critical saturation at which precipitation initiates (Scrit), indicating the presence of differently charged nanoparticles did not influence calcite precipitation at the concentrations used here. Overall, these findings demonstrate that microbially driven mineral precipitation has potential to immobilize nanoparticles
in the environment via occlusion. |
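The sphere-plate DLVO energy profiles discussed above can be sketched with a short calculation combining a non-retarded van der Waals term, -A*R/(6h), with the Hogg-Healy-Fuerstenau constant-potential electrostatic term. All parameter values below (Hamaker constant, surface potentials, Debye parameter, particle radius) are illustrative placeholders, not those of the study; the sketch simply reproduces the qualitative picture of attraction between a negative NP and a positive calcite surface.

```python
# Sketch of a sphere-plate DLVO energy profile: non-retarded van der Waals
# plus the Hogg-Healy-Fuerstenau constant-potential electrostatic term.
# All parameter values are illustrative placeholders.
import math

EPS = 78.5 * 8.854e-12      # permittivity of water, F/m
KT = 1.381e-23 * 298        # thermal energy at 25 C, J

def dlvo_energy(h, R=50e-9, A=1e-20, psi_np=-0.03, psi_cal=0.02, kappa=1e9):
    """Total DLVO interaction energy (J) at surface separation h (m)."""
    v_vdw = -A * R / (6 * h)                       # van der Waals attraction
    e = math.exp(-kappa * h)
    v_edl = math.pi * EPS * R * (                  # electrostatic double layer
        2 * psi_np * psi_cal * math.log((1 + e) / (1 - e))
        + (psi_np**2 + psi_cal**2) * math.log(1 - e**2)
    )
    return v_vdw + v_edl

for h_nm in (1, 2, 4, 8):
    v = dlvo_energy(h_nm * 1e-9)
    print(f"h = {h_nm} nm: V = {v / KT:+.1f} kT")
```

With opposite-sign potentials, as assumed here, both terms are attractive at all separations, consistent with the strong capture of negatively charged NPs by the positively charged calcite surface; flipping the sign of psi_np produces the repulsive barrier and shallow secondary minimum described for the positive NPs.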
while the largest reduction concerns particulate matter, which is particularly dangerous for human health. The most optimistic scenario in terms of the environment and human health is Scenario III, because the calculations indicate a reduction of more than 35% in particulate matter and a slightly smaller reduction in the other analysed substances. It is necessary to note that the likelihood of each scenario depends on a number of unforeseeable factors. The most important of these include the policy of the state authorities, who can introduce e.g. higher fees for the use of less green cars, and the local authorities, who can ban cars that do not meet, e.g., the Euro 5 and 6 standards from travelling into city centres. In addition, there are many less important issues, which have a much smaller impact on whether the residents of the Lubuskie Province will be willing to buy green cars. An example is the introduction in 2017 of free parking for owners of cars with hybrid and electric drives in Zielona Góra, which is the largest city in the Lubuskie Province. Road transport significantly contributes to the development of the Polish economy and, due to the proximity of the Lubuskie Province to the German economy, one of the world's leading economies, this sector is developing rapidly in the region. Road transport is one of the key factors for economic development, thanks to which trade develops and job opportunities are created outside the place of residence. Unfortunately, road transport is not only energy intensive, but also has a strong impact on the environment. An encouraging sign, which suggests that the problem of low emission from road transport will be limited in the future, is increasing public awareness of this issue. In recent years, local communities in the Lubuskie Province have been able to organize and push the local government towards a significant increase in investment in the construction of infrastructure for pedestrians and, which is particularly noticeable, for cyclists. These actions can significantly reduce car traffic and thus the level of emission from road transport. The activities related to the development of public transport are also very important. Recently it has been possible to observe the development of public transport; among other things, local rail connections have been created. However, these solutions were not effective enough to compete with individual car transport. In this regard, there is still much to be done in the Lubuskie Province. Moreover, in order to reduce low emission from road transport in Lubuskie, it will be necessary to impose restrictions that already exist in many places in Europe. One of them is an entry ban on older cars that do not meet current environmental standards. Another option that could be an effective incentive to purchase newer and greener cars is the introduction of higher taxes on less ecological cars. These solutions are effective and used successfully in Germany. Unfortunately, in Poland the same solutions are likely to encounter public resistance, which is why local and national authorities are afraid to make unpopular decisions. Scenario analysis indicates the possibility of reducing or increasing the level of emissions, depending on the decisions taken by the authorities.
| According to the report of the World Health Organization (WHO), of the 50 cities with the most polluted air in Europe, as many as 33 are located in Poland. All the cities on the list exceed the maximum concentration of dust recommended by the WHO at least three times. In the Lubuskie Province there is a very serious problem with maintaining good air quality. The air in Poland is among the most polluted in the European Union, and this also applies to less-industrialized areas such as Lubuskie, where concentrations of substances hazardous to human health and the environment are recorded as exceeded. One of the main factors affecting the poor air quality in the region is road transport. It is not just a problem near roads with heavy traffic, but also applies to cities, where there is a large volume of car traffic, with cars that are often old and do not meet current environmental standards. This article aims to identify the main sources of low emission from road transport and identify potential solutions to help reduce emission from this sector. Actions aimed at limiting low emission from road transport can bring a significant positive ecological effect. The aim of this article is to review one of the main sources of low emission in the province of Lubuskie, which is transportation. Moreover, the authors indicate the main problems associated with emission from road transport and describe the opportunities to reduce pollution from this sector. In addition, the article presents a three-scenario simulation of annual emissions from passenger cars that could take place in 2020. |
SS3 surface with the exception of the control and SS4 surfaces. The Sa and Sq values obtained were similar to those of the control. However, the Spv obtained was greater than that of the control. The SS4 surface was produced using a high scanning speed of 1000 mm/s, which resulted in a linear surface topography with irregularly clustered hair-like projections. Results from the CSLM demonstrated that, at the micro scale, the SS4 surface was irregular and spiky in appearance. Line profilometry demonstrated the smallest differences in surface features when compared to the control; the surface consisted of narrower linear features, with the maximum peak width, maximum peak height, and maximum valley width and depth confirming the smallest surface features. The Sa and Sq values were similar to those of the control surface, while the Spv was higher than that of the control. When the surfaces were produced at a laser speed of 10 mm/s and a hatch distance of 10 μm, but with a lower laser fluence than for SS5, the surface demonstrated elongated, rounded peaks with irregularly shaped grooves covered by linear hair-like structures. SS5 had a linear pattern of round-topped surface features and demonstrated intermediate values for the peak width, peak height, and valley width and depth of all the surfaces. This was the roughest laser textured surface, with intermediate Sa and Sq values. The Spv value was 6.17 μm. For the micro surface features, surfaces SS1, SS2 and SS5 were significantly different to the other surfaces. AFM was used to determine the nano-features of the laser etched surfaces. The results demonstrated that the nano-features of the SS3 and SS4 surfaces were more rounded in shape, with sharp spike-like peaks, than those of the SS1, SS2 and SS5 surfaces. Moreover, the surface features of the SS2 surface, in terms of peak width and height and valley depth and width, were the largest, while those of SS4 were among the smallest. The control surface demonstrated linear strips with irregularly spaced peaks; it had the lowest peak width (0.09 μm), peak height (0.002 μm), valley width (0.09 μm) and valley depth (0.002 μm). SS1 and SS2 were covered with rounded particles. The particles that covered SS2 were smaller in appearance and less sharp than those of SS1. The peak width and height and valley width and depth of the two surfaces were nearly the same. SS3 and SS4 were covered with irregularly spaced, sharp spiky peaks. The peak width and height and valley width and depth of these two surfaces were not significantly different. However, SS4 had a smaller peak width and height and valley depth and width compared with SS3, and the smallest of all the laser textured surfaces. SS5 was also covered with rounded particles, similar in structure to SS1 and SS2; however, the nanostructure of SS5 was more rounded. Its peak width and height and valley depth and width were smaller compared with SS1 and SS2. The feature sizes of each surface were determined at the macro, micro and nano scales. Regarding the laser generated topographies, it was clear that SS5 demonstrated the largest macro- and micro-scale features. SS1, SS2 and SS5 showed the largest nano features. SS4 had the smallest macro, micro and nano features, with the exception of the control surface. For the nano topographies, surfaces SS1, SS2 and SS5 were overall significantly different to the other surfaces. The physicochemistry of the laser textured surface was
characterised. The control surface demonstrated the greatest γs and γsLW values, which resulted in a less hydrophobic surface. The most hydrophobic surfaces were found to have the topographies with the greatest peak and valley widths. SS2 was the most superhydrophobic surface, with the lowest values for all the surface parameters tested. SS5 was also demonstrated to have the lowest γs and γsLW values. The least hydrophobic surface was SS3, and this surface also demonstrated the greatest γsAB and γs− values of all the surfaces produced. SS4 demonstrated surface characteristics similar to the control, with high γs, γsLW and γs+ values and low γsAB and γs− values. All the results of the lasered surfaces for contact angle, ΔGiwi, γs and γsLW were significantly different from the control. For γsAB, the SS2, SS4 and SS5 surfaces were significantly different from the control; for γs+, SS2, SS3 and SS5 were significantly different from the control; and for γs−, SS2, SS3, SS4 and SS5 were significantly different from the control. Energy dispersive X-ray spectroscopy demonstrated that the chemical composition of the surfaces following laser treatment was as expected and consisted predominantly of iron, with oxygen, nitrogen, chromium and nickel, and some fluorine. Interestingly, the atomic fluorine levels for SS1, SS2 and SS5 were higher than those obtained for SS3 and SS4. Since the X-ray penetration depth was about 1 μm–2 μm, the fluorine was unlikely to be located below the surface of the substrates. For the fluorine at.%, all the surfaces were significantly different to the control. The attachment, adhesion and retention of the bacteria were determined using three different microbiological assays. SEM images of the E. coli bacteria attached to the surfaces following each assay showed the distribution of the small number of remaining cells on the surfaces. It was clear that the bacteria were retained in the grooves. A small number of bacteria were observed on all the surfaces following all the assays. However, it was clear that following the attachment assay the greatest numbers of bacteria were retained on the control surface, then the SS4 surface, whereas the | The Sa and wettability of the surfaces all increased when compared to the control following laser treatment. One surface was demonstrated to be the best antiadhesive surface, which alongside being superhydrophobic (154.30°) had the greatest Sa and Spv (1.16 μm; 6.17 μm) values, and the greatest peak (21.63 μm) and valley (21.41 μm) widths. |
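The surface-energy components discussed above follow the van Oss-Chaudhury-Good relations: the acid-base component is γAB = 2√(γ⁺γ⁻), the total surface free energy is γ = γLW + γAB, and ΔGiwi (the free energy of interfacial interaction between two surfaces immersed in water) quantifies hydrophobicity, negative values indicating a hydrophobic surface. The sketch below illustrates these relations; the water parameters are the standard vOCG reference values, while the test-surface component values are hypothetical placeholders, not measurements from this study.

```python
# Sketch of the van Oss-Chaudhury-Good relations behind the surface-energy
# components quoted above. Water parameters are the standard vOCG values;
# the test-surface component values are hypothetical placeholders.
import math

GW_LW, GW_P, GW_M = 21.8, 25.5, 25.5   # water: LW, acid (+), base (-), mJ/m^2

def surface_energy(g_lw, g_plus, g_minus):
    """Total surface free energy and its acid-base part, mJ/m^2."""
    g_ab = 2 * math.sqrt(g_plus * g_minus)
    return g_lw + g_ab, g_ab

def delta_g_iwi(g_lw, g_plus, g_minus):
    """Free energy of interfacial interaction in water, mJ/m^2."""
    return -2 * (
        (math.sqrt(g_lw) - math.sqrt(GW_LW)) ** 2
        + 2 * (math.sqrt(g_plus * g_minus) + math.sqrt(GW_P * GW_M)
               - math.sqrt(g_plus * GW_M) - math.sqrt(GW_P * g_minus))
    )

g_lw, g_plus, g_minus = 3.0, 0.1, 0.5   # hypothetical electron-donor surface
gs, gab = surface_energy(g_lw, g_plus, g_minus)
print(f"gamma_s = {gs:.2f} mJ/m^2 (AB part {gab:.2f})")
print(f"DeltaG_iwi = {delta_g_iwi(g_lw, g_plus, g_minus):.1f} mJ/m^2")
```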
lowest numbers were retained on SS5. Following the adhesion assays, the greatest number of cells was retained on the control, then SS3, with the least number of cells being retained on SS5. Following the retention assay, the greatest numbers of cells were retained on the control surface, followed by SS1, with the least retained on SS5. There was clearly a significant difference in all the assays for all the laser-treated surfaces when compared to the control surfaces. SS5 showed the lowest numbers of bacteria following all the assays, whilst the control showed the greatest numbers of bacteria. There was a significant difference in the number of bacteria retained on the control surface when compared to the number retained on the laser-etched surfaces for all the assays tested. In this work, the physicochemistry of the surfaces was characterised. The surfaces were treated with FSA in order to stabilise the physicochemistry over time. All the manufactured surfaces were hydrophobic, but there were differences in the degree of hydrophobicity. Taking into account the effect of the laser parameters, it was found that the structures generated using very low speed and/or small hatch distance were superhydrophobic, with water contact angles >150°. These surfaces demonstrated hierarchical structures with increased roughness. This may be a result of the surface topographies trapping more air between the features, thus increasing the hydrophobicity. However, it was found that with increasing hatch distances and/or increasing scanning speed, the hydrophobicity decreased. This might be attributed to the laser parameters: decreased laser beam overlap with increasing hatch distance and increased scanning speed would reduce the accumulated laser fluence irradiating a given area, which in turn reduces the roughness. An understanding of how surface properties affect the attachment, adhesion and retention of bacteria may assist in designing or modifying surfaces to discourage bacterial biofouling. The retention of bacteria on the surfaces depends on several factors, such as surface topography, chemistry and surface wettability. The hierarchical ranges of surface roughness produced in this work showed that bacterial attachment, adhesion and retention were lower for the laser-treated surfaces compared with the untreated surfaces. Overall, SS5 performed the best in all three assays; this surface had the widest peaks and valleys, but it was not the most superhydrophobic. However, it did have the greatest amount of adsorbed FSA. Its structures showed that it had the greatest macro grooves, filled with smaller micro conical features covered by nanoparticles. The surfaces that retained the greatest number of bacteria were different for all three assays. All the surfaces that retained high numbers of bacteria demonstrated the lowest Sa, Sq and Spv values. No single physicochemical parameter was common to all the surfaces that retained the greatest bacterial numbers. Thus, the results suggest that the superhydrophobic properties of a surface are not enough to impede fouling, and such surface parameters need to be used in conjunction with defined, specific surface topographies and chemistries in order to reduce bacterial attachment, adhesion and retention. Several studies have been carried out on the effect of superhydrophobic surfaces on bacterial adhesion using different substrates processed by different methods. Therefore, it is not surprising that
contradictory results have been obtained. Li et al. fabricated bio-mimetic superhydrophobic surfaces with contact angles of 160° on polymer surfaces using nanoimprint lithography techniques and reported that these surfaces were self-cleaning, since E. coli adhesion was reduced by 60%. Our work demonstrated a greater bacterial reduction of 89%, 87% and 82% on the SS5 surface following the adhesion, attachment and retention assays, respectively. Recently, Dou et al. found that the adhesion of bacteria was significantly reduced on superhydrophobic bioinspired hierarchical structures duplicated from rose petal surfaces with a contact angle of ≥150°. Privett et al. also demonstrated that the adhesion of Staphylococcus aureus and Pseudomonas aeruginosa was significantly reduced on a superhydrophobic coating obtained using fluorinated silica colloids. Within this work, the surfaces with the greatest hydrophobicity were not found to be the most antiadhesive to the bacteria. However, SS5 was a superhydrophobic surface with ΔGiwi = −91 mJ/m2. The surfaces produced in this work all demonstrated surface free energy values of between 2.11 mJ/m2 and 24.88 mJ/m2, which were lower than that of the control surface. A low surface free energy has been reported to reduce the adhesion of pathogens, and SS5 had a surface free energy of 3.17 mJ/m2. The polar component of the surface free energy has also been suggested to reduce the adhesion of bacteria when less than 5 mJ/m2; however, in our work all the surfaces, including the control surface, had γAB values of <5 mJ/m2. In our work, the treated surfaces demonstrated generally lower γs+ values than γs− values, confirming electron donor characteristics. This is in agreement with Rubio et al. and Santos et al., who demonstrated that stainless steel was hydrophobic with electron donor characteristics. This work generated hierarchical structures on stainless steel using a picosecond laser. This study showed that the surface roughness, feature geometry, chemistry and physicochemistry all interplayed to affect bacterial attachment, adhesion and retention. The surface that demonstrated the most antiadhesive properties was a hierarchical superhydrophobic surface with the greatest Sa and Spv values and the greatest peak and valley widths. Its structure had the greatest macro grooves, filled with smaller micro conical features covered by nanoparticles, yet it was not the most superhydrophobic. This study revealed that picosecond laser surface texturing is a promising new method for producing different antiadhesive structures which may be useful in a range of applications. | A picosecond laser was used to produce hierarchical textures on stainless steel. Following microbial assays, the work demonstrated that on all the surfaces, following attachment, adhesion and retention assays, the number of Escherichia coli on the laser textured surfaces was reduced. This study showed that the surface roughness, feature geometry, chemistry and physicochemistry all interplayed to affect bacterial attachment, adhesion and retention. Such a modified stainless steel surface may have the ability to reduce specific fouling in an industrial context. |
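The areal roughness parameters used throughout the excerpt above (Sa, Sq, Spv) follow standard surface-texture conventions. As a hedged aid to interpretation, these are textbook (ISO 25178-style) definitions, not equations reproduced from the source article; z(x, y) is the height deviation from the mean plane over the sampled area A:

```latex
S_a    = \frac{1}{A}\iint_A \lvert z(x,y) \rvert \, \mathrm{d}x \, \mathrm{d}y    % arithmetic mean height
S_q    = \sqrt{\frac{1}{A}\iint_A z(x,y)^2 \, \mathrm{d}x \, \mathrm{d}y}         % root-mean-square height
S_{pv} = \max_A z(x,y) \; - \; \min_A z(x,y)                                      % maximum peak-to-valley height
```

On this reading, SS5's combination of intermediate Sa and Sq with the largest Spv (6.17 μm) is consistent with deep grooves superimposed on an otherwise moderate texture.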
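The surface energy notation in the passage above (γs, γsLW, γsAB, γs+, γs−, ΔGiwi) follows the van Oss–Chaudhury–Good acid–base framework. A brief summary of the standard relations is given here as an assumption about the convention used, not as equations quoted from the article:

```latex
\gamma_s = \gamma_s^{LW} + \gamma_s^{AB},
\qquad
\gamma_s^{AB} = 2\sqrt{\gamma_s^{+}\,\gamma_s^{-}},
\qquad
(1+\cos\theta)\,\gamma_l
  = 2\left(\sqrt{\gamma_s^{LW}\gamma_l^{LW}}
         + \sqrt{\gamma_s^{+}\gamma_l^{-}}
         + \sqrt{\gamma_s^{-}\gamma_l^{+}}\right)
```

Here θ is the contact angle of a probe liquid l, and ΔGiwi, the free energy of interaction between two identical surfaces immersed in water, is negative for hydrophobic surfaces, consistent with the value of −91 mJ/m2 reported for SS5.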
NRT with smoker identity, withdrawal symptoms, and attitudes towards non-combustible nicotine delivery devices. This cross-sectional study forms part of a larger, international study assessing the impact of long-term use of non-combustible nicotine delivery devices on health. The present study, which focuses on psychological measures collected only in the UK sub-sample, also involved the collection of biological samples as well as administration of a questionnaire at a single laboratory appointment lasting approximately 30 min. Smokers and ex-smokers using either EC or NRT on a long-term basis of at least six months were purposively recruited, resulting in four groups of participants: current and ex-smokers using NRT, and current and ex-smokers using EC. Participants were screened into these four naturally occurring groups to allow for comparisons between EC and NRT use, and between smoking statuses. Participants were reimbursed for time and travel. The study received ethical approval from the University College London Ethics Committee. Participants were told that this study was about the effects of long-term use of non-combustible nicotine delivery devices and were recruited in the greater London, UK area during January–July 2014 using various recruitment methods to access a diverse sample. These included adverts in newspapers, Facebook, online electronic cigarette forums, posters in independent pharmacies, emails to students and staff at UCL, and the use of an online smokers' panel as well as marketing companies. Participants were screened for eligibility via phone or online questionnaires. Inclusion criteria were based on long-term product use in order to control for a noted learning curve in effective EC use. Ex-smokers had to have quit any tobacco products for six months, have used their non-combustible nicotine delivery device weekly for the past six months, and not use other non-combustible nicotine delivery devices regularly. Smokers had to smoke an average of one cigarette per day and meet the same non-combustible nicotine delivery device use criteria as ex-smokers. Current smoking status was verified using a breathalyser to assess expired-air carbon monoxide; readings above 10 ppm indicated current smoking. Due to the collection of biological samples, participants were excluded if they were younger than 18 years old, had a history of heart or lung disease, were pregnant, or had bleeding gums, illness, or infection within 24 h of their scheduled appointment. Thirty-six participants were recruited into each of the four study groups, which provided sufficient power to detect a medium-sized effect on outcome measures. Data for all participants are provided in Table 1. Based on work underlining the validity of simple measures of smoker identity, the present study used an established item to determine smoker identity strength: participants were asked to rate their agreement with the statement ‘Smoking is a part of me’ on a Likert scale of 1 to 5. Withdrawal symptoms were assessed with the validated Mood and Physical Symptoms Scale, which assesses cravings and other general mood and physical symptoms related to withdrawal. Attitudes towards NRT or EC were assessed with three measures. Intention to stop product use was measured using a modified version of the motivation to stop scale, replacing the term ‘cigarette’ with ‘e-cigarette’ or ‘NRT’, with higher values indicating greater motivation to stop use. Using 5-point Likert scales, participants were further asked whether they found the product helpful in enabling them to
refrain from smoking, with response options ranging from ‘not at all helpful’ to ‘extremely helpful’, and whether they would recommend the product to a friend who wanted to stop smoking, with response options ranging from ‘definitely not’ to ‘definitely’. Standard socio-demographic and smoking characteristics, including age, sex, ethnicity, education, length of current/past smoking, current or past cigarettes smoked per day, cigarette dependence, motivation to stop, and number of quit attempts, were also measured. In addition, a number of product use characteristics were assessed, such as length and frequency of product use. Participants were asked to indicate the length of use, the latency to first use of the product in the morning (as an indicator of dependence), and consumption. The latter was assessed by asking NRT users to indicate the strength and type of the product used and the quantity used per day, week, or month. EC users were also asked about the type of product they used; those using first-generation EC and those using second- or third-generation EC were asked to indicate, respectively, either the nicotine content of the disposable/cartridge or the concentration of the e-liquid used, as well as the quantity used per day, week, or month. Please refer to the supplementary information for the full questionnaire. Analyses were conducted with SPSS Version 21.0. Simple associations between study groups and continuous demographic variables, smoking characteristics, and product use characteristics were assessed with one-way ANOVAs or independent t-tests, and categorical variables were assessed with chi-square analysis, controlling the false discovery rate across tests and correcting for multiple comparisons using the Sidak adjustment in post hoc analysis. Generalised linear models were used to assess main and interaction effects of product use and smoker status on smoker identity, withdrawal symptoms, and attitudes towards the product, controlling for relevant covariates (a minimal sketch of such a model is given below). Compared with the UK general population, the present sample was younger, and more likely to be white, male, educated, and cohabiting or married. Cigarette consumption reflected national data; participants had smoked for nearly 20 years on average and had used either NRT or e-cigarettes for about one-and-a-half years. Ex-smokers had also stopped for about one-and-a-half years and had significantly lower levels of CO than current smokers. The four groups were balanced across the majority of socio-demographic and smoking characteristics measured. However, there were significantly more male smokers using EC than smokers using NRT, and more cohabiting smokers using
NRT than EC. As would be expected, current cigarette consumption was lower among dual users than the past cigarette consumption reported by ex-smokers, and four out of five smokers reported trying to cut down cigarette consumption. Amongst smokers, NRT users reported having made more recent quit attempts and being more motivated to quit than EC users. While there were no differences in terms of the length of product use, EC users reported greater daily nicotine consumption than NRT users, and ex-smokers generally had a shorter latency to product use in the morning than smokers. In order to control for possible confounding influences, all socio-demographic, smoking, and product use characteristics with data available for all groups were included as covariates in further analysis. Generally, a stronger smoker identity was associated with greater current or past cigarette consumption (Wald χ2(1) = 4.6, p = 0.031) and with being female (Wald χ2(1) = 7.6, p = 0.006). However, there was no interaction of smoking status by product type (Wald χ2(1) = 1.1, ns). As would be expected, there was a main effect of smoking status on smoker identity (Wald χ2(1) = 29.5, p < 0.001): current smokers expressed a stronger smoker identity than ex-smokers. There was also a main effect of product type (Wald χ2(1) = 3.9, p = 0.048): smoker identity was more pronounced among EC users than NRT users, irrespective of smoking status and other covariates. In terms of withdrawal symptoms, higher current/past cigarette consumption (Wald χ2(1) = 8.7, p = 0.003) and being female (Wald χ2(1) = 4.5, p = 0.034) were associated with more pronounced mood and physical withdrawal symptoms in this sample. In addition, there was a significant interaction of product type and smoking status (Wald χ2(1) = 6.1, p = 0.014). As shown in Fig. 1B, while there was no product-dependent difference among smokers, ex-smokers who use NRT reported higher mood and physical withdrawal symptoms than ex-smokers using EC. These findings are largely mirrored when looking at reported cravings. As before, there was a significant product type by smoking status interaction, such that NRT use was associated with greater cravings only among ex- but not current smokers (Wald χ2(1) = 8.5, p = 0.003, Fig.
1C). In addition, the results indicated that lower product use was associated with stronger cravings (Wald χ2(1) = 6.8, p = 0.009). Non-white participants in this sample were more likely to consider stopping the use of non-combustible nicotine delivery devices (Wald χ2(1) = 6.2, p = 0.013), as were those participants who had used the products for longer (Wald χ2(1) = 9.4, p = 0.002). In addition, there was also a clear product type by smoking status interaction on intention (Wald χ2(1) = 17.6, p < 0.001): whilst NRT users were generally more likely to intend to stop using their product than EC users, this difference was significantly stronger among ex-smokers than smokers. Similarly, product type interacted with smoking status on the perceived helpfulness of the product (Wald χ2(1) = 4.8, p = 0.028). ECs were generally rated as more helpful for keeping off cigarettes than NRT but, again, this difference was significantly stronger among ex-smokers than smokers. Lastly, there were main effects of product type (Wald χ2(1) = 4.6, p = 0.032) and smoking status (Wald χ2(1) = 5.1, p = 0.024) for recommending the product to others, but no interaction (Wald χ2(1) = 0.4, ns). EC users and ex-smokers were significantly more likely to recommend the product as an aid to smoking cessation than NRT users or current smokers. The long-term use of e-cigarettes, compared with licensed NRT, by ex- and current smokers is associated with a stronger smoker identity and product endorsement. Among ex-smokers only, EC use as compared with NRT use is associated with lower withdrawal symptoms, greater perceived helpfulness of the product for stopping smoking, and weaker intention to stop product use. As in this study, previous work suggests that smoker identity may play a role in product use and smoking status. Given that e-cigarette users had a stronger smoker identity than NRT users irrespective of whether they smoked or had stopped smoking, the present results support the common-sense assumption that ECs have a particular appeal for those who identify more strongly with smoking. This may be due to a greater similarity between smoking cigarettes and vaping, and could also reflect the possibility that EC may be viewed as a consumer product for recreational use whereas NRT is seen as a medicinal product for treatment purposes. Alternatively, it could be that EC and NRT users do not differ initially, but that EC use sustains smoker identity or that NRT use undermines this identity over time. This cross-sectional study cannot distinguish these possibilities. Nicotine craving and mood and physical withdrawal symptoms were virtually non-existent among ex-smokers using EC and significantly lower than among ex-smokers using NRT. While previous research indicates both NRT and EC can be useful for cessation and harm reduction purposes, our study suggests that in experienced users EC may be especially effective at reducing nicotine withdrawal. Given that this sample comprises long-term users, this effect is unlikely to be the result of incorrect product use. Notwithstanding the adjustment for smoking characteristics in the analysis, this result may also again reflect self-selection. ECs were rated as more helpful for stopping smoking than NRT by ex-smokers using these products. EC users, in particular ex-smokers, were consequently less likely than NRT users to intend to stop using the product. In addition, motivation to stop smoking and the number of past-year quit attempts were greater among smokers who concurrently used NRT than EC. Taken together, these findings are consistent with a gradual transition towards a non-smoker identity among long-term NRT users
who smoke, and reflect a possible identity shift among ex-smokers which may involve the desire to be free | Results: Adjusting for relevant confounders, EC use was associated with a stronger smoker identity (Wald X2(1)=3.9, p=0.048) and greater product endorsement (Wald X2(1)=4.6, p=0.024) than NRT use, irrespective of smoking status. Among ex-smokers, EC users reported less severe mood and physical symptoms (Wald X2(1)=6.1, p=0.014) and cravings (Wald X2(1)=8.5, p=0.003), higher perceived helpfulness of the product (Wald X2(1)=4.8, p=0.028) and lower intentions to stop using the product (Wald X2(1)=17.6, p<0.001) than NRT users. Conclusions: Compared with people who use NRT for at least 6 months, those who use EC over that time period appear to have a stronger smoker identity and like their products more. |
of any nicotine products. On the one hand, the findings are encouraging insofar as they suggest that EC could be a powerful harm reduction tool, at least as effective as established NRT, and that they may be particularly helpful in engaging those smokers who are not motivated to quit and/or strongly identify as smokers. Indeed, it has been reported that a minority of long-term ex-smokers maintain a strong smoker identity. For these ex-smokers in particular, EC may enable complete substitution of combustible with non-combustible nicotine delivery devices. On the other hand, it is possible that if a stronger smoking identity is maintained by EC use, this may undermine long-term outcomes, as establishing a firm non-smoker identity may be important for resisting relapse to smoking. However, such speculations need to be tested in experimental designs. This study has several limitations which restrict the conclusions that can be drawn. First, although diverse recruitment methods were used, the sample was purposively selected and thus the findings may not generalise to the general population. However, relevant confounders were controlled for to reduce selection bias, and the distribution of participant characteristics was roughly similar to those found in larger, broadly representative studies of non-combustible nicotine delivery devices. Second, due to the cross-sectional design, it is not possible to determine the direction of the association between product choice and outcome variables, as these may be due to self-selection. While a prospective design would be preferable, given the relative novelty of EC and the associated lack of data on this topic, we chose this pragmatic design to pinpoint important associations with long-term use now which can be investigated further in longitudinal studies. Third, although smoking status was verified and validated self-report measures were used, these may not fully capture complex concepts such as smoker identity. In light of these findings, future research should continue to explore and clarify the association of smoker identity, withdrawal symptoms and attitudes towards the products with long-term use of NRT and EC among smokers and ex-smokers. In particular, it would be important to establish whether smoker identity and intentions to stop nicotine use influence product choice, or whether product use impacts the strength of smoker identity and the decision to stop nicotine use completely. Notwithstanding the potential for self-selection bias given the cross-sectional nature of the data, the observed interactions of product use with smoking status are consistent with EC being a particularly suitable harm reduction tool for switching smokers from combustible tobacco to permanent non-combustible nicotine use, whereas NRT may be more suitable as a harm reduction tool in the short-to-intermediate term. In conclusion, long-term EC use is associated with a stronger smoker identity and more positive attitudes towards the product than long-term NRT use. ECs are generally perceived as more helpful than NRT for stopping smoking by ex-smokers and may be more effective at reducing withdrawal symptoms. Based on self-reported intention to stop product use, NRT compared with EC may also be more likely to result in complete cessation of nicotine among long-term users who have stopped smoking. We are grateful to Cancer Research UK for funding the study. E.B., J.B., A.Mc., L.S. and R.W. are members of the UK Centre for Tobacco and Alcohol Studies. K.S.
is funded by a CRUK Lynn MacFadyen Scholarship. J.B.'s post is funded by the Society for the Study of Addiction. E.B. is funded by CRUK and the National Institute for Health Research's School for Public Health Research. The views are those of the authors and not necessarily those of the funders. The funders had no involvement in the design of the study, collection, analysis or interpretation of the data, the writing of the report, or the decision to submit the paper for publication. L.S. has received a research grant and honoraria for a talk and travel expenses from Pfizer, manufacturer of smoking cessation medications. M.L.G. received a research grant from Pfizer, manufacturer of smoking cessation medications. J.B. and E.B. have both received an unrestricted research grant from Pfizer to study population trends in smoking. R.W. has received travel funds and hospitality from, and undertaken research and consultancy for, pharmaceutical companies that manufacture or research products aimed at helping smokers to stop. V.N. and K.S. have no competing interests. L.S. conceived this study and contributed to the write-up. L.S. takes full responsibility for the integrity of the data and the accuracy of the data analysis. V.N. had full access to all the data in the study and wrote the initial draft. K.S. and V.N. collected the data, and M.G., E.B., J.B. and R.W. contributed to the write-up of the manuscript. | Background: Electronic cigarettes (ECs) and nicotine replacement therapy (NRT) are non-combustible nicotine delivery devices being widely used as a partial or a complete long-term substitute for smoking. This study aimed to provide preliminary evidence on this and compare users of the different products. Among long-term users who have stopped smoking, ECs are perceived as more helpful than NRT, appear more effective in controlling withdrawal symptoms and continued use may be more likely. |
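The interaction analyses reported in the study above (generalised linear models with Wald chi-square statistics) can be sketched in a few lines. The following is a minimal, hypothetical illustration: the file name and column names (mpss, product, status, sex, cigs_per_day) are assumptions for demonstration, not the study's actual variable names:

```python
# Hypothetical sketch of a product-type x smoking-status GLM with Wald tests,
# mirroring the "Wald X2(1), p" statistics reported in the text.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ec_nrt_participants.csv")  # hypothetical data file

# Gaussian GLM of withdrawal symptoms (MPSS score) on product type,
# smoking status, their interaction, and two example covariates.
result = smf.glm(
    "mpss ~ product * status + sex + cigs_per_day",
    data=df,
    family=sm.families.Gaussian(),
).fit()

# Wald chi-square test for each model term, including the interaction.
print(result.wald_test_terms())
```

A significant `product:status` term would correspond to the reported pattern in which product-type differences appear among ex-smokers but not current smokers.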
with the accumulated local knowledge. Similarly, local knowledge of place-based interdependencies causally linked smaller and wider scales, but not as widely, geographically speaking, as spatial planning knowledge did. Interdependencies that connected parts of the urban infrastructure were evident to all actors. Community actors tended to see only one or two points on the smaller scale, and mostly perceived them through social processes and communicated about them in terms of quality of life. By contrast, planners considered social impacts across a wider space and in terms of urban processes. In order to relate to local knowledge of interdependencies, planners had to ‘translate’ the lay input. For example, the smaller-scale view could be interpreted as a sign of the relative importance of links between places, or pieced together into a larger picture of how parts of the area worked together as a single, functional unit. These reflections suggest that the local knowledges of communities have strong reframing power for spatial planning, being policy-holistic, multi-dimensional and experiential in nature. As demonstrated in this case, the institutions of planning that seek out learning with communities for spatial strategy-making can rework their thinking about the identity of their collaborative groups and develop their understanding of space in several ways as a direct result of including local knowledge. It is clear that holistic social details support joined-up policy thinking and that very local, site-specific issues contain explanatory power for inter-scalar connections. Lived space has strong learning value in that it counters abstraction and sheds light on priorities and assets; likewise, local knowledge helps explain spatial interdependencies. The challenge lies in relating the different spatial knowledges and communicating with an appreciation of the different approaches to validity and accuracy. These conclusions suggest that there is a need for greater awareness in planning of the potential for socio-spatial learning in the arena of public participation, and increased attention to its internalisation within the longer-term memories of planning institutions. Recognising the value of lay input in this way may also foster ongoing communicative processes, by enabling public scrutiny and deliberation over scalar legitimacy, and help to build trust in deliberative processes over the longer term.
| This monograph looks at experiences of communities with spatial planning and applies those empirics to an underexplored area of participatory theory. While issues of power and communication have been well examined, this work rests on the argument that the associated production of knowledge needs to be better understood. Theories of engagement draw on issues of ‘voice’ and the means to achieving deeper democracy. Similarly, participatory planning theories frame the debate in terms of communicative processes or competing rationalities. Within that body of work, however, knowledge is seen as an adjunct of power and there is little focus on the spatial particularity of knowledges. In particular, there has not as yet been a thorough study of how understandings of space are produced in a spatial planning context that includes lay participants. This monograph starts to broach that gap, conceptualising a potential ‘socio-spatial learning’ where community engagement is framed as a collaborative learning arena within spatial planning. Through an English case study it unpacks the dynamics between different types of knowledge around spatial planning where there is lay participation. This draws on two years of embedded observation within a joint planning unit and a review of the North Northamptonshire Core Strategy of 2008, which culminated in substantial community engagement work early in 2011. Findings indicate that local knowledge has a distinctive spatiality and that there is a clear role for lay knowledge in the context of spatial strategy-making. It is hoped that this work can help in understanding the production of planning knowledge, help identify non-tokenist engagement of the public, and inform interactions between communities and policy makers. |
In this article the data analyses of leaf count and rosette diameter for three AtFTOE lines compared with WT Arabidopsis plants are presented. Data corresponding to differential expression in the AtFTOE 2.1 line vs WT Arabidopsis are visualized by Mapman. Some data corresponding to down-regulated genes are presented in Table 3. The FT transgene was amplified by PCR from three AtFTOE lines. Droplet digital PCR (ddPCR) was employed to determine transgene copy number variation (a worked example of the underlying quantification is given below). As template, 2.5 ng of genomic DNA previously digested with HindIII was used. Droplets were generated for the PCR reaction with the specific primers AtFT-qPCR, FT-qPCR and the TaqMan probe ddPCRFT TCCTGAGGTCTTCTCCACCA. The 152 bp PCR-amplified product of Arabidopsis HMGB1 was used as an internal reference gene. The primers used as reference were the HMGB1 probe AGGCACCGGCTGAGAAGCCT, HMGB1-F and HMGB1-R. The HMGB1 PCR-amplified product was 96 bp. After cycling, the PCR nanodroplets were counted using the droplet reader of the Bio-Rad QX100 system. Three AtFTOE lines and the WT Arabidopsis Columbia-0 ecotype were employed. Seeds were stratified, kept at 4 °C for 3 days in the dark, and then germinated and grown in a hydroponic system at 22 °C under controlled conditions, initially in short days; after 21 days the seedlings were transferred to a long-day photoperiod under 100–120 μmol m−2 s−1. Plants grown under these conditions were used for determining rosette leaf number by individual counting, and rosette diameter was quantified by image analysis using the software ImageJ. Quantitative data were statistically analyzed with ANOVA and Student's t-test. For RNA sequencing by Illumina HiSeq, total RNA from rosette leaves of 35-day-old plants of the AtFTOE 2.1 line and Wild Type was used. RNA was extracted with the RNeasy Plant Kit. The RNA-seq experiments were conducted with RNA isolated from three biological replicates for the AtFTOE 2.1 line and two biological replicates for WT, with accession numbers SRR2094583 and SRR2094587 respectively. Illumina sequencing was performed at Otogenetics. Illumina HiSeq sequencing with PE50 yielded 20 million reads per replicate. The raw data files are available at the National Center for Biotechnology Information Sequence Read Archive, accession numbers SRR2094583 and SRR2094587 for AtFTOE replicates 1–3 and AtWT control replicates 1–2 respectively. The paired-end reads were aligned to the reference Arabidopsis genome using TopHat and Bowtie2. The reference Arabidopsis genome and gene model annotation files were downloaded from the Illumina iGenomes. Differential expression was determined by Cufflinks as described by Trapnell et al. and then visualized with CummeRbund, an R package. To analyze the variation in expression between the two WT replicates and the three AtFTOE replicates, the absolute difference of the log2 fold change was calculated and adjusted to P-value ≤0.05. In order to visualize differential expression we used the Mapman tool, mapping to the Mapman databases using raw data of differential expression (log2 fold change, adjusted to P-value ≤0.05). Primer design was performed with OligoArchitect™ Primer and Probe Design Solutions. Gene sequences were obtained from the TAIR database.
| In this dataset we integrated figures comparing leaf number and rosette diameter in three Arabidopsis FT overexpressor lines (AtFTOE) driven by the KNAT1 promoter, "A member of the KNOTTED class of homeodomain proteins encoded by the STM gene of Arabidopsis" [5], vs Wild Type (WT) Arabidopsis plants. Also presented in the tables are some transcriptomic data obtained by RNA-seq Illumina HiSeq from rosette leaves of Arabidopsis plants of the AtFTOE 2.1 line vs WT, with accession numbers SRR2094583 and SRR2094587 for AtFTOE replicates 1-3 and AtWT control replicates 1-2 respectively. Raw data of paired-end sequences are located in the public repository of the National Center for Biotechnology Information of the National Library of Medicine, National Institutes of Health, United States of America, Bethesda, MD, USA as Sequence Read Archive (SRA). The performed analyses of differentially expressed genes are visualized by Mapman and presented in figures. "Transcriptomic analysis of Arabidopsis overexpressing flowering locus T driven by a meristem-specific promoter that induces early flowering" [2] describes the interpretation and discussion of the obtained data. |
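The ddPCR copy-number determination described above rests on Poisson statistics over droplet counts. The sketch below is illustrative only: the droplet counts are invented, and the nominal droplet volume is an assumption about the QX100 system rather than a value taken from the article:

```python
# Illustrative ddPCR quantification: lambda = -ln(1 - p) corrects for
# droplets that contain more than one template copy.
import math

DROPLET_VOLUME_UL = 0.85e-3  # ~0.85 nL per droplet, expressed in uL (assumed)

def copies_per_ul(positive: int, total: int) -> float:
    """Estimate target concentration (copies/uL) from droplet counts."""
    p = positive / total            # fraction of positive droplets
    lam = -math.log(1.0 - p)        # mean template copies per droplet
    return lam / DROPLET_VOLUME_UL

# Example with made-up counts: FT transgene vs the diploid HMGB1 reference.
ft_conc = copies_per_ul(positive=1400, total=15000)
ref_conc = copies_per_ul(positive=900, total=15000)
copies_per_genome = 2 * ft_conc / ref_conc  # x2: two HMGB1 copies per genome
print(f"estimated FT copies per genome: {copies_per_genome:.2f}")
```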
high neutron and gamma radiation shielding performance were designed and produced. Nickel was added to enhance heat resistance and radiation shielding properties. Chromium was added to increase resistance to rust and to raise the melting point, tensile strength and radiation shielding capacity. Chromium combines with carbon to form hard carbides such as Cr7C3 and Cr23C6, which increase the hardness of the steel. Molybdenum was added to the mixture to raise the impact strength of the steel after annealing. In the production process, vanadium, cobalt, tungsten, tantalum and titanium were added at certain rates to the steel compound to increase hardness, impact resistance and radiation shielding ability. A hydraulic press and sulfuric acid were used to test some mechanical and abrasion properties of the samples. To perform the abrasion test, the samples were immersed in sulfuric acid for 48 h. Using a microscope, no colour or surface changes were observed for the samples, which means that the produced steel samples do not interact with the acid. The total macroscopic cross section is the most important parameter for neutron radiation shielding. The MCSs obtained using the GEANT4 simulation code are presented in Fig. 6. The higher the total macroscopic cross section of a material, the greater the probability of the target material interacting with neutron particles; it is well known that neutrons can only be slowed down by collisions. It is clear from Fig. 6 that the MCSs calculated by GEANT4 for the novel stainless-steel alloys are higher than that of the 316LN stainless steel most commonly used in nuclear applications. The experimental equivalent dose rates absorbed by a BF3 proportional detector are shown in Fig. 7. A low equivalent dose rate means that the sample is a good neutron shielding material because of its high dose absorption. So, according to the simulation and experimental results, the neutron shielding ability of the prepared samples is higher than that of the 316LN reference steel. Experimental mass attenuation coefficients, measured with their standard deviations at photon energies of 160, 223, 276, 302, 356 and 383 keV, and the GEANT4 Monte Carlo simulation results at the same energies are given in Table 3 and Fig. 8. In general, the experimental results were found to be in good agreement with those obtained by GEANT4. It is seen from Fig. 8 and Table 3 that the radiation shielding capacities of all the produced stainless steel alloys are higher than that of the reference stainless steel. The mass attenuation coefficient is an important parameter used to describe gamma shielding design. This indicates that the three newly produced high-alloyed stainless-steel samples have high gamma absorption capacity when compared to 316LN steel. In particular, the SSA1 steel has the highest absorption value among the new steels; the doped elements Ta and W in the SSA1 steel give rise to better radiation absorption than in the other steels, while the percentages of the other added elements are the same as in the SSA2 and SSA3 steels. HVL is defined as the thickness required to reduce the incident photon intensity by a factor of 1/2. Fig.
9 shows the HVL of the newly produced stainless steels compared with the 316LN reference steel. The smaller the HVL of a material, the greater its shielding effectiveness (the governing relations are summarised below). It is clearly seen that all SSA steels have a smaller HVL than the reference steel, and SSA1 has the smallest HVL value at all the examined energies. In addition to the mass attenuation coefficient results, the HVL values show that the absorption of the newly produced steels is much better than that of the reference steel. New high-alloyed stainless steels reinforced with nickel, chromium, molybdenum, manganese, copper, titanium, tungsten, tantalum and vanadium were produced, with the content determined using the GEANT4 simulation program. The mechanical and chemical tests identified that they have excellent properties. The total macroscopic cross sections of the stainless steel alloys are higher than that of 316LN nuclear steel. The dose rate was small when the new stainless steel samples were placed between the source and detector, and higher when the 316LN nuclear steel reference sample was used; according to this result, 316LN nuclear steel absorbed less neutron radiation than the presently prepared samples. A neutron radiation shielding material must also provide good gamma radiation shielding, and the three produced stainless steel alloys appear to carry these characteristics. Mass attenuation coefficients and half-value thicknesses were measured for gamma radiation at six different energies. The mass attenuation coefficients of the present high-alloyed stainless steels were higher than those of 316LN nuclear steel at the energies under investigation. On the other hand, the half-thickness values of the present samples were lower than the HVL of 316LN steel, indicating better gamma radiation absorption for the prepared samples. Compared with the 316LN stainless steel used in nuclear applications, both the experimental results and those obtained by GEANT4 showed that all the samples under study have higher absorption of gamma rays and neutrons. These newly produced high-alloyed stainless steels can be used to prevent gamma and neutron radiation leaks in nuclear applications such as nuclear environments, nuclear power plants and nuclear waste repositories. It was shown that the new high-alloyed stainless steels can be used as an alternative to 316LN steel, especially in radiation shields. It is expected that the high-alloyed stainless steels produced in this study, which have high absorption ability for 4.5 MeV fast neutrons and gamma rays, will contribute to radiation shielding technology and can be used in practice.
| Before the production, the GEANT4 Monte Carlo simulation toolkit was used to estimate the total fast neutron macroscopic cross sections and gamma mass attenuation coefficients. We tested the samples' chemical and mechanical strength. Samples were exposed to both gamma rays and fast neutrons. The obtained simulation and experimental results for both neutron and gamma radiation are compatible. According to the simulation and experimental results, the neutron shielding capacity of the new stainless-steel alloys is higher than that of the 316LN stainless steel most commonly used in nuclear applications. Among the prepared samples, SSA1 steel has the smallest half value layer at all the examined energies. All the prepared samples possess higher mass attenuation coefficient values and a lower half value layer than 316LN steel. This indicates that the produced three new high alloyed stainless-steel samples have high gamma absorption capacity when compared to 316LN steel. |
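The shielding quantities compared in the excerpt above are linked by standard attenuation relations; the following is a textbook summary, not a set of equations reproduced from the article:

```latex
\Sigma_t = \sum_i N_i \, \sigma_i                    % total macroscopic cross section
\qquad
I = I_0 \, e^{-(\mu/\rho)\,\rho\,t}                  % gamma attenuation through thickness t
\qquad
\mathrm{HVL} = \frac{\ln 2}{\mu}                     % half-value layer
```

where N_i is the atomic number density of element i, σ_i its microscopic cross section, and μ = (μ/ρ)·ρ the linear attenuation coefficient. As a purely illustrative example (made-up numbers, not the measured values), μ/ρ = 0.1 cm²/g at ρ = 8 g/cm³ gives μ = 0.8 cm⁻¹ and HVL = ln 2 / 0.8 ≈ 0.87 cm.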
sources. Road dust was found to account for more of the total variance at urban sites than at receptor sites. Lin et al. also applied PCA to MOUDI and nanoMOUDI sample results from Pingtung, Taiwan. This study found four factors in nano/ultrafine particulates which corresponded to elements associated with diesel, gasoline, fuel oil and industrial emissions. It reported strong correlations between Ba, Pb and Zn in this size range, which supported a traffic source for these elements. The relative abundance of these elements in PM0.056–0.1 and PM0.056 was more similar to diesel emissions than to gasoline. PMF is increasingly widely used for source apportionment (the underlying bilinear factor model is summarised below). Unlike CMB, there is no requirement for source profiles to be input into the model; however, in order to interpret the factors reported by the model as pollution sources, some knowledge of source profiles is still required. PMF can provide a robust solution provided that the sample:variable ratio is above 3:1. PMF is particularly suitable for analysing long time series, and observations need not be continuous for the method to work effectively. Assigning factors to sources is complex and to some extent subjective, as there are problems with the inconsistent use of species as source tracers; PMF results therefore have to be carefully interpreted. To date, the use of PMF for nanoparticle source apportionment has been limited, owing to the difficulty of collecting enough samples to ensure the results are valid. Because of the long sampling times necessary, short-term variations which would make factorisation easier tend to be averaged out, suppressing the impact of individual sources. While much progress has been made in understanding the sources contributing ultrafine particles to the atmosphere and characterising them in terms of particle number and size distributions, there is far less knowledge of the metal components of the ultrafine fraction. There is some knowledge of physical and chemical properties, such as whether metals are free-floating or hosted on or within carbonaceous or mineral matrices, and whether they occur as single particles or agglomerates; in some cases these properties have been associated with different source types. At this stage there are few studies reporting bulk ambient metal concentrations. Progress has been made in understanding how physical and chemical atmospheric processes affect ultrafine particles, and in understanding how concentrations, composition and distribution vary in different environments. Progress has also been made in establishing elements, compounds and particle types which are diagnostic of particular emission sources in larger size ranges, but their applicability to ultrafine particles is very limited. Future research needs to consider the distribution of metals between size ranges, with a view to determining the chemical composition arising from different sources, especially as these may vary from the bulk; for example, the case of Al being less reliable as a crustal origin marker in ultrafine particles than in coarse particles has been discussed. The greatest weaknesses in characterising the metallic content of nanoparticles in the atmosphere derive directly from the very low concentrations present and the fact that these require very long sampling times for conventional analytical methods to give useful data. Attempts at higher time resolution have to date had limited success, although single-particle techniques, and especially those based on mass spectrometric methods, offer the possibility of characterising both size and metal content
of particles. However, current instruments have difficulties sampling particles right down into the nanoparticle size range, and low-resolution mass spectrometry suffers from isobaric interference with ions derived from non-metallic species. Given the current rapid expansion in the use of nanomaterials in everyday consumer products, many of them metal-based, there is a pressing need for real-time measurements of specific nanoparticles. If nanoparticle measurement technology is to rise to the associated challenges, there is a need to develop instruments capable of the sensitivity and specificity needed to characterise nano-sized particles for a range of chemical constituents in real time. Much remains to be done to meet this objective. | Knowledge of the human health impacts associated with airborne nanoparticle exposure has led to considerable research activity aimed at better characterising these particles and understanding which particle properties are most important in the context of effects on health. Knowledge of the sources, chemical composition, physical structure and ambient concentrations of nanoparticles has improved significantly as a result. Given the known toxicity of many metals and the contribution of nanoparticles to their oxidative potential, the metallic content of the nanoparticulate burden is likely to be an important factor to consider when attempting to assess the impact of nanoparticle exposure on health. This review therefore seeks to draw together the existing knowledge of metallic nanoparticles in the atmosphere and discuss future research priorities in the field. The article opens by outlining the reasons behind the current research interest in the field, and moves on to discuss sources of nanoparticles to the atmosphere. The next section reviews ambient concentrations, covering spatial and temporal variation, mass and number size distributions, air sampling and measurement techniques. Further sections discuss the chemical and physical composition of particles. The review concludes by summing up the current state of research in the area and considering where future research should be focused. © 2014 The Authors. |
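For reference, the bilinear factor model solved by PMF, in the standard Paatero–Tapper formulation (given here as background, not as notation from this review), is:

```latex
x_{ij} = \sum_{k=1}^{p} g_{ik} f_{kj} + e_{ij},
\qquad
Q = \sum_{i=1}^{n}\sum_{j=1}^{m}\left(\frac{e_{ij}}{u_{ij}}\right)^{2},
\qquad
g_{ik} \ge 0,\; f_{kj} \ge 0
```

where x_ij is the concentration of species j in sample i, g_ik the contribution of factor k to sample i, f_kj the profile of species j in factor k, and u_ij the measurement uncertainty. The non-negativity constraints are what remove the need for a priori source profiles, and the uncertainty weighting in Q is why long, well-characterised sample series matter, which is precisely what long nanoparticle sampling times make difficult.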
In Germany, an average of 243 cases of invasive meningococcal disease (IMD) due to serogroup B (MenB), and 20 deaths, were reported to the Robert Koch Institute each year between 2009 and 2012. Over this period MenB accounted for 68.5% of IMD cases; 22% were due to MenC, 5.2% to MenY, 3.4% to MenW and the remainder to groups A, Z and 29E. While most people recover, the disease can leave survivors with a range of disabling sequelae, from deafness to amputation. As in other European countries, annual IMD incidence has decreased markedly in Germany, with MenB IMD decreasing from a mean of 0.49 to 0.30 cases/100,000 inhabitants from 2002–2005 to 2009–2012, and MenC IMD from 0.18 to 0.11 cases/100,000 inhabitants. The decrease in MenC disease was disproportionately greater than for MenB disease due to the introduction of MenC vaccine for one-year-old children in 2006. Quadrivalent MenACWY vaccination is not recommended as part of the routine vaccination programme in Germany, but is recommended for those at increased risk after individual risk assessment, such as household contacts of cases, laboratory workers and immunocompromised persons. In January 2013 Bexsero® became the first vaccine to be licensed in the EU to provide broad protection against MenB disease. This vaccine is based upon a number of surface proteins and an outer membrane vesicle component, and is thus potentially immunogenic against strains with sufficient expression of the vaccine antigens regardless of the capsular group. In Germany, the Standing Committee on Vaccination (STIKO) is the independent advisory group whose recommendations are required for inclusion of a vaccine in the national vaccination schedule and for reimbursement by statutory health insurance. Currently STIKO recommends Bexsero® for persons at increased risk of acquiring IMD, but not for universal childhood vaccination. Modelling the potential impact of a new vaccine on disease burden provides valuable evidence to STIKO, and while assessment of the cost-effectiveness of a new vaccine is not obligatory for development of a STIKO recommendation, the results are valuable for deciding on an overall immunisation strategy. To support decision making in Germany we adapted the independently developed model for England to the German setting to predict the potential health impact and cost-effectiveness of universal vaccination with Bexsero® against MenB disease. We used two models to estimate the potential impact of universal Bexsero® vaccination in Germany, owing to the uncertainty about the effect of the vaccine on carriage: a cohort model allowing for direct vaccine protection against disease only, and a dynamic transmission model that includes additional vaccine protection against carriage. These models are described fully elsewhere. Due to existing universal MenC vaccination in Germany and an extremely low incidence of meningococcal disease due to non-B serogroups, we considered MenB disease exclusively in the models. Both models are age-structured with yearly age classes; individuals are born susceptible. Upon disease, quality of life losses for the acute episode were included. Following disease, individuals have three possible outcomes: survival without sequelae, survival with sequelae, or death. Those dying from the disease are assumed to lose the average life expectancy for the age at which they die. Individuals may die from other causes; published mortality rates were adjusted to remove deaths due to meningococcal disease as these are explicitly modelled. Vaccine-induced protection was
assumed to start one month after the second vaccine dose, and we allowed for waning protection. We considered several vaccination strategies, comparing these to no universal vaccination against MenB with cases treated as they arise, over a 100-year time horizon. In the cohort model, a Markov model with monthly cycles was used. Disease cases were generated by applying the age-specific probability of disease to the susceptible population; survivors of disease were removed from the susceptible pool. Years of life were weighted by the age-specific quality of life. Cohort sizes were based upon 2011 population statistics. Single birth cohorts were considered for routine infant or toddler vaccination; multiple cohorts were considered for strategies with catch-up vaccination. In the dynamic model, transmission of meningococcal carriage was represented using a Susceptible-Infected-Susceptible model, without considering co-infection and using a daily time step (a minimal sketch of such a model is given below). Disease cases were generated by applying an age-specific case:carrier ratio to the number of new carriage acquisitions. Vaccinated individuals with immunity could have protection against carriage acquisition as well as disease. Details of the data sources used to estimate parameters are summarised below, with full details provided in the Appendix. National surveillance data from RKI were used to estimate age-specific disease incidence and case fatality for MenB disease; the longer time period was used for case fatality due to the small annual number of meningococcal deaths. For the dynamic model, MenB carriage prevalence estimates were based on a systematic review of carriage of all serogroups combined with serogroup-specific information from a carriage study in Germany. Each case was assumed to be hospitalised, with | Bexsero, a new vaccine against serogroup B meningococcal disease (MenB), was licensed in Europe in January 2013. In Germany, Bexsero is recommended for persons at increased risk of invasive meningococcal disease, but not for universal childhood vaccination. To support decision making we adapted the independently developed model for England to the German setting to predict the potential health impact and cost-effectiveness of universal vaccination with Bexsero® against MenB disease. German specific data were used where possible from routine surveillance data and the literature. |
48% requiring ambulance transfer. The proportion of survivors with mild and severe sequelae was estimated from the literature. Quality of life losses for survivors with sequelae were based on currently unpublished data from the MOSAIC study, a case–control study of MenB survivors in the UK; losses for carers of a person with sequelae were also considered. Acute health care costs included the costs of ambulance transfer, hospitalisation, hearing assessment, and public health management. Costs due to loss of work were also included. The costs of aftercare included one follow-up appointment for those aged under 5 years, cochlear implants, scarring treatment, and physical therapy and logopaedics treatment for the year following illness. Annual support costs were included for mild sequelae and severe sequelae. We assumed that all cases with an amputation would result in a 50% work loss over their lifetime, either for a parent or for themselves at a later time. We considered several vaccination strategies, including routine infant immunisation at varying ages with or without a catch-up campaign. In the dynamic model we investigated routine adolescent vaccination alone, or in combination with an infant programme. Vaccination uptake was estimated based on the uptake of other vaccines with similar age-specific schedules in current use. Vaccine strain coverage was estimated using results of the Meningococcal antigen typing system assay on German strains. The 2015 pharmacy retail price of €96.96 was used as the cost per vaccine dose. Costs of vaccine administration were estimated from administration costs for other vaccines in Germany. We included the costs of hospitalisation for severe fever and anaphylaxis as possible adverse events following vaccination, but did not include possible quality of life losses associated with adverse events, which were assumed to resolve quickly. We calculated the number needed to vaccinate to prevent one case by dividing the number of persons vaccinated by the number of cases averted under various model assumptions (see the relations below). Health outcomes were defined as cases averted, deaths averted and quality adjusted life years gained under vaccination. All costs were measured in Euros at 2013 prices, with earlier costs adjusted based on the German consumer price index. In the base case, future costs and benefits were discounted back to their present value at a rate of 3.0%, as recommended in Germany, and the analysis was undertaken from the payer perspective. Parameter uncertainty was handled through scenario analyses and probabilistic sensitivity analyses (PSA). Factors considered in scenario analyses included: disease incidence, population mixing, vaccination uptake, strain coverage, vaccine price, societal perspective and discount rates. The PSA was used to characterise the uncertainty around the other model parameters. Table 2 shows the predicted impact of vaccination in birth cohorts over their lifetime. In the absence of MenB vaccination, the model estimates that 224 cases of MenB disease and 19 deaths would occur over a cohort's lifetime. Assuming 65% vaccine uptake and 82% strain coverage, vaccinating infants with a 2, 3, 4 + 12 months schedule is estimated to
avert 34 of these cases and 3 deaths, with a similar number prevented under a 2, 4, 6 + 12 months programme. Vaccination at 6, 8, 12 months of age averted 25 cases, as the assumed increased duration of protection does not compensate for missing the cases that occur before vaccination. To consider catch-up strategies, additional birth cohorts are included. Adding a large one-off catch-up strategy for 1–17 year olds to the routine infant schedule averted more cases. However, the percentage averted is reduced because incidence and assumed vaccine uptake are lower in 1–17 year olds compared to under-one-year-olds. We assumed a 30% vaccine efficacy against acquisition. When considering routine infant vaccination alone, strategies starting earlier in life remained most favourable in reducing cases. The greatest health benefit in the short term, however, is achieved through routine infant vaccination with large-scale catch-up, which could reduce cases by 24.9% after 5 years and 27.9% after 10 years. In the long term, policies including routine vaccination of 12 year olds are most favourable; after 50 years, routine adolescent vaccination leads to an annual case reduction of 37.9% compared to no vaccination. Considering direct effects only, 12,668 children would need to receive the vaccine to prevent a single case over a cohort's lifetime with a 2, 3, 4 + 12 months schedule. Assuming 30% vaccine effectiveness against carriage, this reduces to 8461 children, and becomes even more favourable if older children are also vaccinated, reducing to 6373 children for the vaccination strategy 6, 8, 12 months + 12 years. At a vaccine price of €96.96 per dose, vaccination of infants at 2, 3, 4 + 12 months within the cohort model is expected to cost €191.9 M annually. The predicted reduction in healthcare costs over a cohort's lifetime as a result of direct vaccine effects is €873,500, with a resulting incremental cost-effectiveness ratio (ICER) of €2.0 M per QALY gained. Assuming direct vaccine effects only, all vaccination strategies considered resulted in very high ICERs, with strategies that included catch-up being least favourable. Allowing for herd effects improves the cost-effectiveness of vaccination; however, the ICER remains over €500,000 for all considered strategies. The inclusion of herd effects makes catch-up in addition to routine infant immunisation more economically favourable than routine infant immunisation alone. The lowest ICERs in this context are produced by strategies with routine adolescent immunisation, due to the reduced dosing schedule and therefore lower costs for vaccination, and | Vaccination strategies included infant and adolescent vaccination, alone or in combination, and with one-off catch-up programmes. We assessed the impact of vaccination through cases averted and quality adjusted life years (QALY) gained and calculated costs per QALY gained. Assuming 65% vaccine uptake and 82% strain coverage, infant vaccination was estimated to prevent 15% (34) of MenB cases over the lifetime of one birth cohort. In the short term the greatest health benefit is achieved through routine infant vaccination with large-scale catch-up, which could reduce cases by 24.9% after 5 years and 27.9% after 10 years. In the long term (20+ years) policies including routine adolescent vaccination are most favourable if herd effects are assumed. Under base case assumptions with a vaccine list price of €96.96 the incremental cost-effectiveness ratio (ICER) was >€500,000 per QALY for all considered strategies. |
did not include long-term costs for mild learning disability or institutional care for patients with severe disability, making our cost estimate for severe sequelae rather conservative .We did include costs for rehabilitation, physical therapy, and speech therapy in the year after illness for a proportion of the patients.Not including the full range and costs of possible sequelae from meningococcal disease will have increased the estimated cost per QALY gained of the vaccination strategies, however in sensitivity analyses ICERs remained high even when the proportion of patients with sequelae and their associated costs were increased.In other aspects the model parameters were potentially vaccine favourable.For instance, we did not include quality of life losses from adverse vaccine reactions, allowances for strain replacement or potential deleterious effects of reducing meningococcal transmission.In addition, duration of protection in scenarios that included catch-up vaccination of toddlers may be overoptimistic based on a recently published small study of hSBA persistence .Modelling and cost-effectiveness studies on the use of Bexsero® have been published for England , the Netherlands , France , Belgium and Canada .In Spain the direct health impact alone was considered .As for the German models presented here, the English and Belgian analyses included the use of dynamic transmission models to appropriately allow for any herd effects.In France herd effects were estimated through incorporation into a Markov model, while direct protection alone was principally considered in the Dutch and Canadian studies, primarily due to limited evidence of the effect of Bexsero® on meningococcal carriage and transmission.The predictions here for vaccination in Germany are in line with those estimated elsewhere, namely that in the absence of herd effects routine immunisation early in life offers the greatest health impact, but with the inclusion of herd effects routine immunisation of teenagers becomes the best long-term strategy.Although the ICERs under base case conditions have been found to be high in all countries considered thus far, those presented for Germany are amongst the highest to date.This is in part explained by a higher vaccine price, the lower sequelae costs assigned to MenB patients as well as the very low MenB incidence.Our models suggest that maximal health impact in the short term could be achieved in Germany by vaccinating infants early in life.However, a recent study of paediatricians in Germany suggested only 13.4% of physicians preferred this strategy, in contrast to the 66.7% who preferred vaccination at 6, 8, 12 months .Paediatricians were concerned about acceptance and safety of concomitant vaccination and possible parental refusal of other recommended vaccines since vaccinating MenB early in life would usually involve three vaccine shots per appointment.Thus, any immunisation decision will need to balance the potential benefits of any given vaccination strategy, the likelihood of the strategy being adopted in practice, as well as potentially unfavourable effects on the uptake of other vaccines.Given the current very low incidence of MenB disease in Germany, implementation of universal infant vaccination with Bexsero® would prevent only a small absolute number of cases.If the vaccine has an effect on carriage, the number of prevented cases and deaths increases significantly when vaccinating adolescents alone or – even more – when adding adolescent vaccination to a routine infant 
vaccination strategy.Whilst cost-effectiveness is not a central requirement for immunisation decision-making in Germany, the majority of scenarios considerably exceeded commonly used economic willingness to pay thresholds.Funding: This work was supported by the Robert Koch Institute.HC's work was supported by the National Institute for Health Research .This work is produced by the authors under the terms of these research training fellowships issued by the NIHR.HC is a member of the NIHR Health Protection Research Unit in Evaluation of Interventions at University of Bristol.The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.The NIHR had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.Conflicts of interest: CLT reports receiving a consulting payment from GSK in 2013.HC reports receiving an honorarium, paid to her employer, from Sanofi Pasteur in 2015.Remaining authors: no reported conflicts. | We used both cohort and transmission dynamic mathematical models, the latter allowing for herd effects, to consider the impact of vaccination on individuals aged 0–99 years.Given the current very low incidence of MenB disease in Germany, universal vaccination with Bexsero® would prevent only a small absolute number of cases, at a high overall cost. |
quantitatively identify the exact resilience indicators and express the resilience formulation with respect to the specific type of geo-environmental disaster in a convincing way.Furthermore, it is worth mentioning that the output of the proposed resilience model should not only be limited to continuous numerical values, but can also be linguistically understandable or fuzzy concepts.As summarised in , different forms of frameworks describing the relationships between vulnerability, resilience and adaptive capacity have been discussed in the literature.Given our proposed framework and associated formulations, we can easily consider the adaptive capacity as part of the resilience in terms of the system's ability to adjust positively in response to an exterior change or disturbance.On the other hand, improving resilience is regarded as one intrinsic ingredient in reducing vulnerability, which also factors in the specific quantification of a disaster and the natural and social environments.The aim of the proposed framework is to provide a holistic and systematic view on the resilience and vulnerability aspects of the built environment, while the example categories of indicators listed above are by no means exhaustive.In a broader sense, built environment resilience and vulnerability with respect to different threats can certainly be attributed to the distinct involvement of indicators and also distinct models designed to describe the underlying mechanism.It is thus imperative to identify and reach a consensus on the detailed key indicators of resilience and vulnerability for the major geo-environmental disasters, or at least on a unified approach leading to such identification.Regarding a particular threat, the model also needs to consider a certain extent of uncertainty arising from the absence of less important or less recognised indicators, which therefore leads to model errors or residuals.In addition, model errors can also be found in the qualitative approach regarding subjective reasoning and the quantitative approach regarding monitoring, and model structure and parameter determination.Built environment resilience requires a comprehensive programme of research spanning several interrelated disciplines.In that respect, four key strategic areas requiring further research are identified and briefly discussed below, namely: risk based cost optimal resilient design and standards of buildings and infrastructures, model based evaluation and optimisation of buildings and infrastructures, integrated risk modelling, inference and forecasting, and heterogeneous disaster data acquisition, integration, security and management.Current approaches to building design demand that buildings meet several serviceability performance criteria related to each of their constituent systems .Serviceability requirements are formulated in the form of range values to be satisfied.When serviceability requirements are outside the range of these specified values, undesired conditions can be induced which can cause stress and potential harm to the building and its occupants.However, many model parameters are subject to variation and change over the projected building lifecycle.On a wider scale, urban concentration of populations, as well as intense social interactions and economic activity, characterise our modern cities.It is essential to understand how disasters propagate from buildings through cities and disrupt physical, socio-cultural and economic city systems, how the impact of these disasters can be reduced and managed, 
and how cities can become more resilient.Research suggests that most resilience-related initiatives focus on a building/block of buildings level and do not address the complexity of urban environments that depend on the interaction between social, economic and technical systems.Building data analytics is often aimed at energy benchmarking and environmental performance monitoring, which, if combined with structural monitoring, can provide useful data about whole-building resilience.Live datasets from current building monitoring are at best sporadic, often comprising an ad-hoc combination of off-the-shelf building management systems and distributed data metering equipment combined using traditional database solutions.The ad-hoc combination presents many challenges for extracting meaningful relationships between datasets, due to the variations in information exchange protocols across systems – resulting in distributed data.Moreover, the complex interplay between the variables that underpin building systems behaviour precludes a simple set of rules or guidelines and necessitates the development of more complex data-rich models which better inform designers about the lifecycle trade-offs that can be made between different systems of a building and help devise appropriate response strategies to unexpected disturbances.In that respect, a systems thinking perspective is essential as it provides a foundation for building systems modelling necessary to understand how the different components within a building interact, the involved variables, their dependencies, and the dynamic forces that affect their performance.There is an urgent need to develop cost-effective methods, tools and guidelines for acquiring, integrating, securely managing and streaming distributed heterogeneous data on disaster risks and impact on buildings.Existing approaches to built environment risk modelling lack a holistic understanding of disaster risks, their boundary conditions and impact on building standards.It is important to "analyse together" geo-environmental data, building performance and socio-economic activities with the objective of inferring correlations that are not directly observable and inferring knowledge about their interdependencies.Integrated risk modelling, inference and forecasting should make use of fused and streamed data from heterogeneous sources to infer knowledge of impact, risk and performance of building systems on geospatial and temporal scales.Decisions on resilience design interventions and standards often rely on an estimate of cost and associated benefits.Existing methods for assessing cost of resilience measures do not factor in the following costs: pre-construction or non-construction, construction, ancillary, operation and maintenance, and cost of disruption due to a disaster event.There is a lack of resilience characterisation techniques and methods that consider various scales from micro to macro, taking into account nonlinear and continuously changing governing variables and their boundary conditions.One of the key challenges in disaster management, response and resilience is | Resilience, in general, is widely considered as a system's capacity to proactively adapt to external disturbances and recover from them.However, the existing resilience framework research is still quite fragmented and the links behind various studies are not straightforwardly accessible.The paper provides a critical state-of-the-art review of both quantitative and qualitative considerations of resilience, approached 
from a built environment engineering perspective, with a focus on geo-environmental hazards.A research gap is identified and translated into a holistic and systemic approach to conceptualise resilience, factoring in related concepts such as vulnerability, adaptive capacity and recoverability.A generic built environment resilience framework is proposed, informed by a critical and comprehensive review of the related literature.The paper concludes with insights into four key strategic areas requiring further research, namely: (a) risk based cost optimal resilient design and standards of buildings and infrastructures, (b) model based evaluation and optimisation of buildings and infrastructures, (c) integrated risk modelling, inference and forecasting, and (d) heterogeneous disaster data acquisition, integration, security and management. |
as enzymes by the same fungal treatment.Likely, the same theory about accessibility of carbohydrates can be applied to oak wood chips.Unlike wheat straw, oak wood chips probably did not contain easily accessible nutrients, since the IVGP did not decrease in the first week.In addition, ADL degradation by L. edodes and C. subvermispora started already after 2 weeks of treatment and the L/C ratio numerically decreased already after 1 week of fungal treatment.Also, hemicelluloses degradation only started after 2 weeks of treatment, while in wheat straw it started in the first week.This suggests that hemicelluloses are less accessible in oak wood chips.The lower accessibility in oak wood chips can be explained by incomplete fungal colonization due to the dense structure of oak wood chips as observed by microscopy.Colonization, and delignification, not only require physical space for the fungus to grow, but also oxygen within the tissue.The fact that the percentage of Cα-oxidized lignin in wood chips does not clearly increase upon fungal treatment suggests that the availability of oxygen is the limiting factor.Indeed, L. edodes mycelium does not grow well where oxygen is limited, and when it grows actively the O2 demand becomes much higher than that of other fungi.O2 and CO2 are important factors in the cultivation of mushrooms .To increase the availability of oxygen, the wood structure therefore first has to be degraded to allow oxygen entry before further degradation and growth can occur.This stepwise “delignification and colonization” requires a longer treatment time for biomass with a dense structure like oak wood.A strategy to decrease treatment time is to increase the surface to volume ratio of dense biomass to allow for more entry points for fungi.The second phase in the fungal treatment of oak wood chips was characterized by little change in composition of the substrate after 4 weeks of treatment.Ergosterol data showed that C. subvermispora only grew during the first week of colonization.On oak wood chips, in contrast to wheat straw, no further growth of the fungus was observed.Similarly, Messner et al. showed a plateau in ergosterol development during C. subvermispora treatment of oak wood chips.The fungal growth stopped between 6 and 14 days to continue again afterwards .These authors confirmed this observation by the temperature development, as the temperature did not change during the plateau period in ergosterol.Messner et al. suggested that lignin degradation should take place before carbohydrates can be degraded by the fungus.Lignin is degraded by C. subvermispora without the fungus growing.The production of alkylitaconic acids shows that the fungus is producing secondary metabolites without active growth.The fungus cannot grow until the carbohydrates are accessible, meaning that lignin should be degraded first.Messner et al. described manganese peroxidase activity to be high during the plateau in the ergosterol data.This indicates that lignin degradation is the first step in the degradation process by C. subvermispora.However, in the current study C. subvermispora degraded lignin and hemicelluloses without changing the cellulose mass fraction of the dry biomass in both wheat straw and oak wood chips during the plateau period in ergosterol.The present experiments show various aspects of fungal delignification that occur in parallel and their combined results give a better understanding of the lignocellulose biodegradation in general.The white rot fungi C. subvermispora and L. 
edodes preferentially degrade lignin, without changing the cellulose mass fraction of the dry biomass, during growth on wheat straw and oak wood chips.Most chemical changes occurred during the first 4 weeks of fungal treatment, and on both substrates, C. subvermispora degraded more lignin than L. edodes.The two fungi have different strategies in degrading the lignocellulosic materials.L. edodes continuously grows and degrades lignin during growth, while C. subvermispora colonizes the material predominantly during the first week and degrades lignin and hemicelluloses without growing.The density of biomass seems to limit the infiltration by fungi.As a result of the selective lignin degradation, the IVGP and the sugars released upon enzymatic saccharification increase. | Wheat straw and oak wood chips were incubated with Ceriporiopsis subvermispora and Lentinula edodes for 8 weeks.L. edodes continuously grew during the 8 weeks on both wheat straw and oak wood chips, as determined by the ergosterol mass fraction of the dry biomass.C. subvermispora colonized both substrates during the first week, stopped growing on oak wood chips, and resumed growth after 6 weeks on wheat straw.L. edodes continuously degraded lignin and hemicelluloses in wheat straw while C. subvermispora degraded lignin and hemicelluloses only during the first 5 weeks of treatment after which cellulose degradation started.Both fungi selectively degraded lignin in wood chips.After 4 weeks of treatment, no significant changes in chemical composition were detected.In contrast to L. edodes, C. subvermispora produced alkylitaconic acids during fungal treatment, which paralleled the degradation and modification of lignin, indicating the importance of these compounds in delignification.Light microscopy visualized a dense structure of wood chips which was difficult to penetrate by the fungi, explaining the relatively lower lignin degradation compared to wheat straw measured by chemical analysis.All these changes resulted in an increase in in vitro rumen degradability of wheat straw and oak wood chips. |
of deformation appeared to be greater in younger skin.This rate of change between the young and aged volunteers further appeared to be greater in chronically photoexposed tissue when compared to the response observed in buttock skin.Hence, in intrinsically aged skin of color subtle variations in biomechanical properties existed when compared to young skin and were largely due to differences in skin elasticity, while parameters relating to deformation were comparable.However, in aged forearm skin, virtually all biomechanical properties were markedly different from those exhibited in young forearm.Therefore, if one defines optimal biomechanical skin function as the ability to deform and return to its original position without the onset of fatigue, then these data show that intrinsically aged, and to a greater extent, chronically photoexposed, skin does not behave mechanically in an optimal manner.In addition to noninvasive biomechanical measurements, all volunteers were assessed at both anatomic sites for their pigmentary phenotype using a Chroma meter and classified according to their individual typology angle.Skin biopsies were also obtained from all volunteers at both test sites and processed for histologic investigation using the Warthin-Starry method for detection of melanin.Our study cohort demonstrated the diversity of pigmentation levels in black African-American individuals, with skin types ranging from lightest to darkest.A Pearson product-moment correlation coefficient was computed to assess the relationship between an individual’s individual typology angle and their epidermal melanin content; a negative correlation exists between the two variables indicating that those individuals with a higher melanin content also have a lower individual typology angle value—indicative of a darker skin color phenotype.Furthermore, younger individuals tend to have darker skin than the aged cohort.Next, cryosections were stained with hematoxylin and eosin and epidermal morphometrics assessed for all volunteers.Young buttock and forearm were largely indistinguishable from one another with regard to epidermal thickness, with strong interdigitation of rete ridges at the DEJ apparent at both anatomic sites.In aged skin, epidermal thickness was significantly reduced at both buttock and forearm sites.In intrinsically aged buttock, although significantly reduced, interdigitation of rete ridges persisted, whereas in chronically photoexposed forearm, near complete effacement of rete ridges and flattening of the DEJ were apparent.Thus, it appears that epidermal thinning is characteristic of intrinsic aging in skin of color, whereas effacement of rete ridges is severely exacerbated by chronic photoexposure.The biomechanical property of elasticity was significantly impaired in both intrinsically aged and chronically photoexposed skin of color; hence, we next performed immunohistochemical analyses of the major dermal elastic fiber components elastin, fibrillin-rich microfibrils and fibulin-5 on biopsy samples of buttock and forearm.In young individuals at photoprotected buttock and photoexposed forearm sites, elastic fibers were arranged in distinctive candelabra-like arrays, connecting oxytalan fibers of the DEJ to elaunin fibers of the superficial papillary dermis.Immunohistochemical staining of intrinsically aged buttock skin identified a depletion of these structures at the DEJ; loss of elastic fiber architecture was accompanied by significant reductions in abundance for both FRMs and fibulin-5.Loss of 
elastic fiber architecture and abundance was further exacerbated in chronically photoexposed forearm; elastin and FRMs were severely truncated at the DEJ and their abundance was significantly reduced compared to young skin.Similarly, there was almost complete depletion of fibulin-5 at the DEJ and throughout the papillary dermis of the forearm which had been chronically photoexposed.Fibrillar collagens within the dermal ECM provide skin with the biomechanical property of tensile strength; therefore, we next examined the abundance of organized fibrillar collagens within skin of color.Organized fibrillar collagens, when visualized by Picrosirius Red staining and polarized light microscopy, were not altered between body sites of young individuals.However, both intrinsically aged buttock and chronically photoexposed forearm displayed significant loss of organized fibrillar collagens as compared to young subjects.Immunofluorescent detection of mature collagen I further confirmed that the overall intensity of collagen I was significantly reduced in the papillary dermis of aged buttock and forearm.Using an X-Y-Z plot, the relationship between epidermal morphology and FRM organization and biomechanical function was explored.Using this visualization method, young buttock, young forearm, and intrinsically aged buttock all share similar biomechanical properties and exhibit the architectural features of strong rete ridge interdigitation and arborizing FRMs at the DEJ.However, chronically photoexposed forearm does not share these properties with the other groups; rather, these individuals cluster as a cohort where effacement of rete ridges, combined with significant truncation of FRMs at the DEJ, is strongly associated with a marked decline in in vivo biomechanical function.In this study, we established in aged skin of color that loss of DEJ convolution, disruption to elastic fiber arrangement, and reduced collagen organization appear to be detrimental to skin’s biomechanical behavior.Cutometry and ballistometry are useful methods that describe related, but not identical, aspects of skin biomechanics.The differences in measuring principle suggest that cutometry predominantly measures skin elasticity, while ballistometry predominantly measures stiffness.Our findings suggest that ballistometry is a less-sensitive method than cutometry, as this device failed to identify any discernible differences between young and intrinsically aged buttock.That said, the cutometry time–strain curves for photoprotected buttock demonstrate how well the biomechanical function of intrinsically aged skin is preserved compared to young buttock skin—implying that this anatomic site largely functions at close to its optimal level for the life course of the individual.However, for chronically photoexposed forearm both devices were concordant in their findings; in the aged cohort all biomechanical properties were significantly impaired except for indentation and total deformation—essentially the same biomechanical measure.Previous studies of skin aging using individuals of Fitzpatrick phototypes I–III have identified several | In contrast, intrinsically aged buttock skin was significantly less resilient, less elastic, and was accompanied by effacement of rete ridges with reduced deposition of both elastic fibers and fibrillar collagens.In chronically photoexposed dorsal forearm, significant impairment of all biomechanical functions was identified, with complete flattening of rete ridges and marked depletion of elastic fibers and 
fibrillar collagens. |
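The individual typology angle (ITA) used above to classify pigmentary phenotype is conventionally computed from CIE L*a*b* colorimetry as ITA° = arctan((L* − 50)/b*) × 180/π, with lower values indicating darker constitutive pigmentation, and the reported Pearson correlation can be reproduced with scipy. A minimal sketch on hypothetical volunteer data (the numbers below are invented for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

def individual_typology_angle(L_star, b_star):
    """ITA (degrees) from Chroma meter L* and b* readings; the standard
    formula, where lower ITA indicates a darker skin color phenotype."""
    return np.degrees(np.arctan2(np.asarray(L_star) - 50.0, np.asarray(b_star)))

# Hypothetical per-volunteer data: colorimetry readings and Warthin-Starry
# melanin staining fractions (% epidermal area) - illustrative only.
L_star = [38.2, 45.1, 52.3, 41.7, 48.9, 35.6]
b_star = [16.1, 17.4, 15.2, 18.0, 16.8, 15.5]
melanin_pct = [42.0, 30.5, 18.2, 36.8, 24.1, 47.3]

ita = individual_typology_angle(L_star, b_star)
r, p = pearsonr(ita, melanin_pct)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # expect a negative correlation
```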
impact of disease and aging.Our data examine the consequences of cutaneous aging in skin of color and identify that despite the photoprotective properties of melanin, chronic photoexposure exacerbates features of aging skin at both the histologic and functional level.This further promotes the need for improved public health advice regarding the consequences of chronic sun exposure and the importance of multimodal photoprotection use for all, regardless of ethnicity.A better understanding of skin health is of global importance, as highlighted by the recent World Health Organization report on aging; only when we understand how to promote and maintain skin health throughout the life course—in all of its diversity—will we be able to make significant improvements to the clinical management of skin disease and relieve the socioeconomic burden related to an aging global population.Healthy, black African-American volunteers were recruited to the study.Local ethical approval was obtained from The Johns Hopkins Institutional Review Board.Written informed consent was obtained from the participants and the study adhered to Declaration of Helsinki principles.Basic demographic information was collected and participants were asked to self-declare their ethnicity.Test sites were selected on the buttock and forearm.The Cutometer MPA580 with a 4-mm aperture probe and the ballistometer were applied to three adjacent but nonoverlapping areas at each anatomical test site.A Chroma Meter was used to measure the L* and b* parameters of the standard CIE L*a*b* color space at both anatomic sites.Further details are provided in Supplementary Materials and Methods online.To further interrogate the biomechanical behavior, a standard exponential regression model of deformation on time was fitted to data collected in the deformation cycle, while a Weibull-type regression model was fitted to data collected in the relaxation cycle.In the latter case, time was better modeled as a power function rather than linearly as indicated by a higher model-adjusted R2 and smaller residuals.The full functional form of both models is presented in Supplementary Materials and Methods.Regression coefficients for the second, sixth, and tenth deformation-relaxation cycles were analyzed, that is, an early, mid, and late cycle, to assess their consistency of biomechanical response.Once all noninvasive measurements had been completed, 6-mm diameter punch biopsies were obtained from all volunteers at the two anatomic sites.Each skin biopsy was obtained under 1% lidocaine local anesthesia.At the time of procurement, biopsies were snap-frozen in liquid nitrogen and stored at –80°C.Biopsies were cryosectioned at 7 μm in a single run, using the same blade and the same cryostat settings.Epidermal melanin was assessed using the modified Warthin-Starry procedure, epidermal morphology was assessed using hematoxylin and eosin staining and Picrosirius Red staining for fibrillar collagens.Immunohistochemistry was performed using mouse monoclonal antibodies to detect elastin and FRMs.Rabbit polyclonal antibody was used to detect fibulin-5.See Supplementary Materials and Methods for detailed protocols.Brightfield and cross-polarized images were captured using a BX53 microscope and image analysis was performed using ImageJ software.DEJ convolution index was measured using the method described previously.Statistical analysis was performed using GraphPad Prism, version 7.01.Results were considered significant if P < 0.05.The Centre for Dermatology Research is 
in receipt of research grants from Walgreens Boots Alliance and Unilever UK Limited.Chris E.M. Griffiths is Director of CGSkin Limited.The remaining authors state no conflict of interest. | Maintaining optimal skin function is essential for healthy aging across global populations; yet most research focuses on lightly pigmented skin (Fitzpatrick phototypes I–III), with little emphasis on skin of color (Fitzpatrick phototypes V–VI).Here, we explore the biomechanical and histologic consequences of aging in black African-American volunteers.We found that healthy young buttock and dorsal forearm skin was biomechanically resilient, highly elastic, and characterized histologically by strong interdigitation of rete ridges, abundant organized fibrillar collagen, and plentiful arrays of elastic fibers.We conclude that in skin of color, both intrinsic aging and photoaging significantly impact skin function and composition, despite the additional photoprotective properties of increased melanin.Improved public health advice regarding the consequences of chronic photoexposure and the importance of multimodal photoprotection use for all is of global significance. |
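The regression models described in the methods above (an exponential model of deformation on time, and a Weibull-type model of relaxation with time entering as a power function) can be fitted with scipy.optimize.curve_fit. The study's exact functional forms are in its supplement; the sketch below assumes standard saturating-exponential and stretched-exponential forms and runs on synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Deformation (suction) phase: saturating exponential rise.
def deformation(t, a, tau, c):
    return c + a * (1.0 - np.exp(-t / tau))

# Relaxation phase: Weibull-type (stretched-exponential) decay,
# i.e. time enters as a power function rather than linearly.
def relaxation(t, a, tau, beta, c):
    return c + a * np.exp(-(t / tau) ** beta)

# In practice t_def/u_def and t_rel/u_rel would come from the exported
# Cutometer time-strain curves for one deformation-relaxation cycle;
# synthetic noisy data stand in for them here.
rng = np.random.default_rng(0)
t_def = np.linspace(0.0, 2.0, 50)
u_def = deformation(t_def, 0.45, 0.35, 0.02) + rng.normal(0, 0.004, 50)
t_rel = np.linspace(0.0, 2.0, 50)
u_rel = relaxation(t_rel, 0.30, 0.40, 0.8, 0.10) + rng.normal(0, 0.004, 50)

p_def, _ = curve_fit(deformation, t_def, u_def, p0=[0.4, 0.3, 0.0])
p_rel, _ = curve_fit(relaxation, t_rel, u_rel, p0=[0.3, 0.4, 1.0, 0.1])
print("deformation a, tau, c:", np.round(p_def, 3))
print("relaxation a, tau, beta, c:", np.round(p_rel, 3))
```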
is also a small structure and one has to take caution with voxel-based data in close proximity to enlarged ventricles.However, appropriate steps were taken to exclude the possibility that hypometabolism was caused by volume averaging.The hypometabolism is also throughout the entire dorsal striatum and not just the periventricular regions.The results were also bolstered by the post hoc, ROI-based analysis of striatal metabolism.Additionally, Fig. 5 shows an example iNPH patient depicting the caudate hypometabolism accurately registering despite the ventriculomegaly.This was a retrospective pilot study and as a result many variables could not be controlled for.The time between CT/PET and MRI did not differ significantly between groups, but remains a source of potential error if pathology were to change between the two scans.In summary, we present a subcortical FDG-PET pattern in iNPH that may differentiate it from age related changes in gait, bladder function and cognition.Despite cognitive tests in the abnormal range of other neurodegenerative diseases, the cortical metabolism in this subset of iNPH patients is not significantly different from control patients.The frontosubcortical dementia pattern in iNPH patients correlates well with these dorsal striatum hypometabolism findings.Whether or not FDG-PET can serve as a biomarker for disease progression and response to intervention remains to be seen.Future studies should prospectively enroll a larger group of patients with iNPH and analyze PVC corrected FDG-PET imaging before and after shunting.In doing so, FDG-PET can be properly incorporated into the diagnostic and prognostic process, hopefully resulting in earlier intervention in appropriately selected patients. | Background: Idiopathic normal pressure hydrocephalus (iNPH) is an important and treatable cause of neurologic impairment.Diagnosis is complicated due to symptoms overlapping with other age related disorders.The pathophysiology underlying iNPH is not well understood.We explored FDG-PET abnormalities in iNPH patients in order to determine if FDG-PET may serve as a biomarker to differentiate iNPH from common neurodegenerative disorders.Methods: We retrospectively compared 18F-FDG PET-CT imaging patterns from seven iNPH patients (mean age 74 ± 6 years) to age and sex matched controls, as well as patients diagnosed with clinical Alzheimer's disease dementia (AD), Dementia with Lewy Bodies (DLB) and Parkinson's Disease Dementia (PDD), and behavioral variant frontotemporal dementia (bvFTD).Partial volume corrected and uncorrected images were reviewed separately.Results: Patients with iNPH, when compared to controls, AD, DLB/PDD, and bvFTD, had significant regional hypometabolism in the dorsal striatum, involving the caudate and putamen bilaterally.These results remained highly significant after partial volume correction.Conclusions: In this study, we report a FDG-PET pattern of hypometabolism in iNPH involving the caudate and putamen with preserved cortical metabolism.This pattern may differentiate iNPH from degenerative diseases and has the potential to serve as a biomarker for iNPH in future studies.These findings also further our understanding of the pathophysiology underlying the iNPH clinical presentation. |
atmosphere can be attributed to the oxidation of the C2H5 fragment of the ligand to produce acetaldehyde, and to yield 2-butanone upon recombination of CH3CO and C2H5· according to α*.On the other hand, in the absence of oxygen, upon simple recombination without oxidation of the radicals, butane could be formed.From Fig. 6a, it is apparent that when the powder mass in a humid oxygen atmosphere reaches a critical value, the TG curve no longer exhibits a smooth evolution but shows a very abrupt mass loss and the differential thermal analysis curve exhibits a very sharp exothermic peak; these two features are characteristic of a thermal runaway .When a thermal runaway occurs, the reaction becomes locally unstable; it reaches a high temperature state and accelerates enormously so that it is virtually adiabatic .The aforementioned equations and the determination of the physical parameters given in Table 2 are described in the supporting information; in particular, the kinetic parameters were derived by performing TG experiments in humid O2 at 4 different heating rates, and the curves were analyzed using isoconversional kinetic methods , specifically the Friedman method.Thermal conductivity and specific heat capacity were obtained experimentally from differential scanning calorimetry in a specific temperature range for the sample in the form of powder .Knowing the aforementioned parameters and solving equation 1, the critical thickness above which combustion would occur for a Y-Prop3 thin film is 937 μm at a heating rate of 5 K/min, which means that for films a thermal explosion is impossible to reach .This can be explained by the greater surface of the substrate, which helps to dissipate the heat, preventing combustion from occurring.On the other hand, from equation 2, the sample critical mass for Y-Prop3 was found to be around 13 mg for a heating rate of 5 K/min.This is in agreement with the evolution of the TG curve with the sample mass shown in Fig. S2 of the Supporting info, where, for a 13-mg sample, we observe the characteristic sharp mass loss related to a thermal runaway.In this study, we have analyzed the thermal decomposition of yttrium propionate as a function of film thickness, particle size, heating rate and gas atmosphere, comparing samples in the form of film and powder.We have shown that the volatiles depend on the aforementioned parameters.This behavior is related to the competition between two different mechanisms: one related to the hydrolysis and oxidation of yttrium propionate in the presence of water or oxygen, and a second mechanism related to a radical reaction.The first one is enhanced by oxygen and water vapor, and in films due to the easy diffusion of the reacting species.Conversely, the radical decomposition is favored in inert conditions and when oxygen diffusion or atmosphere renewal around the sample are hindered.Finally, we have observed that films decompose differently than powders; they exhibit different kinetics and the decomposition route is also different.For instance, films decompose at significantly lower temperatures and their decomposition is accelerated by the presence of water vapor.Conversely, within the standard parameters of TG analysis, powders may undergo combustion for sample masses of the order of 10 mg.Taking into account this different behavior between films and powders, it is of the utmost importance to analyze films to disclose the actual phenomena occurring during YBCO precursor pyrolysis in the synthesis of superconducting tapes. 
| The processes involved in the thermal decomposition of yttrium propionate in oxidizing and inert atmosphere were analyzed with thermoanalytical techniques (thermogravimetry and evolved gas analysis) and with the help of structural characterization (X-ray diffraction, infrared spectroscopy and elemental analysis) of intermediate and final products.Samples in the form of films and powders were analyzed.The decomposition behavior was investigated as a function of particle size and film thickness.We conclude that, as a consequence of the gas and heat transport, films decompose differently than powders.Finally, two decomposition mechanisms are proposed that are in agreement with the observed volatiles and intermediate phases. |
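The Friedman isoconversional method named above extracts an apparent activation energy from TG runs at several heating rates: at a fixed conversion α, ln(dα/dt) is regressed against 1/T, and the slope equals −E/R. A minimal sketch with invented illustrative values (the quoted study's actual data and fitted parameters are in its supporting information):

```python
import numpy as np
from scipy.stats import linregress

R = 8.314  # J/(mol K), gas constant

def friedman_activation_energy(temps_K, rates_dadt):
    """Friedman isoconversional estimate at one fixed conversion level.

    temps_K    : temperatures (K) at which conversion alpha is reached,
                 one per heating-rate experiment
    rates_dadt : conversion rates d(alpha)/dt (1/s) at that same alpha
    Since ln(d(alpha)/dt) = ln(A f(alpha)) - E/(R T), the slope of
    ln(rate) versus 1/T equals -E/R.
    """
    fit = linregress(1.0 / np.asarray(temps_K), np.log(rates_dadt))
    return -fit.slope * R

# Illustrative (made-up) values: alpha = 0.5 reached at these T and rates
# for heating rates of 2.5, 5, 10 and 20 K/min.
T_at_alpha = [593.0, 605.0, 618.0, 632.0]          # K
rate_at_alpha = [2.1e-4, 4.0e-4, 7.6e-4, 1.4e-3]   # 1/s

E = friedman_activation_energy(T_at_alpha, rate_at_alpha)
print(f"Apparent activation energy: {E/1000:.0f} kJ/mol")  # ~150 kJ/mol here
```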
Many disciplines in clinical neurology and neuroscience benefit from the analysis of eye motion and gaze direction, which both rely on accurate pupil detection and localization as a prerequisite step.Over the years, eye tracking techniques have been contributing to the advancement of research within these areas.Examples include the analysis of attentional processes in psychology or smooth pursuit assessment in patients with degenerative cerebellar lesions.One important area of application for eye tracking is vestibular research, where measurements of the vestibulo-ocular reflex and nystagmus behavior are essential in the diagnostic pathway of balance disorders.Beyond neuroscientific applications, eye-tracking was also utilized by the autonomous driving industry for driver fatigue detection.Other than that, the trajectories and velocities of eye movements over a viewing task can serve as an individual biometric signature for identification purposes.In consumer-behaviour research, eye-tracking has been used to study the dynamics and locations of consumers’ attention deployment on promoted products in order to improve the design of advertisement.It is clear that pupil detection and tracking techniques build a fundamental block for eye movement analysis, enabling advancement in neuroscientific research, clinical assessment and real-life applications.Despite their importance, robust, replicable and accurate eye tracking and gaze estimation remain challenging under naturalistic low-light conditions.Most of the gaze estimation approaches, such as Pupil-Centre-Corneal-Reflection tracking and geometric approaches based on eye shapes, depend on inferring gaze information from the pupil's location and shape in the image.However, the pupil is not always clearly visible to the camera.As summarized in, the pupil appearance can suffer from occlusion due to half-open eyelids or eyelashes, from reflection of external light sources on the cornea or glasses, from contact lenses or from low illumination, low contrast, camera defocusing or motion blur.All these artifacts pose challenges to pupil detection, and eye tracking algorithms which were not specifically designed with these artifacts in mind may fail or give unreliable results under these circumstances.In medical image analysis and computer vision, dramatic improvements in dealing with such artifacts have been achieved in recent years due to the introduction and rapid advancement of deep learning, specifically convolutional neural networks.An important distinction to hand-designed algorithms is that a CNN can achieve robust pupil segmentation by automatically learning a sequence of image processing steps which are necessary to optimally compensate for all image artifacts which were encountered during training.Conventional gaze estimation is often based on the Pupil-Centre-Corneal-Reflection method, which requires accurate localization of the pupil centre and glints, i.e. 
corneal reflections.Localization algorithms for the pupil and glints are often based on image processing heuristics such as adaptive intensity thresholding, followed by ray-based ellipse fitting, morphological operators for contour detection, circular filter matching, Haar-like feature detection and clustering, or radial symmetry detection.It is important to note that most of these approaches assume the pupil to be the darkest region of the image, which makes them susceptible to different illumination conditions and may require manual tuning of threshold parameters.Previous to our approach, several deep-learning based pupil detection approaches have been proposed to improve the robustness to artifacts by learning hierarchical image patterns with CNNs.PupilNet locates the pupil centre position with two cascaded CNNs for coarse-to-fine localization.In Chinsatit and Saitoh, another CNN cascade first classifies the eye states of “open”, “half-open” and “closed”, before applying specialized CNNs to estimate the pupil centre coordinates, based on the eye state.However, current CNN approaches output only the pupil centre coordinates, which alone are not enough to determine the gaze direction without calibration or additional information from corneal reflection.Some studies focus on end-to-end training of a CNN, directly mapping the input space of eye images to the gaze results, but they are confined to applications in specific environments, such as estimating gaze regions on the car windscreen or mobile device monitors, which are not suitable for clinical measurement of angular eye movement.In this work, we propose DeepVOG, a framework for video-oculography based on deep neural networks.As its core component, we propose to use a fully convolutional neural network for segmentation of the complete pupil instead of only localizing its center.The segmentation output simultaneously enables us to perform pupil center localization, elliptical contour estimation and blink detection, all with a single network, and with an assigned confidence value.We train our network on a dataset of approximately four thousand eye images acquired during video-oculography experiments at our institute, and hand-labeled by human raters who outlined the elliptical pupil contour.Though trained on data from our institute, we demonstrate that the FCNN can generalize well to pupil segmentation in multiple datasets from other camera hardware and pupil tracking setups.On consumer-level hardware, we demonstrate our approach to infer pupil segmentations at a rate of more than 100 Hz.Beyond pupil segmentation, we re-implement a published and validated method for horizontal and vertical gaze estimation and integrate it as an optional module into our framework.We show that the integration of gaze estimation is seamless, given that our FCNN approach provides elliptical pupil outline estimates.We further show that by considering ellipse confidence measures from our FCNN output, the accuracy of the gaze estimation algorithm can be increased.Our implementation is fully Python-based and provided open-source for free usage in academic and commercial solutions.Our code, pre-trained pupil segmentation network and documentation can be found under: www.github.com/pydsgz.For this study, we acquired three datasets at the German Center for Vertigo and Balance Disorders, two for training and validation of the pupil segmentation network and one for validation of the gaze estimation.Training sequences were acquired in a challenging environment, i.e. 
inside | Background: A prerequisite for many eye tracking and video-oculography (VOG) methods is an accurate localization of the pupil.Several existing techniques face challenges in images with artifacts and under naturalistic low-light conditions.New method: For the first time, we propose to use a fully convolutional neural network (FCNN) for segmentation of the whole pupil area, trained on 3946 VOG images hand-annotated at our institute.Results: The FCNN output simultaneously enables us to perform pupil center localization, elliptical contour estimation and blink detection, all with a single network and with an assigned confidence value, at framerates above 130 Hz on commercial workstations with GPU acceleration.We provide our code and pre-trained FCNN model open-source and for free under www.github.com/pydsgz/DeepVOG. |
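One plausible post-processing step implied by the DeepVOG description above, turning the FCNN's pupil probability map into an elliptical contour, a centre estimate, a blink flag and a confidence value, can be sketched with OpenCV as follows. This is our illustration of the general idea, not necessarily DeepVOG's exact implementation.

```python
import cv2
import numpy as np

def pupil_from_probability_map(prob_map, threshold=0.5):
    """Fit an ellipse to the pupil region of an FCNN probability map.

    Returns ((cx, cy), (w, h), angle, confidence), or None when no pupil
    region is found (e.g. a blink). The confidence here is the mean
    network probability inside the segmented region, one plausible way
    to derive a per-frame confidence measure.
    """
    mask = (prob_map >= threshold).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None                      # eye closed / pupil not visible
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                 # cv2.fitEllipse needs >= 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(largest)
    confidence = float(prob_map[mask.astype(bool)].mean())
    return (cx, cy), (w, h), angle, confidence

# prob_map would be the network's per-pixel output for one video frame,
# values in [0, 1]; frames returning None can be flagged as blinks.
```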
segmentation may then aid or even replace the extra measurements of skin conductance and heart rate in some studies of emotion.Additionally, the output as a probability map informs the user about the confidence of the segmentation, which gives valuable information on data reliability, interpretation and blink detection.The pupil ellipse estimates and confidence estimates from our FCNN lay the foundation for accurate gaze estimation with median angular errors of around 0.5°, as compared to RMSE of 1.6° in the original study of, and 0.59° in EyeRecToo, one of the best-performing, recently proposed methods.We further show that if the network's confidence output is considered for 3D model fitting and gaze estimation, the accuracy can be further improved to angular errors around 0.38°–0.45°.Such accuracy could improve the validity of results in eye-tracking based experiments, for example, clinical assessment of vestibular and ocular motor disorders as well as visual attention studies in cognitive neuroscience.Further, DeepVOG demonstrates a high repeatability given multiple trials of two unassisted calibration paradigms, making it a stable tool for gaze data acquisition.Naturally, a projector-assisted, fixation-based calibration routine as in the neuro-ophthalmological examination laboratory of our clinical center can further improve the accuracy of gaze estimates.However, if such a procedure is impossible, for example due to hardware constraints, or in patients with fixation problems, the investigated unassisted calibration and gaze estimations in DeepVOG might be a very interesting option.Finally, we highlight the accessibility of DeepVOG as an open-source software, which does not depend on corneal reflections or stimulus-based calibrations, leaving a head-mounted low-cost camera as the only required equipment.Even though DeepVOG's FCNN-based pupil segmentation can generalize well to unseen datasets, mis-segmentations still do occur.In particular, if videos are recorded from a longer distance, thus containing other facial features such as eyebrows or the nose, DeepVOG is likely to fail, since it did not encounter such images during training.Further, if DeepVOG is used for gaze estimation, our experiments demonstrated that a narrow-angle calibration yields inferior accuracy during unassisted calibration.Hence, study conductors should make sure that study participants cover a sufficiently wide angular range of gaze directions, to achieve highly elliptic pupil shapes ideally in the entire visual periphery.A fundamental limitation of the gaze estimation method which we employ in DeepVOG is the assumption of a spherical eye model, as proposed by Świrski and Dodgson.Several improvements can be made here, since the real pupil is not exactly circular, and elliptical shapes are distorted by light refraction through the cornea.To this end, in a very recent work by Dierkes et al. and Pupil Labs Research, the Le Grand eye model was employed instead, which assumes the eye to consist of two intersecting spheres, i.e. 
the eyeball and the cornea.The non-elliptical appearance of pupils caused by corneal refraction leads to reported gaze estimation errors similar to those observed in our experiments.An improved 3D eye model fitting loss function and algorithm were proposed, which could help in further improving gaze estimates in future work.Further, DeepVOG is not applicable in eye tracking setups where no video can be recorded and provided as input to the algorithm as a video file or as a real-time video stream.Certain eye tracking systems, especially those operating at high frequencies around 1 kHz, commonly process eye tracking data internally and do not provide an interface to high-quality video data in real-time and at a high framerate.DeepVOG is a software solution for gaze estimation in neurological and neuroscientific experiments.It incorporates a novel pupil localization and segmentation approach based on a deep fully convolutional neural network.Pupil segmentation and gaze estimates are accurate, robust, fast and repeatable, under a wide range of eye appearances.We have made DeepVOG's pupil segmentation and gaze estimation components open-source and provide it to the community as freely available software modules for standalone video-oculography, or incorporation into already existing frameworks.In future work, we aim to incorporate a large number of images from third-party public eye datasets into training of the DeepVOG FCNN.This would extend the FCNN's generalization capability and robustness to an even wider variety of eye and pupil appearances and avoid mis-segmentations that still do occur.An easy-to-use graphical user interface will also be a focus of development.To this end, it is possible to integrate our segmentation part into other existing frameworks where gaze inference is based on pupil information, since DeepVOG is modularised as two parts: pupil segmentation by FCNN and gaze estimation by Świrski et al. model.Especially Pupil Labs Research, with its more realistic Le Grand eye model and its Python-based open-source user interface, serves as an inspiration to our next step of improvement. | Background: A prerequisite for many eye tracking and video-oculography (VOG) methods is an accurate localization of the pupil.We integrate the FCNN into DeepVOG, along with an established method for gaze estimation from elliptical pupil contours, which we improve upon by considering our FCNN's segmentation confidence measure.Pupil centre coordinates can be estimated with a median accuracy of around 1.0 pixel, and gaze estimation is accurate to within 0.5 degrees.The FCNN is able to robustly segment the pupil in a wide array of datasets that were not used for training.Conclusions: Our proposed FCNN-based pupil segmentation framework is accurate, robust and generalizes well to new VOG datasets. |
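The angular errors quoted above (median around 0.5°) are evaluated as the angle between estimated and ground-truth 3D gaze vectors. A hedged sketch of that evaluation metric, with invented gaze vectors for illustration:

```python
import numpy as np

def angular_error_deg(g_est, g_true):
    """Angle (degrees) between estimated and ground-truth gaze vectors."""
    g_est = np.asarray(g_est, float) / np.linalg.norm(g_est)
    g_true = np.asarray(g_true, float) / np.linalg.norm(g_true)
    cosang = np.clip(np.dot(g_est, g_true), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Illustrative estimated/target gaze directions (camera coordinates):
estimated = [(0.02, -0.01, 1.00), (0.10, 0.05, 0.99)]
targets = [(0.00, 0.00, 1.00), (0.09, 0.05, 1.00)]

errors = [angular_error_deg(e, t) for e, t in zip(estimated, targets)]
print(f"median angular error: {np.median(errors):.2f} deg")
```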
It is fundamental and challenging to train robust and accurate Deep Neural Networks when semantically abnormal examples exist.Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused on and how much more should they be emphasised to achieve robust learning?In this work, we study this question and propose gradient rescaling to solve it.GR modifies the magnitude of the logit vector's gradient to emphasise relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs.Apart from regularisation, we connect GR to example weighting and the design of robust loss functions.We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., a 7% increase on CIFAR100 with 40% noisy labels.It is also significantly superior to standard regularisers in both clean and abnormal settings.Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios. | ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE |
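The abstract above does not give GR's exact rescaling function, but the core idea, rescaling the magnitude of each example's logit gradient so that easier examples receive more emphasis as label noise grows, can be illustrated with a toy numpy sketch. The power-law weighting below is our invention for illustration, not the paper's formula.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def rescaled_logit_gradients(logits, labels, emphasis=2.0):
    """Toy emphasis regularisation on the logit gradients.

    Standard softmax cross-entropy gives dL/dz = p - y per example.
    Here each example's gradient magnitude is rescaled by a weight
    that grows with p_y (the predicted probability of the labelled
    class), so 'easier' examples are emphasised more; raising the
    `emphasis` exponent sharpens this as label noise becomes severe.
    """
    n, k = logits.shape
    p = softmax(logits)
    y = np.eye(k)[labels]
    p_correct = p[np.arange(n), labels]
    w = p_correct ** emphasis       # larger for easier examples
    w = w * n / w.sum()             # keep the average gradient scale
    return w[:, None] * (p - y)

logits = np.array([[2.0, 0.1, -1.0],   # easy, consistent example
                   [0.2, 0.1, 0.0]])   # ambiguous, possibly mislabelled
grads = rescaled_logit_gradients(logits, labels=np.array([0, 0]))
print(np.round(grads, 3))  # first row carries most of the gradient mass
```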
of moment forces occurred.The moment force in column A5 increased from 3.14 kN m to a peak of 76 kN m before settling down at a constant value of 34.2 kN m for case 2.A large redistribution of moment forces took place for case 4.The moment force in column A5 increased from 6.78 kN m to a peak of 112 kN m, before settling down at 39.8 kN m.A large redistribution of moment forces occurred.The moment force in column A2 increased from 10.08 kN m to a peak of 24.8 kN m.As shown in Fig. 25, the moment force in column A2 increased from 10.07 kN m to a peak of 20.5 kN m. Fig. 25 shows that the magnitude of moment force in column A2 changes frequently during the analyses for cases 5 and 7.A great redistribution of moment forces took place; the moment force in column A5 increased from 9.94 kN m to 32.5 kN m.In addition, the moment force in column A5 increased from 7.29 kN m to a peak of 19.8 kN m for case 8.In addition, all of the reaction moments are listed in Table 6.By comparing the increase of moment force in cases 1 and 2, the initial moment before column removal for both columns A2 and A5 is 3.14 kN m, but after column removal it becomes 61 and 76 kN m, respectively, clearly showing that side case removal in the moment frame system is more critical and destructive compared with corner case removal.By comparing the increase of moment force in cases 3 and 4, the initial moment values before column removal for columns A2 and A5 are 5.2 and 6.7 kN m, respectively, but after column removal they become 61 and 112 kN m, respectively, indicating that side case removal in the moment plus CBF system is more critical and destructive compared with corner case removal.Comparing case 1 with case 3, case 2 with case 4, case 5 with case 7, and case 6 with case 8, for the two different lateral resistance systems, the dynamic responses of the columns are different, but not significantly so.Comparing all models with regular and irregular plans, the moment differential of the adjacent column in regular models damped sooner than in irregular models.This means that in irregular models the structural fluctuations are larger and last longer.Furthermore, the increase in moment force after column removal in regular structures is significantly greater than in irregular structures; however, this was expected because of the smaller overall mass of the irregular structures.When the concrete slabs are subjected to tensile stresses, their tensile strength decreases after the first crack appears.These cracks may be due to shear force or bending stress.Fig. 27a–d show the tensile damage of the concrete slab for models of cases 1 to 4, respectively.As shown, tensile cracks under side column removal in the building with irregular steel frames exceed those of the other cases, which indicates that case 4 is in the most dangerous condition.Fig. 27d shows that a vast area of the concrete slabs above the removed side column cracked.In addition, by comparing models with regular and irregular moment frames, it can be observed that models with an irregular plan had poor performance and the concrete slab should be strengthened.In addition, the tensile damage areas are listed in Table 6.The compressive behavior of the concrete is such that, after resisting in the elastic region, it sustains resistance up to a plastic strain of 0.0025; then, with increasing compressive stress, the resistance collapses.This section examines the failure of the concrete slab for two types of column removal scenarios.Fig. 
28a–d, respectively, show the compressive damage of the concrete slab for models of cases 1 to 4.It is noticeable that the damaged area under corner column removal in the building with an irregular steel frame exceeds that of the other cases, which shows that case 3 is in a critical condition.Fig. 28c shows that part of all concrete slabs above the removed corner column cracked.In addition, by comparing models with regular and irregular moment frames, it can be observed that models with a regular plan had better performance and compressive damage was minimal.In addition, the compressive damage areas are listed in Table 6.Two full-scale experimental models were developed for the validation of the proposed modeling method.The elastic and plastic properties of the steel and concrete materials were introduced.Element failure for steel members and cracking for concrete slabs were considered.All models were analyzed using dynamic explicit analysis.To ensure the accuracy of the modeling, the numerical results were presented and compared with the experimental data.This suggests a reliable and affordable alternative to laboratory testing.The behavior of eight types of high-rise steel composite frame buildings exposed to two lateral resistance systems, two column removal scenarios and two types of plans was investigated, applying 3-D finite element modeling.The results provide the following information:Side case removal in the moment frame and moment with centrically braced frame systems was more critical and destructive compared with corner case removal.Comparing the models, for the two different lateral resistance systems, the dynamic responses of the columns were different, but not remarkable.Comparing all models with regular and irregular plans, it was observed that the moment differential of the adjacent column in models with a regular plan damped sooner than in irregular models.After all column removal cases, it was noticed that the increase in moment force on buildings with a regular plan is greater than on irregular buildings.Comparison between models with regular and irregular moment frames shows that models with an irregular plan had poor performance and the concrete slab should be strengthened.To avoid potential progressive collapse, it is suggested that the columns be designed and checked for the DL + 0.25LL load combination.The authors have no conflicts of | The results of this study show that side case removal in the moment frame and moment with centrically braced frame systems was more critical and destructive compared with corner case removal.Comparing the models, for the two different lateral resistance systems, the dynamic responses of the columns were different, but not remarkable. |
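The moment redistribution figures quoted above can be condensed into peak-to-initial amplification ratios, a crude indicator of the dynamic effect of sudden column removal. The values below are taken directly from the text; only the labels are ours.

```python
# Initial and peak moments (kN m) quoted for the column adjacent to the
# removed one in each case; 'peak/initial' is a crude dynamic
# amplification indicator.
cases = {
    "case 1, corner removal, column A2": (3.14, 61.0),
    "case 2, side removal,   column A5": (3.14, 76.0),
    "case 3, corner removal, column A2": (5.20, 61.0),
    "case 4, side removal,   column A5": (6.78, 112.0),
}
for name, (m_initial, m_peak) in cases.items():
    print(f"{name}: amplification = {m_peak / m_initial:.1f}x")
# Side removal shows the largest amplification, matching the conclusion
# that it is the more critical scenario.
```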
families such as titanate, vanadate and double perovskite are also reported to be redox stable/reversible and to show promising performance when run on hydrocarbon fuels. Besides these requirements, carbon deposition, tolerance to impurities, and long-term operational reliability and durability remain challenges, especially when liquid hydrocarbons such as diesel/bio-diesel are used as the fuel. Therefore some element of pre-reforming is essential in these cases. For gaseous hydrocarbons, reforming of the fuels will generate H2 and CO to be used as the fuels for SOFCs; however, care must still be exercised to prevent reverse water gas shift reactions resulting in the formation of solid carbon. Anti-coking can also be achieved through tailoring of the microstructure. It has been reported that a hierarchically porous Ni-based anode deposited with a nanocatalyst layer has improved coking resistance when methane is used as the fuel. A thin layer of nano samaria-doped ceria catalyst was infiltrated onto the walls of a Ni-yttria-stabilised zirconia anode. The cell efficiency was improved, with a power density of 650 mW cm−2 at 800 °C when methane was used as the fuel, while the performance was stable for over 400 h. This study provides an excellent strategy for developing anti-coking anodes for SOFCs. In addition to carbon deposition and poisoning, sintering of catalysts is a further key challenge in realising a stable, robust anode for SOFCs. In general, oxidation of hydrogen is easier than that of CO and hydrocarbons. An anode with integrated reforming or oxidation catalysts for in situ hydrogen production at the anode may facilitate fuel oxidation and thus reduce the anode polarization resistance, resulting in enhanced performance. Although the focus of this review is on catalytic activity, the development of hydrocarbon reforming for hydrogen or hydrogen-rich syngas generation for fuel cells has been the focus of a remarkable quantity of research effort aimed at understanding factors such as the reforming catalyst structure, the reforming reaction and fuel processing. These areas are also important but are not within the scope of this paper. Tolerance to impurities such as sulphur and resistance to coking are also important for fuel cells and are highlighted here. A comparative literature study shows that the most common preparation method for the reforming catalyst is the wet impregnation technique. By using the proper preparation method and an appropriate catalyst support, it is possible to maximize the key catalyst parameters such as hydrocarbon conversion, hydrogen production and selectivity. Key catalyst parameters including hydrogen selectivity, thermal stability, chemical stability and carbon deposition tolerance can be improved by the addition of catalyst promoters as well as by the choice of catalyst support. The catalyst promoters are usually other metals which are added to the Ni catalyst, thereby forming an M-Ni bi-metallic alloy catalyst. From the literature it can be found that the best-performing catalyst promoters are Co, Cu, Sn, Pt, Pd, Mn, Rh, Ru and Au, which have been reported to greatly improve the hydrogen production of the Ni catalyst while also decreasing carbon deposition. To conclude, the growth in the exploration, design and identification of new reforming catalysts has led to the development of new effective catalysts with improved performance for hydrocarbon reforming for hydrogen or hydrogen-rich gas generation for fuel cells. To avoid or simplify the gas separation process, sorption-enhanced and chemical looping steam reforming of hydrocarbons, particularly methane, is promising. However, the cyclability of the CO2 sorbent, the oxygen carrier catalyst, anti-coking and slow kinetics, etc., are all major challenges. It is desirable to develop new low-cost catalysts that are both chemically and mechanically stable, tolerant to impurities and resistant to coking, with high catalytic activities towards reforming and partial oxidation of hydrocarbons for hydrogen production and fuel cell applications. | One of the most attractive routes for the production of hydrogen or syngas for use in fuel cell applications is the reforming and partial oxidation of hydrocarbons. The use of hydrocarbons in high temperature fuel cells is achieved through either external or internal reforming. Reforming and partial oxidation catalysis to convert hydrocarbons to hydrogen-rich syngas plays an important role in fuel processing technology. The current research in the area of reforming and partial oxidation of methane, methanol and ethanol includes catalysts for reforming and oxidation, methods of catalyst synthesis, and the effective utilization of fuel for both external and internal reforming processes. In this paper the recent progress in these areas of research is reviewed along with the reforming of liquid hydrocarbons, and an overview of the current best-performing catalysts for the reforming and partial oxidation of hydrocarbons for hydrogen production is provided.
Inspired by the adaptation phenomenon of biological neuronal firing, we propose regularity normalization: a reparameterization of the activations in a neural network that takes into account the statistical regularity in the implicit space. By considering the neural network optimization process as a model selection problem, the implicit space is constrained by the normalizing factor, the minimum description length of the optimal universal code. We introduce an incremental version of computing this universal code as the normalized maximum likelihood, and demonstrate its flexibility to include data priors such as top-down attention and other oracle information, as well as its compatibility with batch normalization and layer normalization. Preliminary results show that the proposed method outperforms existing normalization methods in tackling limited and imbalanced data from a non-stationary distribution, benchmarked on a computer vision task. As an unsupervised attention mechanism given input data, this biologically plausible normalization has the potential to deal with other complicated real-world scenarios as well as reinforcement learning settings where the rewards are sparse and non-uniform. Further research is proposed to explore these scenarios and the behaviors of different variants. | Considering the neural network optimization process as a model selection problem, we introduce a biologically plausible normalization method that extracts statistical regularity under the MDL principle to tackle imbalanced and limited data issues.
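A minimal sketch of how an incremental normalized maximum likelihood (NML) factor of the kind described above could be computed is given below. The Gaussian model class, the Welford-style running statistics and the exact form of the normalizer are our assumptions for illustration; the paper's estimator may differ:

```python
import numpy as np

class IncrementalNML:
    """Sketch of incremental NML scoring for layer activations, assuming a
    Gaussian model class (our assumption). The normalizer `comp` accumulates
    the maximum likelihoods of all data seen so far, approximating the MDL
    'optimal universal code' denominator in an online fashion."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # running Gaussian stats
        self.comp = 0.0                            # accumulated normalizer

    def _update(self, x):
        # Welford's online update of mean and (unnormalized) variance
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def score(self, x):
        self._update(x)
        var = max(self.m2 / self.n, 1e-6) if self.n > 1 else 1.0
        # maximum likelihood of x under the best-fit Gaussian so far
        p_hat = np.exp(-(x - self.mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        self.comp += p_hat
        return p_hat / self.comp  # NML-style normalized score in (0, 1]

# usage: score a stream of scalar activations one at a time
nml = IncrementalNML()
print([round(nml.score(x), 3) for x in [0.1, 0.2, 0.15, 3.0]])
```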
and/or classifications were reported, in some cases with prior traditional single-cell segmentation. As an example, one study used labeled full-resolution images to train a deep neural network, which gave slightly better treatment-level classification results compared to previously reported predictions using segmentation and factor analysis. Notably, a relatively low number of images was used to train the network and no prior segmentation and labeling of single cells was required. Nevertheless, labeled training data sets imply a priori knowledge of phenotypes, which can contradict the unbiased strategy of image-based profiling. Two recent studies propose to use generic deep neural networks that were pre-trained on millions of ‘consumer’ images for image-based profiling tasks. The approaches are based on the assumption that generic neural networks have learned general properties of natural images and are thus capable of extracting biologically meaningful information without additional training. Both studies report better results compared to traditional feature extraction when predicting small molecule MOA and provide a proof-of-concept for the applicability of generic deep neural networks to image-based small molecule profiling. As noted by the authors, additional studies with larger data sets across conditions to sample a broader biological and technical space will be required for further validation. Another recently explored application of supervised learning in image-based profiling, particularly deep neural networks, is a novelty detection framework to identify unexpected phenotypes. Label-free profiling and the prediction of targeted drug screening assays are also future approaches exploiting image-based profiling data. Image-based profiling studies have demonstrated the capability to improve the pre-clinical development of small molecules at almost any step of the pipeline, from target identification over mechanism of action prediction to toxicity profiling. Increasing the throughput of image-based phenotypic screens and extending more complex analysis methods for profiling approaches will help to broaden the methodological portfolio of cellular screens to support the drug development process. Community efforts to create annotated datasets that can be shared across laboratories will be required to test and optimize the potential of strategies such as transfer learning to improve discovery science. Furthermore, large-scale chemical-genetics approaches inspired by successful studies in model organisms might harbor great potential to characterize drugs and drug-gene interactions in a systematic manner. Particularly, image-based profiling approaches in pre-selected informer panels of human cell lines might be a scalable and versatile tool to deprioritize compounds harboring adverse effects, assess compound efficacy and generate hypotheses for drug synergism and repurposing. This work was in part supported by an ERC Advanced Grant of the European Commission.
| The increase in imaging throughput, new analytical frameworks and high-performance computational resources open new avenues for data-rich phenotypic profiling of small molecules in drug discovery. Image-based profiling assays assessing single-cell phenotypes have been used to explore mechanisms of action, target efficacy and toxicity of small molecules. Technological advances to generate large data sets together with new machine learning approaches for the analysis of high-dimensional profiling data create opportunities to improve many steps in drug discovery. In this review, we will discuss how recent studies applied machine learning approaches in functional profiling workflows with a focus on chemical genetics. While their utility in image-based screening and profiling is predictably evident, examples of novel insights beyond the status quo based on the applications of machine learning approaches are just beginning to emerge. To enable discoveries, future studies also need to develop methodologies that lower the entry barriers to high-throughput profiling experiments by streamlining image-based profiling assays and providing applications for advanced learning technologies such as easy to deploy deep neural networks.
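As an illustration of the "generic pre-trained network" strategy discussed in this row, the sketch below embeds microscopy images with an ImageNet-pretrained ResNet used purely as a frozen feature extractor and averages the embeddings into a per-well profile. The model choice, preprocessing and mean-pooling are our assumptions, not those of the cited studies:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# A ResNet pre-trained on 'consumer' images, used without fine-tuning.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # drop the classifier, keep 512-d features
model.eval()

preprocess = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def well_profile(images):
    """images: list of PIL RGB images from one well -> 512-d mean profile."""
    batch = torch.stack([preprocess(im) for im in images])
    return model(batch).mean(dim=0)  # aggregate single-image embeddings
```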
the same number of artificial barriers as in alternative 7, but chooses the reaches and barriers randomly. Note that alternatives 2–7 illustrate alternative management strategies, whereas alternative 1 is used to compare the other alternatives with the current state. Comparing alternatives 7 and 8 illustrates the different effects of strategic versus random selection of rehabilitation activities. The alternatives are explained and visualized in Section S5 in the supporting information. For morphologically rehabilitated river sections at sites without rehabilitation constraints within 15 m, we assumed 50% probability for the best and 50% probability for the second best level of the discrete attributes and uniform distributions from 10 to 15 m for the riparian zone width. For sections with rehabilitation constraints on one side, we assumed uniform distributions from 2 to 5 m for the riparian zone width on the constrained side of the river. For barriers, we assumed that they were removed or replaced by a construction that can be passed by fish, e.g. a bed ramp with large blocks. If not otherwise mentioned, agricultural land use and thus water quality remained the same as in the current state. For alternatives 4, 7 and 8, in which land use by intensive agriculture was limited to 40%, current land use fractions were modified accordingly. In both cases, water quality valuation and its uncertainty were predicted based on the linear regression model, considering parameter and residual uncertainty. Median costs were estimated to be CHF 2'000 per m of morphologically rehabilitated river and CHF 100'000 per replacement of an artificial barrier by a bed ramp. We used normal distributions with standard deviations of 33% around these estimates to account for uncertainty. We did not account for costs for the reduction of intensive agriculture, as we assumed that farmers can earn a similar salary by organic farming. Fig. 9 shows the predicted value distributions of the relevant nodes of the objectives hierarchy shown in Fig. 5 for all decision alternatives. Removing culverts or choosing rehabilitation sections randomly leads to a considerably smaller gain in the ecological state of the river network than a strategic choice of sections and nodes at similar costs. The importance of integrative planning is demonstrated by the comparison of rehabilitation of a main branch with and without accompanying water quality improvements. It is remarkable that the significant differences in the valuation of outcomes at lower levels of the objectives hierarchy are strongly decreased at the highest level. This is a consequence of two mechanisms: first, cheaper alternatives tend to have less effect; second, the high uncertainty about willingness to pay for river rehabilitation tends to make the remaining differences less significant. Only two alternatives, 4 and 7, lead with some confidence to a good ecological state of the river network. Of these two, 7 is more expensive, but leads to better results, in particular regarding connected habitats. Given these results, further steps would be to acquire more local information at the rehabilitation sites of these alternatives and to try to find better alternatives, starting with modifications of these two. This process could be stimulated by the detailed geographical outline of the alternatives and their consequences as shown in Section S5 in the supporting information. We argue for combining probability theory and scenario planning with multi-attribute utility theory as a conceptual framework for environmental decision support. We discuss the need for adaptations, extensions, and didactical support of these theories to improve their applicability in environmental management. This partially accounts for weak points criticized by developers and users of alternative approaches. In the following sub-sections we briefly summarize the most important suggested adaptations and extensions and conclude with final comments. Depending on the context, knowledge may be described by objective or subjective probabilities. In decision making for environmental management, probabilities should represent the state of knowledge of the scientific community about outcomes of decision alternatives. We argue that intersubjective probabilities provide the best framework for this purpose. This is rarely discussed explicitly, although combinations of probability statements of multiple experts are often used for scientific prediction, and multiple opinions in peer review processes are the basis of scientific quality control. Although there are convincing arguments for using probabilities to describe scientific knowledge, the limited capability of experts to quantify these probabilities and disagreements between experts can call for an extension to imprecise probabilities. The degree of imprecision can then be used to quantify the transition from cases in which quantitative decision support is suitable to cases in which the knowledge is insufficient. In the latter case, other criteria, such as the precautionary principle, or probability distributions of the predicted change instead of absolute predictions, may be used to support decisions. In some cases, due to too large ambiguity, scientists may even hesitate to formulate their predictions as imprecise probabilities. Here, it may be useful to combine alternative future scenarios with conditional probabilistic predictions and search for decision alternatives that are robust regarding the scenarios. Although utility and not value functions are the basis for rational decision support under risk, we emphasize the importance of value functions. Eliciting values and transforming them to utilities only at high hierarchical levels has several advantages compared to eliciting utilities directly throughout the objectives hierarchy: elicitation of a hierarchical, multi-attribute value function is easier than that of a utility function; this avoids confounding the strength of preference for outcomes with risk attitudes and makes it possible to analyze the degree of fulfillment of objectives to stimulate the improvement of alternatives; the probability distribution of values can already give relevant insights into the decision problem under risk, even if utilities are ultimately required to generate the ranking of alternatives; | Environmental decision support intends to use the best available scientific knowledge to help decision makers find and evaluate management alternatives. The goal of this process is to achieve the best fulfillment of societal objectives. This requires a careful analysis of (i) how scientific knowledge can be represented and quantified, (ii) how societal preferences can be described and elicited, and (iii) how these concepts can best be used to support communication with authorities, politicians, and the public in environmental management. The goal of this paper is to discuss key requirements for a conceptual framework to address these issues and to suggest how these can best be met. We argue that a combination of probability theory and scenario planning with multi-attribute utility theory fulfills these requirements, and discuss adaptations and extensions of these theories to improve their application for supporting environmental decision making. With respect to (i) we suggest the use of intersubjective probabilities, if required extended to imprecise probabilities, to describe the current state of scientific knowledge. To address (ii), we emphasize the importance of value functions, in addition to utilities, to support decisions under risk. We discuss the need for testing "non-standard" value aggregation techniques, the usefulness of flexibility of value functions regarding attribute data availability, the elicitation of value functions for sub-objectives from experts, and the consideration of uncertainty in value and utility elicitation. With respect to (iii), we outline a well-structured procedure for transparent environmental decision support that is based on a clear separation of scientific prediction and societal valuation. We illustrate aspects of the suggested methodology by its application to river management in general and with a small, didactical case study on spatial river rehabilitation prioritization.
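The value-versus-utility distinction made above can be illustrated with a small numerical sketch: attribute-level values are aggregated additively, outcome uncertainty is propagated by Monte Carlo sampling, and an exponential utility is applied only at the top level. All numbers (weights, Beta distributions, risk aversion) are invented for illustration and are not the case study's elicited values:

```python
import numpy as np

rng = np.random.default_rng(0)

w = np.array([0.4, 0.35, 0.25])  # assumed weights over three sub-objectives

def sample_values(n):
    # uncertain predicted values in [0, 1] for one alternative (assumed Betas)
    return np.column_stack([
        rng.beta(8, 3, n),   # e.g. morphology value
        rng.beta(5, 5, n),   # e.g. water quality value
        rng.beta(2, 6, n),   # e.g. connectivity value
    ])

def utility(v, risk_aversion=1.5):
    # exponential utility applied to the aggregated value, mapping 0->0, 1->1
    return (1 - np.exp(-risk_aversion * v)) / (1 - np.exp(-risk_aversion))

vals = sample_values(10_000) @ w  # distribution of the aggregated value
print("mean value:", vals.mean(), "5-95%:", np.percentile(vals, [5, 95]))
print("expected utility:", utility(vals).mean())
```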
actin and myosin XI isoforms to respond to strong and various environmental stimuli. Furthermore, the specific expression of many actin and myosin XI isoforms in reproductive tissues may reflect that their appearance was necessary for the evolution of the reproductive system from using sperm to using the pollen tube in angiosperms. In this regard, it is interesting to investigate the involvement of pollen-specific myosin XI isoforms in pollen tube guidance, which is the essential system for reproduction in angiosperms. Taken together, the actin–myosin XI cytoskeleton may have acquired diverse higher functions during the coevolution of myosin XI and actin isoforms in higher plants. For a comprehensive understanding of actin–myosin XI as a control network, it will be necessary to determine the functions of all the myosin XI and actin isoforms. However, it is difficult to reveal the function of individual myosin XI isoforms because most myosin XI single knockouts exhibit no significant phenotype in Arabidopsis. Our previous study established a technique to produce chimeric myosin XI-2 with an altered speed by replacing the original motor domain of Arabidopsis myosin XI-2 with high- or low-speed motors. Transgenic Arabidopsis expressing the chimeric myosin XI-2 showed remarkable phenotypes and an apparent relationship between cytoplasmic streaming and plant growth. This chimeric myosin XI system provides a powerful tool to elucidate the specific functions of individual myosin XI isoforms, because the chimeric myosin XIs that use the same tail domain retain the binding ability to the specific cargo of native myosin XIs in Arabidopsis. As more information about the enzymatic and motile activities, genetic analysis of gene knockouts and the chimeric myosin XIs becomes available, future research will elucidate in detail the mechanism of the actin–myosin XI cytoskeleton for intracellular transport in various tissues of higher plants. This work was supported by grants from the Japan Society for the Promotion of Science KAKENHI, and a grant from the Japan Science and Technology Agency ALCA. The authors have no conflicts of interest to declare. | Actin is one of the three major cytoskeletal components in eukaryotic cells. Myosin XI is an actin-based motor protein in plant cells. Organelles are attached to myosin XI and translocated along the actin filaments. This dynamic actin–myosin XI system plays a major role in subcellular organelle transport and cytoplasmic streaming. Previous studies have revealed that myosin-driven transport and the actin cytoskeleton play essential roles in plant cell growth. Recent data have indicated that the actin–myosin XI cytoskeleton is essential for not only cell growth but also reproductive processes and responses to the environment. In this review, we have summarized previous reports regarding the role of the actin–myosin XI cytoskeleton in cytoplasmic streaming and plant development and recent advances in the understanding of the functions of the actin–myosin XI cytoskeleton in Arabidopsis thaliana.
dummy equal to one if the household hosted another family or family member during the conflict. This indicator can be considered symmetrical to the one for residence damage. In fact, the further away a household is from the border, the less it is affected by the conflict and the more it is expected to offer help to other families or individuals. Furthermore, this indicator can be considered a proxy of the household's capacity to react and resist shocks. As expected, the effect of hosting another family or family member is symmetrical to the effect of reporting residence damage, which is the direct indicator of conflict exposure. In fact, we find a statistically significant and positive effect on the RCI as a result of hosting another family or family member, through an increase in AC and a reduction in SSN and ABS. First-step results are reported in column 2 of Table A13. As a final robustness check, we adopt an alternative indicator of household food security as an outcome variable. Specifically, we employ the HFIAS score. Table A15 shows the second-step IV results of the conflict's effect on the HFIAS variation. As expected, the effect of the conflict is positive: the conflict has increased the level of food insecurity for Gazan households. However, the effect is not statistically significant when conflict exposure is instrumented with the distance from the household to the border. Some unobserved factors, such as aspirations or expectations about the future, may play a role in explaining household food security measured by HFIAS, due to the subjective components of the questions. This may explain why, when the unobserved heterogeneity is controlled for by the IV approach, the effect of the conflict loses significance. In this paper, we study how a short but intense conflict affected the resilience capacity and food security of households in the Gaza Strip. By comparing the resilience capacity of households just before and after the 2014 conflict, we are able to identify the causal effects on key outcomes of interest. We find that while the conflict reduced the overall resilience capacity of households to a certain extent, it also induced an aid response which led to an increase in access to basic services and to social safety nets for conflict-exposed households in the Gaza Strip. The importance of this finding is threefold, including from a policy perspective. First, and in line with the significant volume of literature on the micro-economics of conflict, the results highlight the importance of the health and social sectors for development in a conflict-affected economy. From medical services, to potable water access and sanitation, to education, the recovery and resumption of these basic services is critical for household resilience capacity, both for households that are directly and indirectly affected by conflict. Second, and beyond basic government services, the results indicate the importance of labor markets in achieving sound household resilience capacity. In particular, labor markets in the Gaza Strip were unable to provide the income streams households needed in order to maintain their livelihoods. Labor markets in the Gaza Strip are highly regulated and by no means free and flexible, and in the case of the Gaza conflict the negative effects of these restrictive labor markets for Palestinians were compounded further by the conflict. Third, the results indicate the importance of the humanitarian response to conflict. Development and humanitarian responses to conflict are often analyzed separately. This paper demonstrates the relevance that quick, short-term humanitarian aid deliveries can have for the resilience capacity of households. This is likely to have a long-lasting impact in the Gaza Strip, which continues to be a challenging environment for human development even in the absence of active conflict. In other words, this paper supports the idea of bridging humanitarian and development interventions, at least within the framework of conflict response mechanisms. Another major finding of this paper, which also contributes to the literature on the nexus between conflict, resilience and food security, is the reduction of adaptive capacity, which ultimately translates into a contraction of household resilience. While a potential negative effect on education is not detected with a short panel dataset such as the one we adopted, the analysis clearly demonstrates how the 2014 conflict induced a contraction in income sources and stable employment. The reduction of local employment opportunities is an immediate negative effect which can be attributed to the conflict. This, besides being in line with the existing literature, provides clear policy indications for an immediate response plan. From a policy perspective, the case of the Gaza conflict also demonstrates that immediate and significant support to victims of conflict can indeed help restore resilience capacity. This is an important finding at a time when support for conflict victims is being increasingly encouraged by people in Western democracies. What remains to be investigated is whether such support could even be provided while conflict is ongoing, as in the case of the recent conflicts in Syria and South Sudan. From a research perspective, the ways in which resilience capacity is recovered in the long term, several years after the end of a conflict, still need to be studied. The literature also needs to establish how lower intensity conflict impacts resilience capacity. Most importantly, we need to understand if either type of conflict – lower intensity and higher intensity – may force households below a lower critical threshold of resilience capacity, from which households cannot recover without external assistance. This threshold may be lower for individual households, but higher if a large number of households are concurrently affected by conflict. In the extreme scenario, | This paper studies how conflict affects household resilience capacity and food security, drawing on panel data collected from households in Palestine before and after the 2014 Gaza conflict. During this escalation of violence, the majority of the damage in the Gaza Strip was concentrated close to the Israeli border. Using the distance to the Israeli border to identify the effect of the conflict at the household level through an instrumental variable approach, we find that the food security of households in the Gaza Strip was not directly affected by the conflict. However, household resilience capacity that is necessary to resist food insecurity declined among Gazan households as a result of the conflict. This was mainly due to a reduction of adaptive capacity, driven by the deterioration of income stability and income diversification. However, the conflict actually increased the use of social safety nets (expressed in the form of cash, in-kind or other transfers that were received by the households) and access to basic services (mainly access to sanitation) for the households exposed to the conflict. This finding may be related to the support provided to households in the Gaza Strip by national and international organizations after the end of the conflict. From a policy perspective, the case of the conflict in the Gaza Strip demonstrates that immediate and significant support to victims of conflict can indeed help restore resilience capacity.
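The instrumental-variable strategy summarised above (distance to the border instrumenting conflict exposure) can be sketched as a plain two-stage least squares. The implementation below is a bare-bones illustration; the paper's actual specification includes further controls and appropriate standard errors:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z, controls=None):
    """Minimal 2SLS sketch. y: outcome (e.g. change in RCI); x_endog:
    conflict exposure indicator; z: instrument (distance to the border);
    controls: optional exogenous covariates. Standard errors would need
    the usual 2SLS correction, omitted here for brevity."""
    n = len(y)
    ones = np.ones((n, 1))
    W = ones if controls is None else np.column_stack([ones, controls])
    # First stage: regress exposure on the instrument plus controls
    Z = np.column_stack([W, z])
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    # Second stage: regress the outcome on fitted exposure plus controls
    X2 = np.column_stack([W, x_hat])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[-1]  # coefficient on instrumented conflict exposure
```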
of an increase of classroom size by one student in Israel or Sweden or by three students in California. The reduced-form results contrast with the current body of evidence, which tends to find either positive or null effects of school competition, with the notable exception of a negative effect reported by Imberman. One reason why our study is unique is that we capture an exogenous change in the threat of competition. This way, we can account for the fact that actual entries into educational markets may be endogenous to school conduct or the reactions of existing schools. In particular, principals might have an incentive to block the entry of a new school. Entry deterrence might affect the performance of students, however, in a different way than actual competition. Our paper is similar to Hoxby, which also exploits changes in the threat of competition, but finds a positive reduced-form effect on student performance. We focus on the short run, in which there is only a limited set of actions available to school principals. The negative effect can be driven by the outflow of good students, an adjustment in available resources or a negative change in productivity. We exclude student sorting and adjustments in schools' expenditures as potential channels, and conclude that the threat of competition might have a negative effect on school productivity. More research is needed to fully understand the mechanisms at play. The anecdotal evidence suggests that when decisions are made in the short run, school principals may use simple marketing actions to attract parents, such as school trips. These activities might shift the attention of teachers and students away from learning. Therefore, the promotion of performance-based school rankings or an accountability system might alleviate the short-run negative impact of the threat of competition. Apart from the unique possibility to analyse exogenous variation in the threat of competition that the Polish reform enables, the Polish case is interesting for other reasons. The Economist wrote: “Poland has made some dramatic gains in education in the past decade. Before 2000 half of the country’s rural adults had finished only primary school. Yet international rankings now put the country’s students well ahead of America’s in science and maths, even as the country spends far less per pupil. What is Poland doing right? And what is America doing wrong?” In other words, by studying the determinants of student performance in countries like Poland, we can also learn how to improve education systems in highly developed economies. | Theoretical literature on whether school competition raises public school productivity is ambiguous (e.g. MacLeod & Urquiola, 2015) and empirical evidence is mixed. Moreover, competition might itself be an outcome of changes in productivity (e.g. Hoxby, 2003). We provide evidence for the negative effect of the threat of competition on students’ test scores in elementary public schools in Poland. The identification strategy uses the introduction of the amendment facilitating the creation of autonomous schools in Poland in 2009 as an external shock to the threat of competition. We focus on the short run, in which there is only a limited set of actions available to school principals. For the total sample we find no effect; however, for the more competitive urban educational markets, we report a drop in test scores in public schools following the increased threat of competition. This negative effect is robust to the existence of autonomous schools prior to the amendment and to the size of public schools. We exclude student sorting and adjustments in schools’ expenditures as potential channels.
the growth medium was supplemented with cAMP, substantiating the serine-mediated interference in the cAMP-CRP control of gene expression. To be sure, this effect was reverted in a crp* background. It was therefore interesting to isolate secondary mutants that would again be resistant to excess serine in order to better understand how CRP was involved in this regulation. A new class of CRP mutants was identified in E. coli cya relA crp* strains. These mutants mapped in the crp gene, and their physiological features differed from both the wild type crp and the crp* allele. However, they could not be studied in more depth at the time. Exploring this selection procedure with the “omics” techniques that are now familiar should allow us to enter a new evolution landscape of the protein. Similar approaches could be developed to study other global regulators. In general, letting genes that are expressed under stationary conditions evolve should bring about new observations in the uncharted territory of adaptive mutations. We would like to thank the three anonymous reviewers for their positive, insightful and constructive comments. This work was supported by the Novo Nordisk Foundation. | The Escherichia coli cyclic AMP receptor protein (CRP or catabolite activator protein, CAP) provides a textbook example of bacterial transcriptional regulation and is one of the best studied transcription factors in biology. For almost five decades a large number of mutants, evolved in vivo or engineered in vitro, have shed light on the molecular structure and mechanism of CRP. Here, we review previous work, providing an overview of studies describing the isolation of CRP mutants. Furthermore, we present new data on deep sequencing of different bacterial populations that have evolved under selective pressure that strongly favors mutations in the crp locus. Our new approach identifies more than 100 new CRP mutations and paves the way for a deeper understanding of this fascinating bacterial master regulator.
the reaction was monitored by UV–vis spectroscopy as the decrease in α-terpinene absorption at 266 nm. Fig. 6 shows UV–vis spectra of α-terpinene recorded in the course of illumination of the pCB-PDI layer. The clear decrease in α-terpinene absorption at 266 nm results from its reaction with 1O2. Since almost no drop in the absorption of α-terpinene is observed during illumination of bare borosilicate glass, self-degradation of the substrate is excluded. Similarly to the methanol environment, the pCB-PDI layer is not destroyed or dissolved in the acetonitrile reaction medium during the photoprocess. The rate constant of the photooxidation of α-terpinene was determined based on the decrease in its concentration over time. The plot of ln(C0/C) as a function of time gives a straight line, indicating a pseudo first-order reaction under the applied conditions. It was also found that only a small drop in effectiveness was observed when the pCB-PDI layer was re-used: a ca. 6% decrease in the photoreaction yield for the 5th use, which confirms that the deposited layer maintains its high photoactivity after photooxidation and can be re-used in consecutive processes. Note that, since the synthesized polymers are soluble in chloroform and/or chlorobenzene, they can be easily deposited on solid substrates such as glass by e.g. the spin coating technique. On the other hand, thanks to their insolubility and high stability in solvents such as acetonitrile or methanol, they can be applied as an effective source of singlet oxygen in the heterogeneous synthesis of fine chemicals. The results obtained for pCB-PDI indicate that this type of material, consisting of conjugated polymers and PDI moieties, can be used as a complementary sensitizer. It can be expected that the investigated systems may find application in photooxidation processes using light sources with a broader illumination wavelength range, for example white light. The use of a common light source and of solid sensitizers is considered beneficial for industrial processes and is in agreement with green chemistry concepts. In summary, different conjugated polymers with pendant PDI groups deposited on a glass support were investigated as a source of singlet oxygen in methanol and acetonitrile under green light illumination. PDI-based polymer films show high efficiency in 1O2 photogeneration, tested by the reaction with the DPBF specific trap. The presented results show that such materials can be effectively applied in the form of thin photoactive films in the commercially important process of α-terpinene oxidation without introducing additional reagents into the reaction mixture. Moreover, green light irradiation is beneficial compared to the higher energy irradiation often used in the generation of singlet oxygen, due to the lower probability of side reactions. | New conjugated polymers with perylene diimides (PDI) as pendant groups were synthesized and deposited on glass substrates by spin coating. The resulting thin films were characterized by UV–vis and Raman spectroscopy, atomic force microscopy and profilometry. It was shown that the PDI photosensitizers retain their photoactivity after covalent immobilization and the formed layers can be applied as an efficient and environmentally stable source of singlet oxygen, 1O2, as tested with the 1,3-diphenylisobenzofuran (DPBF) specific trap. Additionally, α-terpinene heterogeneous photooxidation was studied as a practical use of singlet oxygen generated by these novel PDI-based materials. The use of such a heterogeneous source of singlet oxygen can be beneficial for fine chemicals synthesis, due to a simplified product isolation and purification step.
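The pseudo first-order analysis described above amounts to a linear fit of ln(C0/C) against illumination time. A short sketch with invented absorbance data (for illustration only; not the study's measurements) shows the procedure:

```python
import numpy as np

# Pseudo first-order rate constant from the decay of alpha-terpinene
# absorbance at 266 nm: ln(A0/A) vs t should be linear with slope k_obs.
t = np.array([0, 5, 10, 15, 20, 25])                 # illumination time, min
A = np.array([1.00, 0.82, 0.67, 0.55, 0.45, 0.37])   # absorbance at 266 nm

y = np.log(A[0] / A)
k_obs, intercept = np.polyfit(t, y, 1)               # fit y = k_obs * t + b
r2 = np.corrcoef(t, y)[0, 1] ** 2
print(f"k_obs = {k_obs:.3f} min^-1, R^2 = {r2:.4f}")  # R^2 near 1 => first order
```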
flat cross-section area provides better enhancement in both Nusselt number and friction factor, with 5% and 3% enhancement respectively. Furthermore, this finding is consistent with Fiebig et al. The effect of fin perforation shape (circular, square and triangular perforation) was investigated and is illustrated in Fig. 4 for the Nusselt number and friction factor respectively. The results present a considerable enhancement in Nusselt number with the perforation technique, where the perforation provides 8.5%, 13.6% and 18.4% enhancement using circular, square and triangular perforation respectively. It can be observed that the triangular perforation offers the best enhancement at Re = 16,500, followed by the square and circular perforations; this can be attributed to the formation of secondary flow or vortices and flow fluctuations close to the inner wall of the perforated fin, which enhance the thermal performance of the heat exchanger. This gives an advantage to using the perforation technique with a triangular perforation shape. Meanwhile, the perforated fins give a 14.64%, 21.63% and 33.23% increase in friction factor for circular, square and triangular perforation respectively. The finding indicates that the triangular perforation shape gives the highest friction factor increase, followed by the square and circular perforation shapes; this can be credited to the initiation of fluctuations and swirling flow close to the inner wall of the perforated fin, which provides a wide contact area and leads to an increased friction factor. The results also clarify that with increasing Reynolds number the Nusselt number increases as well, while the friction factor decreases. The temperature contours of the perforated and non-perforated fins are illustrated in Fig. 5. The temperature contours were obtained at an air velocity of 3 m/s. The results showed a decrease in the size of the temperature distribution over the fin wall in the downstream zone. It can be seen from Fig. 5 that the solid fin shows the lowest removal of the temperature distribution, while the highest temperature reduction is obtained with the triangular perforation fin. Following the non-perforated fins, perforations are introduced in the downstream and upstream zones of the perforated fins in order to enhance the fluid flow characteristics and improve the heat transfer rate. As an effect of the air flow stream, high temperatures are displayed in the solid fin. The upstream part of the fin therefore provides a higher reduction in temperature compared to the downstream part. Additionally, for the perforated fins, a decrease in the recirculation zone can be observed. The temperature reduction increases depending on the perforation shape, where, based on the downstream air temperature, the triangular perforation shape offers the highest temperature reduction, followed by the square, circular and solid fins. A numerical thermal analysis of the finned-tube geometry with different fin perforation shapes (circular, square and triangular) under a turbulent flow regime was developed in this paper. A satisfactory agreement was found between the present results and the references, with a maximum deviation of 7% for the finned circular tube with a solid fin. The results indicated that the flat tube displays more heat transfer enhancement than the circular tube, and that the triangular perforation shape provides the highest enhancement in heat transfer rate represented by the Nusselt number; moreover, the triangular perforation shape gives the highest pressure drop represented by the friction factor. Furthermore, the triangular perforation model offers a considerable benefit due to the increase in the Nusselt number compared to the pressure drop. | In this paper, the heat transfer and flow characteristics of air over a flat finned tube with perforated and non-perforated fins have been investigated numerically. The mesh generation and finite volume analyses have been conducted using Ansys 15 with an RNG k-ε turbulence model to estimate the heat transfer coefficient and pressure drop. Free stream velocities of 3, 4, 5, 6 and 7 m/s have been applied for all cases in the simulation and verified with the available data. A satisfactory agreement was found between the present results and the references, with a maximum deviation of 7% for the finned circular tube with a solid fin. The results present a considerable enhancement in Nusselt number with the perforation technique, where the perforation provides 8.5%, 13.6% and 18.4% enhancement using circular, square and triangular perforation respectively. The triangular perforation model offers a considerable benefit due to the increase in the Nusselt number compared to the pressure drop.
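The reported Nusselt-number and friction-factor enhancements can be combined into the common thermo-hydraulic performance criterion eta = (Nu/Nu0) / (f/f0)^(1/3). This criterion is not evaluated in the excerpt itself; the sketch below is our post-processing of the quoted percentages:

```python
# Enhancement ratios taken from the excerpt: (Nu/Nu0, f/f0) per shape.
perforations = {
    "circular":   (1.085, 1.1464),
    "square":     (1.136, 1.2163),
    "triangular": (1.184, 1.3323),
}

for shape, (nu_ratio, f_ratio) in perforations.items():
    eta = nu_ratio / f_ratio ** (1 / 3)  # performance evaluation criterion
    print(f"{shape:10s}: Nu/Nu0={nu_ratio:.3f}, f/f0={f_ratio:.3f}, eta={eta:.3f}")
```

Under this (assumed) criterion, triangular perforation still comes out ahead, since its Nusselt gain outweighs the cube-root-penalised friction increase.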
Fig. 8. Similarly, for convenience of research, the structural parameter “vertical distance” is defined as V, as shown in Fig. 9. The vertical distance between the two gold particles is increased with H = 0 nm. A new absorption band is clearly obtained when V is increased, as shown in Fig. 8. When V = 800 nm, the maximum absorption band is achieved. It should be noted that the average absorption rate is reduced as V increases further. Moreover, when V = 1400 nm, the absorption band disappears and another new absorption peak can be found at 13.5 THz, as shown in Fig. 8. Fig. 9 shows the calculated electric field distributions. LSP modes are excited near one edge of each gold particle. The resonance strength around the two gold particles is enhanced with increasing V, as seen in Fig. 9. These resonance behaviors lead to the absorption increase in Fig. 8. Moreover, the interaction and coupling phenomenon is found with increasing V, as shown in Fig. 9. When V = 800 nm, LSP modes are excited around both gold particles. At the same time, the strength of the interaction and coupling effect is enhanced. These resonance behaviors lead to the maximum absorption band, as shown in Fig. 9. However, the interaction and coupling effect is reduced when V = 1100 nm, as seen in Fig. 9, which leads to the reduction of the absorption band in Fig. 8. When V = 1400 nm, the left gold particle and the bottom gold layer touch. Moreover, the original LSP modes around this gold particle can no longer be excited, as seen in Fig. 9. The right gold particle keeps away from the bottom gold layer, and LSP modes around each edge of this gold particle are excited, as shown in Fig. 9. These newly excited LSP modes around each edge of the right gold particle lead to the new absorption peak, as seen in Fig. 8. Finally, the absorption performance of the proposed metamaterial absorber is enhanced based on the phase change material Ge2Sb2Te5. The Ge2Sb2Te5 layer is a smart material, which undergoes a rapid phase transition when the temperature rises above 433 K. In the simulation, the dielectric constant of the Ge2Sb2Te5 layer is taken from reported results. To date, the phase change material Ge2Sb2Te5 has been applied in many fields, such as memories, electronic switches, and nanostructures. Fig. 11 shows the absorption spectrum of the proposed metamaterial absorber. It is found that the absorption spectrum shows two resonance phases. When the simulated temperature is lower than 433 K, the original absorption peak shows a slight increase. When the simulated temperature reaches 433 K, a new absorption band is observed. Moreover, the new absorption band continuously increases and shifts to lower resonance frequencies with increasing simulated temperature. This is because the Ge2Sb2Te5 layer is transformed to a crystalline phase when the temperature reaches 433 K. The absorption loss of electromagnetic waves is significantly increased in the Ge2Sb2Te5 layer, which then reveals a very dispersive dielectric constant with a high imaginary part. These resonance properties imply a high absorption loss of the electromagnetic wave in the Ge2Sb2Te5 layer, which leads to the new absorption peak. In order to further enhance the absorption performance of the absorber, the thickness of the dielectric layer was gradually increased. It is found that the absorption band is slightly increased, as shown in Fig. 12. The center frequency of the absorption band is almost unchanged. On the one hand, the absorption band mainly comes from the LSP mode resonance on the edges of the metal particles, as shown in Fig. 4. On the other hand, the LSP mode resonance can also be influenced by the surrounding dielectric environment of the metal particles. Therefore, the absorption band is slightly increased but the center frequency is unchanged. However, for the absorption peak, the amplitude is increased from 73% to 94% and the resonance frequency is shifted to lower frequencies. This is because the absorption peak mainly comes from the dielectric loss of the Ge2Sb2Te5 layer. In this paper, a metamaterial absorber is designed and simulated based on a metal particle array embedded in a media layer. The absorption performance is enhanced by modulating the horizontal or vertical distance between two metallic particles. Perfect absorption bands and new absorption peaks are obtained. The absorption performance is also improved by changing the simulated temperature. The effect of the Ge2Sb2Te5 layer thickness on the absorption performance is also revealed. | A compound structure metamaterial containing a metal particle array embedded in a dielectric layer is designed and simulated. The absorption property is modulated by changing the horizontal (H) or vertical (V) distances between two metallic particles. The interaction and coupling between LSP modes enhances the absorption performance. Two absorption bands are revealed based on the maximum resonance strength of the coupling effect. The absorbing properties are modulated because the Ge2Sb2Te5 layer is temperature sensitive. The effect of the thickness of the dielectric layer on the absorption performance is also revealed.
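The temperature-driven change of the Ge2Sb2Te5 layer is often modelled by interpolating between the amorphous and crystalline permittivities with an effective-medium rule. The Lorentz-Lorenz mixing sketch below is one such standard approach and is our assumption; the paper takes its dielectric data directly from the literature, and the permittivity values here are placeholders:

```python
import numpy as np

def gst_effective_permittivity(eps_amorphous, eps_crystalline, m):
    """Lorentz-Lorenz effective-medium mixing of amorphous/crystalline
    Ge2Sb2Te5 with crystallinity fraction m in [0, 1] (a common modeling
    assumption, not taken from the paper)."""
    ll = lambda e: (e - 1) / (e + 2)                   # Lorentz-Lorenz term
    mix = m * ll(eps_crystalline) + (1 - m) * ll(eps_amorphous)
    return (2 * mix + 1) / (1 - mix)                   # invert back to eps

# placeholder complex permittivities at one frequency (illustrative only)
eps_a, eps_c = 16 + 0.5j, 33 + 9j
for m in (0.0, 0.5, 1.0):
    print(f"m={m}: eps_eff={gst_effective_permittivity(eps_a, eps_c, m):.2f}")
```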
Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for reward functions that are intrinsic to the agent. Curiosity is one such intrinsic reward function, which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better. (c) We demonstrate limitations of prediction-based rewards in stochastic setups. Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/. | An agent trained only with curiosity, and no extrinsic reward, does surprisingly well on 54 popular environments, including the suite of Atari games, Mario etc.
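A minimal sketch of the prediction-error intrinsic reward with a fixed random feature encoder (one of the feature spaces compared in this abstract) is given below; the network sizes and architecture details are our choices, not the paper's:

```python
import torch
import torch.nn as nn

class CuriosityReward(nn.Module):
    """Intrinsic reward = forward-model prediction error in a frozen,
    randomly initialized feature space phi (a sketch; the paper's
    architectures and training details differ)."""

    def __init__(self, obs_dim, act_dim, feat_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))
        for p in self.phi.parameters():
            p.requires_grad_(False)   # random, frozen features
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + act_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))

    def forward(self, obs, action, next_obs):
        target = self.phi(next_obs).detach()
        pred = self.forward_model(torch.cat([self.phi(obs), action], dim=-1))
        # per-sample prediction error serves as the (only) reward signal
        return ((pred - target) ** 2).mean(dim=-1)

# usage: r_int = CuriosityReward(obs_dim=84, act_dim=4)(obs, act_onehot, next_obs)
```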
efficiency and biogas generation is seen mainly after the addition of the second stage. In another study, using a CSTR as the primary stage and an up-flow anaerobic sludge blanket reactor as the second stage, the results showed that the two-stage system is more stable at higher organic loading rates compared to a single stage involving only a CSTR. Observe that in both cases the CSTR performs optimally when used as a first stage. However, a major drawback of the aforementioned and many other studies involving multistage digestion is that the digester configuration is often predefined at the start of the study, with no systematic rule for answering the following key questions: what type of digester subunits to include in the network; how many individual digester subunits should be included; whether the subunits should be connected in series, parallel or both; and whether bypass or recycle streams should be included and, if yes, where within the system. The main advantage of the presented prototype compared to other multistage systems is that it has been designed based on a systematic framework that uses experimental data, which contains the necessary information about the kinetics of the process. In addition, by being a compact multistage system, the prototype can separate the acidogenic and methanogenic phases axially within the reactor, but without the high cost and control problems normally associated with multistage systems. Although the prototype is still to be subjected to experimental validation, it can be theorised to have the following advantages: simple design, low sludge generation, no requirement for biomass with special settling properties, no requirement for a special gas or sludge separation system, and stability to organic shocks. A natural progression of the study will be to subject the prototype to experimental testing, whereby it will be constructed and operated simultaneously with a conventional fixed dome system under similar experimental conditions. This will allow for the determination of optimal flow rates for the feed stream, the bypass stream and the effluent stream from the primary treatment stage. A very interesting continuation of the current study with respect to the fuzzy decision-making aspect will be to consider other scenarios for the use of anaerobic digestion technologies. The anaerobic digestion technology can be used for three main applications: renewable energy generation, sustainable nutrient recycling, and waste sanitation; different digester technologies are more adapted to one application than the others. This implies that the ranking of the digester technologies using the fuzzy method will be different if the application of the anaerobic digestion technology changes. This study has focused only on the use of anaerobic digestion for renewable energy generation. It will be interesting if further studies expand the fuzzy multicriteria decision to the other two applications of anaerobic digestion and compare the results for all three cases. More interestingly, because the method is novel and not very common in the field of anaerobic digestion, the ultimate research goal should be to integrate the methodological framework presented in this study into a web-based application, which can serve industry practitioners and researchers involved in the design of anaerobic digester systems. A framework that couples attainable regions and fuzzy multicriteria decision making for modeling configurations of anaerobic digesters without the use of a kinetic model has been developed. Taking a case study of the anaerobic treatment of abattoir effluent, the optimal batch policy involves four anaerobic sequencing batch reactors operated in series, with fresh feed being added at the second and the fourth stages. In the case of continuous mode operation, the optimal digester structure involves a continuous stirred tank digester with a bypass from the feed, followed by an anaerobic baffled digester, which has been modelled as a compact three-dimensional prototype. | This study sets out to develop an approach that couples attainable regions and fuzzy multicriteria decision methods for modeling optimal configurations of multistage digesters without using a kinetic model of the process. The approach is based on geometric analysis of methane curves, as their shapes contain valuable insight into substrate biodegradability characteristics during anaerobic digestion. With the case study of abattoir waste, the results indicate that the optimal batch operation policy involves four anaerobic sequencing batch reactors operated in series with fresh feed being added at the second and the fourth stages (fed-batch systems). For continuous mode operation, the optimal configuration involves a continuous stirred tank digester with a bypass from the feed followed by an anaerobic baffled digester, which has been used to obtain a novel prototype. The methodological framework presented in this study can be adopted to enhance the design of multistage anaerobic digesters, especially when reliable kinetic models are unavailable.
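How a fuzzy multicriteria ranking of digester technologies might look in code is sketched below, using triangular fuzzy numbers and centroid defuzzification. The criteria, weights and scores are invented placeholders, and the study's actual fuzzy method may differ:

```python
# Triangular fuzzy numbers (low, mode, high); all values are illustrative.
criteria_weights = {"energy yield": (0.3, 0.4, 0.5),
                    "stability":    (0.2, 0.3, 0.4),
                    "cost":         (0.2, 0.3, 0.4)}

tech_scores = {
    "CSTR": {"energy yield": (5, 6, 7), "stability": (4, 5, 6), "cost": (6, 7, 8)},
    "anaerobic baffled": {"energy yield": (6, 7, 8), "stability": (6, 7, 8), "cost": (4, 5, 6)},
}

def tfn_mul(a, b):
    # approximate product of two positive triangular fuzzy numbers
    return tuple(x * y for x, y in zip(a, b))

def centroid(tfn):
    # defuzzify a triangular number by its centroid
    return sum(tfn) / 3.0

ranking = {
    tech: sum(centroid(tfn_mul(scores[c], criteria_weights[c]))
              for c in criteria_weights)
    for tech, scores in tech_scores.items()
}
print(sorted(ranking.items(), key=lambda kv: kv[1], reverse=True))
```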
High temporal resolution neutron imaging is a technique from which several domains of science and engineering may profit, as there are large number of processes for which a high temporal resolution neutron imaging is an appropriate technique of investigation.It is the recent developments in sCMOS detector technology that allow for imaging of such processes with very high temporal resolution.The negligible readout-time and low read-out noise of the sCMOS cameras therefore allow for the “continuous” observation of non-cyclic processes with high-temporal resolution.The examples of high temporal resolution neutron imaging of industrially relevant samples utilizing the sCMOS technology include: the visualization of flows in liquid metals with temporal resolution of approximately 0.03 s in 2D , and on-the-fly tomography of water uptake in roots with sub-minute resolution in 3D .An overview of the high-temporal resolution imaging for studies of porous media has been recently provided by Kaestner et al. .It is necessary to mention here that the neutron imaging of even higher temporal resolution has hitherto been available only for the case of repetitive/cyclic processes using the stroboscopic modality of the neutron imaging .However, should one like to investigate non-cyclic processes using neutron imaging with 0.01 s temporal resolution, the principal limitation is posed by the available neutron flux.Even at advanced neutron sources, the flux is limited to about 107 n cm−2 s−1.This value translates to single captured neutrons per 100 × 100μm pixel per 0.01 s acquisition time.In order to alleviate the neutron intensity problem for the high temporal resolution imaging, we decided to utilize a neutron focussing guide.Neutron focussing guides with supermirror coatings are routinely used for number of experiments in neutron science , however, their use for increasing the neutron flux locally within neutron images is limited .This short paper presents the results of pilot tests using such experimental arrangement, thus increasing the available neutron flux at the expense of the available field of view and the depreciation of the available spatial resolution.A parabolic neutron guide was selected due to its availability at PSI and was utilized for high-temporal neutron imaging experiments at BOA beamline .BOA beamlines provide cold neutrons that possess the higher reflectivity and thus higher supermirror efficiency compared to thermal neutrons.The used neutron focussing guide had the following parameters.Its length equalled 1 m.The size of the entrance and exit windows were 25 × 25 mm × mm and 13.3 × 13.3 mm × mm, respectively.The parabolic-bent substrate is coated with a m = 3.6 supermirror.The experimental arrangement at the BOA beamline was as follows: 40 × 40 mm × mm aperture was used.The “standard” MIDI-camera box was placed at the measuring position 2.The flight path has been equipped with flight tubes and beam-limiters.The focussing guide was placed upstream the detector in such a manner the focal point of the guide was positioned approximately 10 cm behind the scintillator screen.The MIDI-box was equipped with 200 μm-thick 6LiF/ZnS scintillators screen and with a sCMOS detector coupled with a 50-mm lens.The resulting pixel in the image equalled 56.3 μm.As the neutrons were focussed into smaller area by the focussing guide, only 256 × 256 pixel area of the detector were acquired, limiting the field of view to approximately 14.4 × 14.4 mm × mm.The image showing the beam distribution in the field 
of view is shown in Fig. 1. The beam distribution exhibits clear focussing of the neutron beam both in the horizontal and in the vertical directions, creating an approximately flat-top area of about 3.5 × 3.5 mm in the centre of the image. The flux at this flat-top area is approximately an order of magnitude higher than it would be had the image been taken without the focussing guide. Naturally, the focussing guide has a significant influence on the neutron beam divergence and thus on the spatial resolution of the resulting images. In fact, the spatial resolution in the image is clearly rather non-uniform. This is manifested in the image of the test pattern, placed about 2.5 mm from the scintillator screen, shown in Fig. 1. From this qualitative assessment and the known thickness of the used scintillator screen, we infer that the spatial resolution throughout the entire image is no better than 200 μm, even for an acquisition time of only 0.01 s. Regarding the pilot test, we performed a model experiment that allowed us to observe the process of droplets of water falling into a container filled with heavy water and the subsequent process of the interaction of the two liquids. For these experiments, an in-situ titration system that allows for remote delivery of well-defined volumes of liquids onto the sample stage was assembled. Several droplets of water were dropped from a needle placed about 3 mm above the original D2O level in the container. The acquisition time of the radiographic series was set to 10 milliseconds. Fig. 2 shows 100 subsequent images of the fall of one such droplet into the container. As this droplet is not the first one that was dropped into the container, the original D2O level is already contaminated with the H2O from the preceding droplets. The droplet starts falling at about 0.03 s and hits the liquid surface in the container at 0.07 s. Between the times of 0.07 s and 0.14 s the droplet remains on the surface of the liquid in the container, and the changes in the profile of the liquid surface in the container due to the droplet impact can be clearly observed. From the time 0.15 | In order to partially overcome the neutron intensity problem for high temporal resolution imaging, a parabolic neutron focussing guide was utilized in the test arrangement and placed upstream of the detector in such a manner that the focal point of the guide was positioned slightly behind the scintillator screen. In a pilot test application, an in-situ titration system allowing for remote delivery of well-defined volumes of liquids onto the sample stage was utilized. |
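The neutron-starvation argument above can be checked with simple arithmetic. The following Python snippet is an illustrative back-of-envelope estimate, not a figure from the paper: the ~20% scintillator detection efficiency assumed for a 200 μm 6LiF/ZnS screen is a hypothetical value, while the flux, pixel sizes, frame time, and order-of-magnitude flux gain are taken from the text.

```python
# Back-of-envelope estimate of captured neutrons per pixel per frame.
# The 20% detection efficiency is an assumed, illustrative value.
FLUX = 1e7        # n cm^-2 s^-1 at an advanced neutron source
T_FRAME = 0.01    # s, acquisition time per radiograph
EFFICIENCY = 0.2  # assumed capture efficiency of the scintillator screen

def captured_per_pixel(pixel_um: float, flux_gain: float = 1.0) -> float:
    """Expected captured neutrons in one frame for a square pixel of given side."""
    area_cm2 = (pixel_um * 1e-4) ** 2
    return FLUX * flux_gain * area_cm2 * T_FRAME * EFFICIENCY

print(captured_per_pixel(100.0))                 # ~2 n/pixel/frame, no guide
print(captured_per_pixel(56.3, flux_gain=10.0))  # ~6 n/pixel/frame in the flat-top
```

Under these assumptions, even the focussed flat-top region yields only a handful of captured neutrons per pixel per 10 ms frame, which makes the trade-off between field of view, spatial resolution, and frame rate unavoidable.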
s the droplet starts submerging into the depth while mixing with the D2O volume, reaching the largest depth at about 0.35 s. As the density of D2O is higher than that of H2O, the H2O is then redistributed towards the surface of the liquid in the container by buoyancy forces, thus reinforcing the H2O-contaminated layer that had been formed by the previous droplets. It is noteworthy that already within approximately 1 s after the fall nearly all the volume of H2O is distributed close to the surface of the liquid. A selection of images showing the entire process in more detail is shown in Fig. 3. As longer radiographic series were actually captured, an .avi file showing the video of the extended version of this image sequence is available in the Supplementary materials of this paper in the online version at DOI: http://dx.doi.org/10.1016/j.mex.2016.10.001. From the test arrangement point of view, we must highlight again that the focussing guide has been in no way optimized for the high temporal resolution neutron imaging task. We can foresee that an optimized design of such a guide might lead to even more favourable results in the future. Likewise, it should be highlighted that the used scintillator screen was not optimized for the purpose of high temporal resolution neutron imaging either. Two aspects of the scintillator screens should be optimized in the future for this purpose: the neutron capture efficiency of the scintillator should be increased, while at the same time the light output decay time should be suppressed as much as possible. The experiments were performed at the BOA beamline, which, due to its spectrum, has a higher sensitivity for hydrogen than the ICON beamline. On the other hand, the ICON beamline exhibits more than two times higher flux, and therefore similar experiments are foreseen to deliver even superior results when performed at the ICON beamline or at existing facilities of inherently higher neutron flux. Likewise, similar experiments are foreseen for beamlines with already in-built neutron optics. Regarding the presented experiments, we trust that we visualized the process of an interaction of two liquids that are otherwise indiscernible by other probes with the very high temporal resolution of 0.01 s. We foresee that rather similar experiments might provide sound experimental backing for modelling of liquid interactions. The paper presents results of radiographic experiments. Naturally, this very high temporal resolution capability can be utilized for tomographic imaging with foreseen acquisition times of a few seconds, thus enabling 4D investigation of samples of limited size with the mentioned temporal resolution. This short paper presents the results of pilot tests using the combination of an sCMOS detector with a parabolic neutron focussing guide. We show that such a combination may provide neutron imaging of very high temporal resolution, albeit at the expense of the available field of view and a depreciation of the available spatial resolution. In the pilot model experiment, we visualized the process of an interaction of two otherwise indiscernible liquids with a temporal resolution of 0.01 s. We trust that the very high temporal resolution neutron imaging capability will find further users both from academia and industry.
| The recent developments in scientific complementary metal oxide semiconductor (sCMOS) detector technology allow for imaging of relevant processes with very high temporal resolution with practically negligible readout time. However, it is neutron intensity that limits high temporal resolution neutron imaging. In the test arrangement presented here, the neutron flux can be increased locally by about one order of magnitude, albeit with reduced spatial resolution due to the increased divergence of the neutron beam. The process of droplets of water (H2O) falling into a container filled with heavy water (D2O) and the subsequent process of the interaction and mixing of the two liquids were imaged with a temporal resolution of 0.01 s. The combination of a neutron focussing device and an sCMOS detector allows for very high temporal resolution neutron imaging to be achieved (albeit with reduced spatial resolution and field of view). In-situ neutron imaging titration device for liquid interaction experiments. Interaction of otherwise indiscernible liquids (H2O and D2O) visualized using neutron radiography with 0.01 s temporal resolution. |
Nematodes of the genus Trichinella are zoonotic parasites with a cosmopolitan distribution. The twelve taxa recognized so far in the genus are separated into two clades, one that encompasses species that encapsulate in host muscle tissues following muscle cell reprogramming, and a second that includes non-encapsulated species. Of the six encapsulated species, Trichinella spiralis, which probably originated in Eastern Asia, shows a cosmopolitan distribution in tropical and temperate regions due to its passive introduction into Europe, North and South America and New Zealand. The geographical range of the other five encapsulated species shows a north–south cline. Trichinella nativa occurs in arctic and subarctic areas of the Holarctic region, approximately down to the −4 °C January isotherm in the south; Trichinella britovi is found in the Palearctic region from the −6 °C January isotherm in the north down to North and Western Africa in the south; Trichinella murrelli is present in temperate areas of the Nearctic region; Trichinella nelsoni occurs in the Ethiopian region; and Trichinella patagoniensis in the Neotropic region. These zoonotic nematodes are transmitted from one host to another through the ingestion of striated muscle tissues infected with larvae; however, vertical transmission has also been experimentally demonstrated in some rodent species and ferrets, but not in foxes and pigs. The most important reservoir hosts of Trichinella nematodes are those with scavenging behaviour. An important adaptation of Trichinella spp. muscle larvae, which facilitates parasite transmission, is a physiological mechanism to survive in decaying carcasses; the greater the persistence of larval viability, the higher the probability of being ingested by a scavenging host. Despite the larva-induced angiogenic process that develops around the nurse cell after larval penetration of the muscle cell, larval metabolism is basically anaerobic, which favours its survival in decaying tissues. The distribution areas of T. nativa and its related Trichinella T6 genotype, and of T. britovi, overlap completely or partially with cold regions, and the muscle larval stage of these taxa has developed mechanisms to survive in frozen carrion for several months up to several years. Previous studies on freezing temperatures favouring the survival of T. britovi and T. nativa larvae in muscle tissues of naturally or experimentally infected host carcasses have shown that the optimal freezing temperature range for survival corresponds to temperatures between 0 °C and −20 °C. Furthermore, the molecular identification of Trichinella spp. larvae showed that when infected muscle tissues are frozen and thawed more than once, DNA degradation occurs, caused by thermal shock. Based on these data, Pozio hypothesized that the habitat under the snow, i.e. the subnivium, could represent the ideal haven for the survival of Trichinella larvae in decaying muscles of host carcasses, since it provides environmental stability. The aim of the present study was to investigate the survival and infectivity of T. britovi larvae in muscle tissues of naturally infected carnivore carcasses preserved beneath and above the snow. Two carcasses of foxes and a carcass of a raccoon dog were collected within the Latvian State programme for the Control and Eradication of Rabies. Animals were first tested for rabies and only rabies-negative animals were subsequently used to test for Trichinella sp.
infection at the Institute of Food Safety, Animal Health and Environment BIOR, Riga, Latvia. Animals were hunted between the end of November and the beginning of December 2017. Carcasses were eviscerated, and 25 g of muscles from the tongue and diaphragm pillars were collected and submitted to artificial digestion according to the protocol of Commission Regulation 1375/2015. Following digestion, larvae were washed and counted in triplicate to determine the number of larvae per g. Then each carcass was packaged in plastic bags, placed in a polystyrene box containing ice packs and forwarded to the Department of Veterinary Sciences, University of Turin, Grugliasco, Italy, by an international courier on December 11, 2017. The carcasses were delivered on December 12, 2017. Upon arrival, the temperature in the polystyrene boxes was ascertained to be 6 °C. The raccoon dog carcass was cut into two symmetrical parts with a longitudinal cut along the spine, and these were henceforth considered as two carcasses. Carcasses were stored at +4 °C and transported on December 13, 2017 in polystyrene boxes containing ice packs to the locality where they were to become the object of the experimental study. The study was carried out in the Alps at 1175 m above sea level. The scavenger-proof box in which the fox and raccoon dog carcasses were preserved was placed in a locality of the Oulx municipality, Turin province, Northern Italy. The box was positioned in a restricted open space exposed to the north and shaded by a building on the south side. The surrounding area constituted the south slope of a large inner alpine valley mainly covered by a Scots pine forest interspersed with juniper, ash and birch. A wooden frame measuring 90 × 80 × 50 cm with a 2 × 2 cm netted mesh was used to house the carcasses. It was placed on the ground and surrounded by snow. A fox carcass and a raccoon dog carcass were placed on one side of the bottom of the box. Then, half of the box was filled with snow almost to the upper border and, if necessary, snow was added to maintain the depth at about 45–50 cm. The second fox carcass and the second raccoon dog carcass were placed on the bottom of the other half of the box without any snow cover. A wooden partition prevented the snow from falling into the second half of the box. Temperature and humidity recording systems were | Parasite nematodes of the genus Trichinella are transmitted from one host to another through the ingestion of larvae present in striated muscles. Accordingly, these nematodes have developed an anaerobic metabolism favouring their survival in decaying tissues. In addition, muscle larvae of three taxa, namely Trichinella nativa, Trichinella britovi and Trichinella T6, can survive freezing for several months to several years depending on the taxon. |
of carcasses in the subnivium ranged from −2.5 °C to +3.3 °C, with a maximum delta of 5.8 °C and a relative humidity close to 100%. In contrast, the temperature variation of carcasses exposed above the snow was in the range of −16.9 °C to +14.8 °C, with a maximum delta of 31.7 °C and a relative humidity ranging from 23.9% to 99.9%. In the present study, the temperature variation in the subnivium was 5.5 times lower than that above the snow. Snow accumulation and density, both of which directly control the formation and persistence of the subnivium, are affected by ambient temperature, wind, snowfall, and radiation fluxes. Deep, low-density snow is most effective in maintaining the thermal stability of the subnivium. With warmer ambient temperatures, ablation increases, reducing depth and increasing snow density throughout the entire snowpack. Under colder conditions, the temperature gradient between the bottom layer of snow and the air temperature increases, which increases snow density at the surface of the snowpack. Despite increased density at the surface, air temperatures at or below 0 °C prevent the occurrence of surface snow melt and support the retention of the snowpack. At the end of the study, carcasses preserved in the subnivium appeared more degraded than carcasses stored above the snow. However, the putrefaction of muscles did not adversely affect the survival and infectivity of Trichinella larvae, as previously observed. This study was conducted at 1175 m asl on the southern slope of the Alps, an area where Trichinella infection was widespread in wild carnivores. However, during the last decades, a reduction of T. britovi prevalence was observed in the entire Alps region. At the same time, the snow depth and snow cover in the Alps showed a significant decrease for elevations below 1300 m asl. In Latvia, a relationship between T.
britovi infection in wild boar and snow cover was also documented. Even if latitude, land cover, and interannual variability influence subnivium phenology, optimal subnivium conditions depend on the relationship between air temperature, snow depth, and snow density. Above-freezing temperatures promote rapid subnivium establishment, while below-freezing temperatures reduce snowmelt and support longer subnivium maintenance periods. In the Alps, the number of days with snow cover of at least 30 cm decreased from 60 in the 1980s to fewer than 30 from the 1990s onwards. Colder air temperatures are required to promote subnivium maintenance. Warmer air temperatures increase ablation, which reduces depth, increases density, and reduces the overall insulative capacity of snow cover. Future climate change scenarios predict warmer winter temperatures, which are often accompanied by an increase in precipitation falling as rain rather than snow and an overall increase in air temperature variability. Disturbances to the subnivium in either extent, duration, or thermal stability can therefore disrupt the regimes that currently provide fitness benefits to a variety of organisms, including cold-adapted representatives of the genus Trichinella, resulting in phenological mismatches and enhanced mortality. In conclusion, the subnivium, with its environmental stability, represents a seasonal refuge for Trichinella larvae in host carrion, and even though carcasses preserved in the subnivium appeared more degraded than carcasses stored above the snow, the putrefaction of muscles did not adversely affect the survival and infectivity of the infecting stage of this parasite. The present study demonstrates the interaction between environmental conditions and the life cycle of Trichinella nematodes, which apparently do not show a free-living stage. | The longer the survival of muscle larvae in host carcasses, the higher the probability of being ingested by a scavenging host. The aim of the present work was to investigate the survival time of T. britovi larvae in naturally infected host carcasses preserved beneath or above the snow. Fox and raccoon dog carcasses naturally infected with T. britovi larvae were preserved beneath or above the snow in a cold mountainous area. Temperature and relative humidity were recorded. The reproductive capacity index (RCI) of larvae in carcasses preserved beneath the snow (the subnivium) ranged from 23 to 25 at day 0, to 12–18 after 112 days. In contrast, the RCI of larvae in carcasses preserved above the snow ranged from 22 to 27 at day 0, to 0.0 after 112 days. The difference between the RCIs of larvae beneath the snow and above the snow was statistically significant (P < 0.01). These data corroborate the hypothesis that the subnivium, with its environmental stability, favours the survival of Trichinella larvae in host muscles, increasing the probability of their transmission to other hosts. On the other hand, the environment above the snow, characterized by sudden temperature variations, causes strong environmental stress for larvae in host carrion, causing their death. |
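The reported 5.5-fold difference in temperature variation follows directly from the logged extremes quoted above. The short Python check below uses only the values given in the text; it is a verification of the arithmetic, not part of the study's analysis.

```python
# Quick arithmetic check of the reported temperature-variation figures,
# using the logger extremes quoted in the text.
subnivium = (-2.5, 3.3)      # deg C, beneath the snow
above_snow = (-16.9, 14.8)   # deg C, above the snow

def max_delta(t_range):
    low, high = t_range
    return high - low

d_sub = max_delta(subnivium)      # 5.8 deg C
d_above = max_delta(above_snow)   # 31.7 deg C
print(round(d_sub, 1), round(d_above, 1), round(d_above / d_sub, 1))  # 5.8 31.7 5.5
```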
and privacy enhancing technology to be implemented. Secure e-mail, secure messaging and secure phone calls should be the current basic demand of consumers of electronic products. In the future this demand should extend to secure communications with IoT devices that will invade all aspects of human life. Finally, the deployment of PETs, and encryption more specifically, should not prevent LEA from conducting targeted investigations pending the delivery of proper warrants by judicial authorities. LEA should have the skills and technical means for targeted interception of data at the end-point level before/after it is encrypted/decrypted, if necessary by conducting physical interventions on the devices of the data subject under investigation. If major security vulnerabilities are identified and exploited by LEA during targeted investigations, LEA should report them to the vendors/service providers concerned as soon as possible, without compromising the results of on-going investigations. | The 2013 Snowden revelations ignited a vehement debate on the legitimacy and breadth of intelligence operations that monitor the Internet and telecommunications worldwide. The ongoing invasion of the private sphere of individuals around the world by governments and companies is an issue that is handled inadequately using current technological and organizational measures. This article argues that in order to retain a vital and vibrant Internet, its basic infrastructure needs to be strengthened considerably. We propose a number of technical and political options that would contribute to improving the security of the Internet, focusing on the debates around end-to-end encryption and anonymization, as well as on policies addressing software and hardware vulnerabilities and weaknesses of the Internet architecture. |
maize yields by 25%, indicating legumes may reduce fertilizer requirements by 50%. Besides adding organic matter to soils, tree roots can effectively remove portions of inorganic phosphorus from the soil solution through uptake, causing increased P adsorption capacity of soils and increased P retention. Tree root exudates and decaying root cells are used as an energy source by soil micro-organisms. This food web is maintained in the soil outside crop growing seasons, supporting soil biota that provide crops with nutrients at the beginning of the next cultivation cycle. Traditional, resource-conserving approaches such as agroforestry can positively influence the drivers of intensification, such as nutrient cycling and water use, and help close the yield gap in Africa. The use of legumes in rotations or intercrops can restore soil nutrients by fixing nitrogen, improving soil organic matter and reducing reliance on fertilizer use. Practices involving species mixtures and intercropping can create diversified production systems yielding both staples and marketable tree products to improve livelihoods. Further research on the integration of ecological knowledge with an understanding of socio-economic constraints is nonetheless required in order to fulfil the potential of diversification in improving productivity, enhancing ecosystem functions and providing adaptability in different African farm settings. | Agricultural commodity production in a changing climate scenario is undergoing sustainability challenges due to degradation of soil fertility, water and biodiversity resources. In Africa, yields for important cereals (e.g., maize) have stagnated at 1 t ha⁻¹ due to land degradation, low fertilizer use and water stress. Resource-conserving options such as agroforestry promote integrated management systems that relate livelihoods and ecosystem service functions to agricultural production. Low-input practices including improved fallows using legumes in rotations or intercrops can restore soil nutrients, improve soil carbon and reduce reliance on fertilizer use by 50%. We review how agroforestry can sustain agricultural intensification in Africa by regulating ecosystem functions such as nutrient recycling, water use, species diversity and agrochemical pollution. |
Developmental robustness is achieved through buffering gene expression patterns against stochastic, genetic, and environmental perturbations. Although the underlying molecular mechanisms are still being dissected, transcriptional robustness can be modulated at several levels, including DNA accessibility, RNA polymerase II pausing, and promoter organization. It can also arise from higher levels of network organization, including functional redundancy, defined as two parts of a system that can perform the same or similar tasks and are therefore not individually essential. A potential contributor to functional redundancy is regulatory elements with overlapping functions. A number of studies in vertebrates, invertebrates, and plants have identified enhancers that appear to act redundantly, defined as two enhancers that drive similar patterns of expression and in which deletion of one does not cause any obvious aberrant phenotype. There are a number of well-characterized examples of such shadow enhancers acting during embryonic development. In the pax3 locus, for example, two enhancers direct expression in neural crest cells. Although the proximal 5′ element, when placed upstream of pax3 cDNA, is sufficient to rescue neural crest cell development in mice lacking endogenous pax3, this enhancer is not required for development or viability. Similarly, in the TCRγ locus, deletion of either the HsA or 3′E enhancer has little effect on TCRγ transcription, whereas deletion of both elements causes a severe reduction in transcription and defects in γδ thymocyte development. Interestingly, although both enhancers act redundantly in γδ thymocytes, in a different cell context the HsA enhancer acts non-redundantly with the 3′E element to regulate gene expression. Although examples of redundant enhancers have been known for over 20 years, recent studies in Drosophila have reignited the debate over the prevalence and functional role of these elements in the regulation of gene expression. When examining the binding patterns of three transcription factors, Hong et al.
observed that, in addition to a gene's well-characterized enhancer, many early patterning genes in Drosophila have a second element with very similar TF occupancy. These shadow enhancers frequently regulate highly similar, overlapping patterns of expression in transgenic reporter assays, suggesting that they act redundantly. For example, each of the five gap gene loci in the Drosophila segmentation pathway contains an additional shadow enhancer. Shadow enhancers can provide robustness to genetic variation within a population, allowing development to proceed unperturbed, as shown at a number of well-characterized loci. However, whether this is their primary function remains unclear, as they appear to have multiple functions in the regulation of gene expression. For example, in some cases, enhancers that appear to act redundantly due to their overlapping activity are actually both essential to define the precise spatial (in the case of snail) or temporal (in the case of brinker) pattern of that gene's expression. Alternatively, they may act redundantly, controlling the levels of a gene's expression at one stage of the lifespan, but act more synergistically during another, as recently observed at the mouse Pomc locus. Similarly, enhancers that appear to act redundantly under normal environmental conditions can be essential under more stressful conditions, as demonstrated in the shavenbaby and snail loci. Genes with redundant enhancers also tend to initiate their expression more synchronously during very rapid cell divisions, illustrating another context in which these elements help ensure robust expression during development. These examples question the extent to which enhancers with redundant activity in one context are completely redundant across the entire spectrum of the enhancer's activity. The examples above demonstrate that individual enhancers can act to canalize their target gene's expression, buffering them against environmental and genetic perturbations. However, for shadow enhancers to act as major contributors to developmental robustness, they should be much more prevalent than the handful of examples known to date. Just how widespread redundant enhancers are, and to what extent overlapping enhancers are truly redundant, remains unclear. To directly assess this, we performed the first genome-wide assessment of the prevalence and global properties of shadow enhancers using the developing Drosophila mesoderm as a model system. Using two stringent approaches, we identified 1,055 shadow enhancers associated with 319 unique genes. For 23 enhancers at five loci, we examined their in vivo activity throughout all stages of embryonic development. This revealed a regulatory landscape that is considerably more complex than the simple "one shadow to one main enhancer" relationship. Rather, the majority of loci contain three, four, or even as many as five shadow enhancers. When one shadow enhancer was deleted in each of these five loci, there was little obvious effect on embryonic development, suggesting that they can buffer the effects of genetic variation and are thus redundant. However, contrary to expectations for enhancers with absolute redundancy, shadow enhancers are more conserved than non-redundant enhancers, show a higher proportion of functional sites, and show neither evidence of relaxed selection in natural populations nor enrichment for lineage-specific adaptive events, observations that are most consistent with pervasive stabilizing selection. These conservation patterns may be a result of
selection for robustness per se. Alternatively, they may equally be a side product of the modular nature of developmental programs: when multiple enhancers are required to regulate complex patterns of expression, a degree of robustness may be an inevitable, very useful, byproduct. The term redundancy, where two parts have the same function, is generally perceived as absolute redundancy. However, the examples presented above show clear cases in which enhancers act 100% redundantly in one context and yet are essential in another, a property we refer to as partial redundancy. Enhancers with absolute redundancy are often generated through duplication | Redundant enhancers (or "shadow" enhancers), for example, can confer precision and robustness to gene expression, at least at individual, well-studied loci. However, the extent to which enhancer redundancy exists and can thereby have a major impact on developmental robustness remains unknown. The activity of 23 elements, associated with five genes, was examined in transgenic embryos, while natural structural variation among individuals was used to assess their ability to buffer against genetic variation. Third, although shadow enhancers can buffer variation, patterns of segregating variation suggest that they play a more complex role in development than generally considered. |
events and then either functionally diverge or degrade, being rapidly lost within a population. Partially redundant elements, i.e., enhancers with overlapping spatial activity, in contrast, should be maintained by selection and therefore preserved over longer evolutionary timescales, and thus should be more common. It is now possible to assess this reasoning, given the recent availability of a very large collection of 7,705 enhancers covering ∼15% of the non-coding D. melanogaster genome, whose detailed in vivo activity was annotated with 227 tissue terms throughout all stages of Drosophila embryogenesis in stable transgenic embryos. We therefore first determined whether enhancers with overlapping spatial activity are more prevalent within a genome compared to enhancers with identical activity. Only enhancers with a single DNaseI-hypersensitive site were included in the analysis, to exclude ambiguity caused by cases where multiple enhancers may be contained within the same 2 kb region tested in transgenic embryos. Overall, enhancers located within 50 kb of each other are much more likely to exhibit similar, overlapping spatial activity. However, they are not more likely to exhibit identical activity than expected by chance. As expected, these results indicate that even when considering a very broad and diverse set of spatiotemporal patterns, absolute redundancy of enhancer activity for spatial expression is rare, though some level of redundancy is present and likely to be functionally important. Interestingly, this is not the case at the gene level. Genes within a 50 kb window of each other are both more likely to have overlapping spatial expression and identical expression than is expected by chance. The analyses above suggest that enhancers with partially redundant activity are much more frequent than enhancers with absolutely redundant activity. However, the frequency of these elements throughout the entire genome remains unclear; previous work had identified 16 genes with shadow enhancers. To examine the prevalence of shadow enhancers more globally, we used two stringent approaches, focusing on the mesoderm and its derivatives. The first approach is based on Perry et al.
, who defined prospective shadow enhancers for eight gap genes as pairs of genomic regions cobound by the same TFs within 100 kb of the genes' promoters. Here, we extended this approach and more formally identified highly correlated TF occupancy across 15 conditions using chromatin immunoprecipitation (ChIP) data for five mesodermal TFs across multiple developmental stages. Importantly, 97% of these ChIP-defined cis-regulatory modules function as developmental enhancers when tested in vivo using transgenic reporter assays. Spearman rank-correlation between TF ChIP intensities was computed across all 8,008 ChIP-defined enhancers within a 50 kb distance of each other and in the vicinity of a gene with mesoderm and/or muscle expression, using in situ hybridization data. This identified a stringent set of shadow enhancers with highly correlated TF occupancy to at least one other enhancer associated with the same target gene. An example of one such pair is shown in Figure 2A. Although enhancers bound by the same combination of TFs often give rise to similar patterns of expression, a number of studies indicate more complex relationships. Enhancers with diverse patterns of TF occupancy and regulatory logic can, for example, also give rise to highly similar spatial activity. As the functional output of an enhancer is the important property for development, this is the parameter most likely under selection. This observation led us to our second approach, where we defined shadow enhancers based on their overlapping spatial activity. As there are no genome-wide data for enhancer spatiotemporal activity, we made use of our previously validated method, which predicted the activity of 8,008 mesodermal enhancers from TF occupancy data using a machine-learning approach trained on enhancers with characterized activity. Each of the 8,008 ChIP-defined enhancers thereby has a probability score of being active in one of four exclusive tissue classes; importantly, 83% of these tissue predictions hold true, i.e., the enhancers drove expression in the predicted tissue when tested in vivo in transgenic embryos. Shadow enhancers were defined as pairs of elements having a high-confidence prediction within the same tissue, being within 50 kb of each other, and associated with a common gene with overlapping expression from in situ hybridization. This resulted in a stringent set of 866 shadow enhancers associated with 298 genes with mesoderm and/or muscle expression. The combination of these two approaches identified 1,055 shadow enhancers predicted to have similar activity to at least one other enhancer during mesoderm and/or muscle development. Approximately 40% of genes are regulated by a single pair of shadow enhancers, in keeping with the vast majority of current examples of redundant enhancers in both Drosophila and mice, with the notable exception of vnd. However, the majority of genes appear to have much more complex regulation, with ∼60% of loci with shadow enhancers containing three or four, and even a few examples of five, six, seven, or eight, shadow enhancers with similar activity, suggesting that the current view of potential redundancy is overly simplistic. By definition, redundant or partially redundant enhancers can compensate for mutations that render one of the enhancers dysfunctional, as shown in the svb and dac loci in D.
melanogaster or the Hoxd loci in mouse. If the shadow enhancers are acting redundantly, the transcriptional program driving embryogenesis should be able to proceed if one of the two enhancers is deleted. To examine this, we used natural sequence variation within a wild population of Drosophila to determine whether enhancers within a predicted redundant pair are affected by deleterious mutations. As it is often difficult to predict the effect of an individual SNP on TF occupancy, we focused here on deletions | First, it is much more pervasive than previously anticipated, with 64% of loci examined having shadow enhancers. Second, over 70% of loci do not follow the simple situation of having only two shadow enhancers: often there are three (rols), four (CadN and ade5), or five (Traf1), at least one of which can be deleted with no obvious phenotypic effects. |
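The ChIP-correlation screen described above lends itself to a compact illustration. The sketch below is a hypothetical Python rendering of that pairing logic, not the authors' pipeline: enhancers near the same gene and within 50 kb of each other are flagged as candidate shadow pairs when their TF ChIP intensity profiles across conditions are highly rank-correlated. The data structure, the example values, and the 0.8 correlation cutoff are assumptions made for the example.

```python
# Illustrative sketch of the shadow-enhancer pairing logic (not the authors'
# pipeline): flag candidate pairs as enhancers near the same gene, within
# 50 kb of each other, whose TF ChIP intensity profiles across conditions
# (e.g. 5 TFs x 3 stages = 15 values) are highly rank-correlated.
from itertools import combinations
from scipy.stats import spearmanr

def candidate_shadow_pairs(enhancers, rho_min=0.8, max_dist=50_000):
    """enhancers: list of dicts with 'id', 'gene', 'pos' (bp) and 'chip'
    (vector of ChIP intensities over the measured conditions)."""
    pairs = []
    for a, b in combinations(enhancers, 2):
        if a["gene"] != b["gene"] or abs(a["pos"] - b["pos"]) > max_dist:
            continue
        rho, _ = spearmanr(a["chip"], b["chip"])
        if rho >= rho_min:
            pairs.append((a["id"], b["id"], round(rho, 2)))
    return pairs

enhancers = [
    {"id": "E1", "gene": "Traf1", "pos": 100_000,   # hypothetical example data
     "chip": [5, 9, 2, 8, 1, 7, 3, 9, 4, 6, 2, 8, 5, 7, 1]},
    {"id": "E2", "gene": "Traf1", "pos": 130_000,
     "chip": [4, 8, 1, 9, 2, 6, 3, 8, 5, 7, 1, 9, 4, 6, 2]},
]
print(candidate_shadow_pairs(enhancers))  # one highly correlated candidate pair
```

A rank correlation rather than a linear one is a natural choice here, since ChIP intensities are compared across heterogeneous antibodies and stages where only the relative ordering of signals is meaningful.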
bound by a small repertoire of TFs: extrapolating to all ∼700 or so predicted Drosophila TFs suggests that shadow enhancers are prevalent throughout the genome and therefore could have a substantial impact on the robustness of gene expression during embryonic development. As we discuss below, however, this largely hidden layer is not without primary function, but rather may play a fundamental role in ensuring the precision, timing, and robustness of specific developmental programs, as has recently been shown at individual gene loci. Just as promoter variants that lead to transcriptional noise are suppressed within natural populations, as seen in yeast, shadow enhancers may play a crucial role in the suppression of transcriptional noise during embryonic development. The partially overlapping activity of redundant enhancers appears to be an emerging theme, but one with an evolutionary paradox. In agreement with the strict definition of redundancy, deletion of a redundant enhancer does not cause major phenotypic alteration, at least in a given environmental condition, as one or more redundant elements can compensate for the loss. What, then, prevents the deletion of shadow enhancers within a population? The answer may lie in the context-specific nature of their redundancy, which we are referring to here as partial redundancy. As these elements drive overlapping patterns of expression, there are at least some tissues, stages, or environmental conditions in which the elements have distinct functional roles. The overlap in activity can be restricted to a small time window or a small number of cells, while other shadow enhancer "pairs" may have extensive overlap in time or space. Thus, although an enhancer may be redundant with another element in one tissue or developmental stage, its activity may be non-redundant in another cell type and therefore be essential for embryonic development. Similarly, enhancers that appear redundant in "normal" environmental conditions could act non-redundantly when the environmental conditions become more extreme, as observed in the svb locus. It is this partially redundant property that most likely holds the key to how these elements are maintained over long evolutionary periods. A previous study hypothesized that there may be different evolutionary pressures on two redundant enhancers: the primary enhancer being more constrained than the redundant shadow enhancer, allowing the latter to accumulate mutations without inducing a phenotype and thus to evolve faster. Our analyses of sequence conservation and the frequency of segregating mutations affecting these enhancers do not support this, at least in the context of these mesoderm/muscle enhancers; the evolutionary pressures affecting shadow enhancers are similar, and overall they show a stronger tendency toward conservation than non-redundant enhancers driving similar expression, with no evidence for an increased frequency of relaxed selection or adaptive evolution, although we appreciate that these approaches are most likely underpowered to detect recent adaptive changes. Taken together, our results suggest that shadow enhancers are being maintained for a purpose. One property of many shadow enhancers, in addition to their similar overlapping activity, is that the majority also have additional non-redundant activity, which may be under selective pressure, as discussed above. Alternatively, "redundant" enhancers driving similar spatiotemporal activity could act together to guarantee that a gene reaches a certain level of expression, or
could have essential roles in ensuring correct patterning precision or in reducing stochastic effects on gene expression, thereby playing an essential role in reducing transcriptional noise during development. In these cases, shadow enhancers ensure robustness of the trait when environmental variations occur, but do not confer genetic robustness to all possible mutations since, for example, deletion of a partially redundant enhancer can drastically influence the viability of an organism. We therefore argue that shadow enhancers are pervasive throughout the genome and provide robustness to gene expression in the context of fluctuating genetic and environmental perturbations. The redundant function of these enhancers, e.g., similar overlapping expression, may provide opportunities for evolutionary innovation; however, the non-redundant part of the enhancer's activity, e.g., in space, time, or environmental conditions, indicates that they also have independent functional roles, which may help to fix these elements within a population. In summary, the data presented here indicate that almost any developmental gene can have multiple shadow enhancers, each with similar overlapping windows of activity. The combined action of partially redundant enhancers may thereby represent a significant strategy through which an organism reaches robustness during embryonic development. The extensive nature of the overlap of these elements' activity will generate distributed robustness within large developmental gene regulatory networks, a role that has yet to be explored. Their prevalence may give insights into how gene regulatory networks are organized, with the modular nature of enhancers required to produce robust and precise patterns perhaps providing redundancy as a useful byproduct. E.C. and E.E.M.F. designed the study and analyzed the results. E.C., E.H.G., and L.C. generated all transgenic lines and performed in situ hybridization and imaging. P.K. correlated TF occupancy. D.G. and P.G. performed conservation, GO enrichment, and expression similarity analysis. T.Z. and J.O.K. did SV analysis. E.C., D.G., and E.E.M.F. prepared and edited the manuscript. | Embryogenesis is remarkably robust to segregating mutations and environmental variation; under a range of conditions, embryos of a given species develop into stereotypically patterned organisms. Such robustness is thought to be conferred, in part, through elements within regulatory networks that perform similar, redundant tasks. Their spatial redundancy is often partial in nature, while the non-overlapping function may explain why these enhancers are maintained within a population. |
arthroscopic shaver system, so it was not completely resected. Fortunately, there was no recurrence at the final follow-up 20 months after surgery. Nevertheless, the limitation of this case is that the long-term results have not been evaluated. Therefore, it is necessary to follow up this case in the future. We reported a rare case of intra-articular nodular fasciitis in the elbow joint. It was difficult to diagnose preoperatively because the preoperative clinical findings were nonspecific. Although histological examination is necessary to establish the diagnosis, we recommend that intra-articular nodular fasciitis should be included in the differential diagnosis of intra-articular mass lesions. The authors have no conflict of interest. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. No experimentation was performed for this case report; it simply describes our clinical practice. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Osamu Nakamura: performed the surgery, designed this study, and wrote the paper. Yoshio Kaji: assisted in writing the manuscript. Yoshiki Yamagami: literature review. Tetsuji Yamamoto: participated in the critical revision of the article. The research registry UIN is 4671. All authors have read and approved the manuscript and accept full responsibility for the work. Not commissioned, externally peer-reviewed. | Introduction: Nodular fasciitis is a benign myofibroblastic proliferation arising from the fascia. Until now, there have been only two reported cases of intra-articular nodular fasciitis in the elbow joint. Presentation of case: We report a case of a 19-year-old woman with a 3-month history of pain in the left elbow. Contrast-enhanced T1-weighted magnetic resonance imaging (MRI) showed an intra-articular lobulated mass on the anterior portion of the elbow joint, with accompanying effusion. The patient subsequently underwent arthroscopic excision of the mass. Histologically, intra-articular nodular fasciitis was the final diagnosis. At the most recent follow-up, 20 months after surgery, the patient had no subjective symptoms, including pain. The final MRI findings showed no tumor recurrence. Discussion: As nodular fasciitis is not generally known to arise within a joint, the occurrence at such anatomical locations may lead to a misdiagnosis. Intra-articular nodular fasciitis is rarely encountered, and therefore is not usually considered during the clinical investigation of joint symptoms. Conclusion: Preoperative diagnosis was difficult in this case because of nonspecific preoperative clinical findings. Although histological examination is necessary to establish a diagnosis, we recommend that intra-articular nodular fasciitis should be included in the differential diagnosis of intra-articular mass lesions. |
Intra-tumor genetic heterogeneity (ITGH) has been documented in several adult tumors. Such tumors typically evolve over long periods before diagnosis, with most demonstrating branched evolutionary trajectories. However, the prevalence and relevance of ITGH are poorly understood in pediatric solid tumors: since they carry lower burdens of mutational changes and have evolved for shorter periods of time before diagnosis, they may be expected to show less complex evolutionary histories. Although there are relatively few sequence mutations in pediatric malignancies, DNA copy number aberrations (CNA) and rearrangements are often characteristic features of these tumors. Some common CNA have recognized prognostic significance in pediatric tumors. For example, in neuroblastoma, MYCN amplification or subchromosomal genomic gains and losses are used to stratify therapy. In Wilms tumor (WT), gain of 1q is increasingly being proposed as a common prognostic biomarker to select patients for more intensive treatment. However, these studies have relied on a single tumor sample from each case. Indeed, there has been limited investigation of ITGH in pediatric solid tumors. A recent multisampling study reported genetic homogeneity in multi-sampled embryonal brain tumors. However, a study of four pediatric small round cell tumors, with two samples from each, reported heterogeneous CNA in three out of four tumors. In WT, a large study showed that combined loss of heterozygosity (LOH) of chromosomes 1p and 16q, while rare, was not only associated with poorer outcome, but also showed concordance in the vast majority of the 10% of tumors from which two separate samples were assessed. In contrast, heterogeneous WTX deletion has been reported in two multi-sampled WTs, and heterogeneous activation of MYCN and inactivation of TP53 have been reported in a case of bilateral WT. Such variable heterogeneity complicates clinical decision making because of a poor understanding of the evolution of pediatric tumors. It also means that most previous studies showing prognostic significance for specific CNA did not take into account potentially significant ITGH. Therefore, here we assess the extent and significance of ITGH in a prospective study of unselected multi-sampled Wilms tumors. We obtained multiple samples from WT nephrectomy/nephron-sparing surgery specimens at Great Ormond Street Hospital between May 2011 and June 2013. All patients were enrolled on the SIOP WT 2001 trial or the current IMPORT study, or their parents had consented for additional tissue to be used in research as part of the UK Children's Cancer and Leukaemia Group tissue bank. The research reported here was approved by a national research ethics committee. Patients received preoperative chemotherapy as per the SIOP WT 2001 trial protocol or according to national clinical guidelines based on this trial. Tumors were classified as previously described. A histological section from each tissue sample was reviewed to determine viable tumor content, and only samples with more than 50% viable tumor were used. DNA was extracted using standard techniques from each tumor sample, from adjacent non-tumorous kidney where it was available in 19 cases, and from peripheral blood lymphocytes in 3 cases. In two cases, different regions within the same tumor were identified prospectively as distinct nodules in the same overall tumor mass on T1- and T2-weighted MR imaging, and matched on comparison of pre- and post-chemotherapy images. The apparent diffusion coefficient was calculated by one observer as previously
described. Illumina® HumanCytoSNP-12 v2.1 microarrays were hybridized with 250 ng DNA per sample according to the manufacturer's instructions. Methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) for 11p15 was carried out as previously described, using the Salsa MS-MLPA BWS/RSS ME030-C3 probemix, and data were visualized in Coffalyser.NET. Log R ratio (LRR) and B-allele frequency (BAF) were calculated using the Illumina® GenomeStudio software for each array using default settings. LRR genomic waves were detected in normal tissue samples and corrected in all arrays. The LRRs from each array were segmented and copy number states were called using the 'CGHcall' R package in Bioconductor. For each case, the boundaries between adjacent regions were compared, smoothed and summarized between samples using the 'CGHregions' R package in Bioconductor. A region was removed if it contained fewer than 100 probes or its probe density was an outlier. The mean tumor-specific mirrored BAF was calculated for each aberrant region, and copy number aberrations were rejected if they did not show the expected allelic imbalance. Aberrant regions detectable only in the BAF were incorporated into our analysis using a custom pipeline. Allele-specific copy number was interpreted by the phylogenetic algorithm MEDICC to infer the clonal evolution of samples in each case. Normal tissue samples were used to root phylogenetic trees. Annotated code of our entire analysis pipeline is available as a GitHub repository at: https://github.com/luslab/multiregion-cnv-phylogenetics. The study sponsors did not participate in study design, in the collection, analysis, and interpretation of data, in the writing of the report, or in the decision to submit the paper for publication. We studied 70 distinct tumor samples from 24 tumors in 20 patients, with matched DNA from non-tumorous kidney and/or peripheral blood leukocytes in 19 cases. Five patients had bilateral WT, and we obtained samples from both tumors in four of them; in Case 9, the contralateral tumor had been removed prior to the start of our study. Patient characteristics and samples are summarized in Table 1. We applied a custom-built pipeline to reproducibly determine genome-wide allele-specific CNA and LOH events using high-resolution SNP arrays hybridized with genomic DNA from each sample, and to automatically compare these events across samples in a tumor. Fig. 1 shows a graphical representation of all CNA and copy number neutral LOH events across the 70 tumor samples. We detected most known recurrent WT CNA/LOH, including those associated with poor outcome. Surprisingly, 1q + was heterogeneous in four of seven multi-sampled tumors with this change. In general, we found remarkable diversity in the extent of intra-tumor | A number of copy number aberrations (CNA) are proposed as prognostic biomarkers to stratify patients, for example 1q + in Wilms tumor (WT); current clinical trials use only one sample per tumor to profile this genetic biomarker. We multisampled 20 WT cases and assessed genome-wide allele-specific CNA and loss of heterozygosity, and inferred tumor evolution, using Illumina CytoSNP12v2.1 arrays, a custom analysis pipeline, and the MEDICC algorithm. |
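The allelic-imbalance filter described above (mean tumor-specific mirrored BAF per aberrant region) is straightforward to illustrate. The authors' pipeline is in R/Bioconductor and is available at the linked repository; the snippet below is an independent, minimal Python sketch of the same idea, with the heterozygous-SNP window and the 0.58 imbalance cutoff chosen as illustrative assumptions rather than parameters from the paper.

```python
# Illustrative Python sketch (the authors' pipeline is R/Bioconductor; see the
# linked GitHub repository) of a mirrored-BAF sanity check: a called copy
# number aberration should show allelic imbalance at heterozygous SNPs.
# The 0.25-0.75 heterozygous window and 0.58 cutoff are assumed values.
import numpy as np

def mean_mirrored_baf(baf, het_lo=0.25, het_hi=0.75):
    """Mean mirrored BAF (|BAF - 0.5| + 0.5) over putatively heterozygous SNPs;
    ~0.5 means allelically balanced, clearly above 0.5 means imbalance."""
    het = baf[(baf > het_lo) & (baf < het_hi)]
    return float(np.mean(np.abs(het - 0.5) + 0.5)) if het.size else np.nan

def keep_cna(region_baf, imbalance_cutoff=0.58):
    """Reject a copy-number call if its region shows no allelic imbalance."""
    return bool(mean_mirrored_baf(region_baf) >= imbalance_cutoff)

rng = np.random.default_rng(0)
balanced = rng.normal(0.5, 0.03, 500)                     # diploid het SNPs
gained = np.concatenate([rng.normal(0.33, 0.03, 250),     # e.g. a 2+1 gain
                         rng.normal(0.67, 0.03, 250)])    # splits BAF to 1/3, 2/3
print(keep_cna(balanced), keep_cna(gained))               # False True
```

The mirroring step folds the two heterozygous BAF bands onto one side of 0.5, so a single mean summarizes the degree of imbalance regardless of which parental allele was gained or lost.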
across multiple samples for each tumor, and infer evolutionary trajectories in order to provide a basis for understanding how ITGH arises. We found a remarkable range of evolutionary scenarios and variable ITGH in WT, including for 1q +. Indeed, our data indicated that single sampling misses a significant proportion of cases with 1q +, and we estimated that to detect more than 95% of cases with 1q +, one would need to obtain at least three tumor samples per case. We also found that 1q + does not show a preference in evolutionary timing (it may occur as an early or late event), which suggests that its oncogenic effect is independent of other genetic changes. Therefore, we suspect that 1q + may have a similar effect on WT outcome regardless of whether it is homogeneous or heterogeneous in the primary tumor, and that current studies based on single tumor sampling may have underestimated its prognostic significance. Our findings clearly imply that future clinical trials in WT must take this heterogeneity into account and multi-sample each tumor, or attempt to detect this change in circulating tumor DNA (ctDNA) at a level that can interrogate subclonality. In contrast, we find that somatic 11p15 copy number neutral LOH (CNNLOH), another common change in WT, is consistently an early event in WT tumorigenesis. Our finding therefore builds on previous observations of somatic 11p CNNLOH in WT precursor lesions. 11p15 CNNLOH is associated with several other pediatric small round cell tumors, and recently it was found as a recurrent lesion in the vast majority of pediatric adrenocortical tumors, also occurring as an early event preceding most point mutations. These findings suggest that 11p15 CNNLOH may represent a common mechanism of tumorigenesis in a significant proportion of pediatric solid tumors, and its occurrence as an early event makes it a promising candidate for early detection of pediatric cancer by non-invasive screening for ctDNA in blood. In those tumors without 11p15 CNNLOH, we identified a subset of five cases with isolated hypermethylation of the H19 DMR, and this abnormality was also present in adjacent histologically normal kidney. In one of these five cases, there was hemihypertrophy, whereas in the other four cases there were no features to suggest Beckwith-Wiedemann syndrome. This finding is in keeping with previous reports of mosaic hypermethylation of the H19 DMR in a significant proportion of normal cells in cases of WT with this abnormality, even in the absence of other features of Beckwith-Wiedemann syndrome. Indeed, the proportion and distribution of non-tumor cells with this WT-predisposing epimutation may at least in part underlie the expression and variability of the features of Beckwith-Wiedemann syndrome. We have uncovered evidence of independent origins of two synchronous WT not only in bilateral cases but also within the same kidney containing an intra-tumoral nodule with divergent histology. While the presence of multicentric tumors in the same kidney is a recognized feature in 5%–10% of WT, in Case 13 the nodule with divergent histology was contiguous with the main tumor mass. Multicentric WT may thus be under-recognized and therefore not treated appropriately. Furthermore, under current diagnostic criteria, the relative proportions of blastemal, epithelial and stromal elements are used to stratify WT into low/intermediate/high-risk categories, with an underlying assumption that such structures are all derived from the same tumor. However, this practice needs to be refined to take into account multiple tumors, of independent
origins, within the same overall mass, as well as multicentric and bilateral WT. Our findings on rarer biomarkers are more difficult to interpret in the absence of a larger multi-sampled tumor cohort. Nevertheless, our findings on 16q − highlight the importance of interpreting ITGH in the context of tumor evolution: in our series, 16q − is apparently heterogeneous only in Case 13, but it is erroneous to interpret this as evidence of 16q − ITGH, since it is present in the one sample from the smaller nodule that we showed arose independently of the remaining tumor mass. In the case of another biomarker, MYCN gain, we were able to relate subclonal acquisition of this change to a significantly better response to chemotherapy. MYCN gain may be expected to be associated with more rapid cell proliferation and therefore greater sensitivity to cytotoxic chemotherapy, and this may explain our finding. More generally, we have shown that it is feasible to integrate phylogenetic tumor analysis with monitoring of treatment response by imaging, provided that the imaging analysis is used as a guide to tumor sampling, in addition to current standard histological sampling. Taken together, our findings in WT show unpredictable and clinically significant genetic heterogeneity that requires tumor multisampling for its detection, and assessment of tumor evolutionary trajectories for its interpretation. The custom analysis pipeline that we developed for this project may be easily applied to similar data from other multi-sampled tumors, and we are also extending it to integrate single nucleotide variants and small indels, which are typically detected in sequencing studies. Our findings have major implications for planning biomarker sampling strategies in future clinical trials for WT, and possibly other pediatric solid tumors. GDC, JRA, BM, NJS, KPJ, RDW and WM designed the study. JRA obtained and organized clinical, radiological imaging and macrophotographic data. TC extracted DNA and carried out molecular analyses. ØEO performed radiological imaging analyses. SDP, NJS and WM reviewed histology and WM took photomicrographs. GDC, JRA, BM, CCB, MM, MEW, RDW and WM analyzed data. GDC, BM, NML and WM developed the bioinformatics analysis pipeline. NJS, KPJ, NML, RDW and WM supervised the project. GDC prepared the figures. GDC, JRA, BM, NJS, KPJ, NML, RDW and WM wrote the paper. | We found remarkable diversity of ITGH and evolutionary trajectories in WT. 1q + is heterogeneous in the majority of tumors with this change, with variable evolutionary timing. We estimate that at least three samples per tumor are needed to detect > 95% of cases with 1q +. In contrast, somatic 11p15 LOH is uniformly an early event in WT development. We find evidence of two separate tumor origins in unilateral disease with divergent histology, and in bilateral WT. We also show subclonal changes related to differential response to chemotherapy. Rational trial design to include biomarkers in risk stratification requires tumor multisampling and reliable delineation of ITGH and tumor evolution. |
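The "at least three samples" estimate can be rationalized with a simple sampling model. The sketch below is an assumed illustration, not the authors' calculation: if a heterogeneous CNA such as 1q + is detectable in a fraction f of independently drawn samples from a positive tumor, then n samples detect it with probability 1 − (1 − f)^n; the value f = 0.65 is a hypothetical choice under which three samples cross the 95% threshold.

```python
# Assumed detection model (not the authors' calculation): probability that
# n independent samples include at least one positive for a heterogeneous CNA
# present in a fraction f of samples from a positive tumor.
def detection_probability(f: float, n: int) -> float:
    return 1.0 - (1.0 - f) ** n

for n in (1, 2, 3, 4):
    print(n, round(detection_probability(0.65, n), 3))
# 1 0.65 | 2 0.878 | 3 0.957 | 4 0.985
```

In practice the samples from one tumor are spatially correlated rather than independent, so the real study's estimate would rest on the observed per-sample concordance; the model above only conveys why detection probability saturates quickly with additional samples.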
Here, we examined the viability of Toxoplasma gondii treated with (+)-usnic acid in cells, using trypan blue and Giemsa staining, as well as the survival rate of infected mice and the ultrastructural changes of Toxoplasma in vivo. The RH strain of Toxoplasma gondii was maintained by intraperitoneal inoculation of mice in our laboratory and used in the experiments. Serial dilutions of (+)-usnic acid were prepared with normal saline in 0.1% dimethyl sulfoxide. Acetylespiramycin was used as the control drug. Toxoplasma tachyzoites were aliquoted into the treatment groups. After treatment for 1 h, 2 h, and 4 h at 25 °C, the tachyzoite suspension from each group was smeared and stained with Giemsa. The numbers of altered tachyzoites were counted under the light microscope to calculate the ratio of altered tachyzoites. In parallel, the tachyzoite suspension from each group was stained with 0.4% trypan blue, and the numbers of stained tachyzoites were counted to calculate the ratio of stained tachyzoites. Rat cardiofibroblasts were prepared during primary cardiomyocyte monolayer culture according to a previously published method. Briefly, newborn SD rats were sacrificed and the hearts were minced. The tissue was subjected to 3 cycles of proteolytic dissociation by magnetic stirring with 0.125% trypsin solution. The cell pellet was re-suspended in supplemented DMEM. A selective adhesion procedure was performed after incubation for 1.5 h at 37 °C in a humidified atmosphere. The rat cardiofibroblasts were then washed and suspended in DMEM. The cultured rat cardiofibroblasts were infected with tachyzoites to investigate tachyzoite invasion. Briefly, monolayers of rat cardiofibroblasts were prepared in 24-well culture plates containing cover glasses for 24 h.
All cells were divided into seven groups with three wells per group. The tachyzoites were treated with different final concentrations of (+)-usnic acid and acetylespiramycin for 4 h at 37 °C. The treated tachyzoites were then added to the cells. The cells were incubated at 37 °C in a humidified atmosphere for a further 24 h. The cover glasses were then taken out, washed with PBS, and stained with Giemsa dye solution. The numbers of cells infected by tachyzoites were counted to calculate the infection rate. (+)-Usnic acid–liposome was prepared with the mechanical dispersion–extrusion method. In brief, weighed amounts of cholesterol, egg phosphatidylcholine and (+)-usnic acid were dissolved in trichloromethane and dried to prepare a liposomal emulsion. The liposomal emulsion was mixed with PBS and passed three times through 200- and 100-nm-pore-size polycarbonate membrane filters, yielding (+)-usnic acid liposomes of 120–140 nm. Swiss Webster mice were infected with RH-strain tachyzoites of Toxoplasma gondii by intraperitoneal injection. Two hours after inoculation, all the infected mice were administered drugs orally. The survival times of the mice were recorded. In addition, one mouse in each group was killed on the fourth day after infection, and the peritoneal exudates of the mouse were harvested. The samples for transmission electron microscopy were prefixed with 2.5% glutaraldehyde. Following the previously described method, ultra-thin sections were made and examined in an H-600 transmission electron microscope for ultrastructural changes of Toxoplasma gondii. Data were expressed as the mean ± S.E.M. The statistical significance of the differences between groups was determined by one-way analysis of variance. Survival of mice was analyzed by the Kaplan–Meier method with SigmaPlot v11.0. Values of p < 0.05 were considered statistically significant. | Toxoplasma gondii is a pathogen that threatens human health and imposes an economic burden. Unfortunately, there are very few high-efficiency and low-toxicity drugs for toxoplasmosis in the clinic. (+)-Usnic acid, derived from lichen species, has been reported to have anti-inflammatory, antibacterial, anti-parasitic, and even anti-cancer activities. In association with the published article "Effects of (+)-Usnic Acid and (+)-Usnic Acid-Liposome on Toxoplasma gondii" [1], this data article provides detailed information on the experimental design, methods and features, as well as the raw data, of (+)-usnic acid and (+)-usnic acid-liposome on Toxoplasma in vivo and in vitro. (+)-Usnic acid may be a potential agent for treating toxoplasmosis.
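The statistical workflow named in the excerpt above (one-way ANOVA across treatment groups, Kaplan–Meier survival analysis) can be illustrated with a minimal sketch. All group labels and values below are hypothetical placeholders, not the study's data; scipy's one-way ANOVA is used, and the Kaplan–Meier (product-limit) estimator is written out directly rather than assuming the SigmaPlot procedure named in the text.

```python
import numpy as np
from scipy.stats import f_oneway

# One-way ANOVA across treatment groups (placeholder ratios of stained tachyzoites).
group_a = [0.21, 0.25, 0.23]   # hypothetical (+)-usnic acid group
group_b = [0.41, 0.38, 0.44]   # hypothetical acetylespiramycin group
group_c = [0.05, 0.07, 0.06]   # hypothetical saline control
f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Product-limit (Kaplan-Meier) survival estimate for one group of infected mice.
# All events are treated as observed deaths (no censoring), as in an acute
# RH-strain infection; the survival times below are hypothetical.
times = np.array([5, 5, 6, 6, 7, 8, 8, 9])  # day of death for each mouse
at_risk = len(times)
survival = 1.0
for t in np.unique(times):
    deaths = int(np.sum(times == t))
    survival *= 1.0 - deaths / at_risk   # multiply by conditional survival at t
    print(f"day {t}: S(t) = {survival:.3f}")
    at_risk -= deaths
```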
less than 3. Not being an acidophile sensu stricto but an acid-tolerant species, E. andevalensis might have evolved plasma membrane properties to keep H+ at bay, complex cell wall structures to tolerate low pH, and greater cation uptake. Adaptation to the harsh conditions of the acid root environment through significant accumulation of nutrients like Ca, Mg and B might provide E. andevalensis roots with a greater resistance to the structurally disturbing effects of H+ excess on cell walls and membranes. The greater tolerance of E. andevalensis to H+ toxicity found in this work may explain its adaptation to grow in very acid soils and the formation of monospecific communities. The extremely low soil pH and periodic flooding with the acid river waters might be too hostile for E. australis establishment. Considering the significant differences in root nutrient concentration, the species might also cope better with the poor nutrient availability found in the acid soils. Other factors, such as the intrinsically lower growth rate of the tolerant species or interactions with the soil microbiota, might also play a role in the plant's adaptation to this extreme environment. The maintenance of root cell-wall–membrane structure in the tolerant species, through a greater stability of the RG-II dimers, kept a functional plasma membrane H+-ATPase, leading to cytoplasmic H+ toxicity avoidance by active H+ efflux and providing the driving force for nutrient uptake. The significant differences in pH tolerance between Erica andevalensis and E. australis were mostly associated with major variation in the concentration of nutrients like Ca, Mg and B required for the maintenance of cell wall structure and membrane selective properties. Indeed, differences in boron bridging of the pectic polysaccharide domain, RG-II, were noted. The interspecific variation in pH sensitivity might certainly explain the differential distribution of the Erica species in the contaminated acid soils and margins of highly acid rivers in South Portugal and SW Spain. In the soils affected by past mining activities, complex environments evolved and are at present occupied by species highly adapted to heavy metal contamination and low nutrient availability. However, under conditions of very low pH some species like E. andevalensis may thrive more successfully than others because of their intrinsic root tolerance to H+ toxicity. S.R.O. and E.O. Leidi designed and performed the research; M.D. Mingorance performed chemical analyses; D. Sanhueza and S.C. Fry performed research on RG-II; E.O.L. and S.C.F. wrote the paper. The authors comply with the Ethical Standards in the COPE statement and declare that they have no conflict of interest.
| Background and aims: Tolerance to soil acidity was studied in two species of Ericaceae that grow in mine-contaminated soils (S Portugal, SW Spain) to find out if there are interspecific variations in H+ tolerance which might be related to their particular location. Methods: Tolerance to H+ toxicity was tested in nutrient solutions using seeds collected in SW Spain. Plant growth and nutrient contents in leaves, stems and roots were determined. Viability tests and proton exchange were studied in roots exposed, short-term, to acidic conditions. Membrane ATPase activity and the cell-wall pectic polysaccharide domain rhamnogalacturonan-II (RG-II) were analysed to find out interspecific differences. Results: Variation in survival, growth and mineral composition was found between species. The H+-tolerant species (Erica andevalensis) showed greater concentration of nutrients than E. australis. Very low pH (pH 2) produced a significant loss of root nutrients (K, P, Mg) in the sensitive species. Root ATPase activity was slightly higher in the tolerant species, with a correspondingly greater H+ efflux capacity. In both species, the great majority of the RG-II domains were in their boron-bridged dimeric form. However, shifting to a medium of pH 2 caused some of the boron bridges to break in the sensitive species. Conclusions: Variation in elements linked to the cell wall–membrane complex and the stability of their components (RG-II, H+-ATPases) are crucial for acid stress tolerance. Thus, by maintaining root cell structure, active proton efflux avoided toxic H+ build-up in the cytoplasm and supported greater nutrient acquisition in the H+-tolerant species.
dsrA, a gene previously associated with branched dextran composed of α- and α- linkages, showed a very low level of expression under the conditions of this study. Notwithstanding the possibility of other dextransucrases being involved, in this investigation at least one showed its highest activity during the first 24 h of sourdough fermentation, remaining consistent until the end of the process. Transcriptional analysis of the L. mesenteroides NRRL B-512F dextransucrase has shown that sucrose acts as an atypical activator, with dextransucrase activity detected only after several hours of contact with a high concentration of sucrose. In this study, dsrE and brsA, typically associated with α- branching, were highly expressed under cultivation conditions in MRS, as reflected in the initial sourdough inoculum, but their expression diminished afterwards under sourdough fermentation conditions. An analogous result was observed for genes loosely related to dsrE in other L. citreum strains isolated from sourdoughs after cultivation in MRS. This might be due to multiple factors, including CcpA-mediated regulation as a response to a growth environment characterized by a high amount of sucrose. However, a more in-depth understanding requires further study. Similarly, the amount of EPS formed also depends on environmental factors, which might play a role in the regulation of genes not related to EPS synthesis but responsible for the overall carbohydrate metabolism of lactic acid bacteria, such as CcpA or other sugar regulators. It was previously suggested that, in spontaneous sourdough back-slopping, the selection of specific environmental parameters, such as temperature and propagation time, could affect the final microbiota composition. The fermentation parameters used in this study, such as a temperature of 20 °C and a consistent supply of a high sucrose concentration, established an environment in which L. citreum FDR241 dominated and showed a consistent performance. Understanding the transcriptional regulation of EPS-synthesizing genes under different sourdough conditions may have high technological relevance for industrial processes. | This study focused on the performance of the dextran producer Leuconostoc citreum as a starter culture during 30 days of wheat flour type I sourdough propagation (back-slopping). As confirmed by RAPD-PCR analysis, the strain dominated throughout the propagation procedure, consisting of daily fermentations at 20 °C. The sourdoughs were characterized by consistent lactic acid bacteria cell density and acidification parameters, reaching pH values of 4.0 and mild titratable acidity. Carbohydrate consumption remained consistent during the propagation procedure, leading to the formation of mannitol and almost equimolar amounts of lactic and acetic acid. The addition of sucrose enabled the formation of dextran, inducing an increase in the viscosity of the sourdough of 2–2.6 fold, as well as oligosaccharides. The transcriptional analysis based on glucosyltransferase genes (GH70) showed the existence in L. citreum FDR241 of at least five different dextransucrases. Among these, only one gene, previously identified as forming only α-(1–6) glycosidic bonds, was significantly upregulated under sourdough fermentation conditions and was the main contributor to dextran formation. A successful application of a starter culture during a long sourdough back-slopping procedure will depend on strain robustness and fermentation conditions. Transcriptional regulation of EPS-synthesizing genes might contribute to increasing the efficiency of industrial processes.
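The excerpt above compares dextransucrase gene expression between MRS cultivation and sourdough fermentation. A minimal sketch of how such relative expression is commonly quantified, assuming RT-qPCR data analyzed with the 2^−ΔΔCt method of Livak and Schmittgen; the exact assay used in the study is not specified here, and the Ct values and the reference gene below are hypothetical placeholders, not the study's measurements.

```python
# Relative expression of a dextransucrase gene in sourdough vs. MRS broth,
# using the 2^-ddCt method. All Ct values are hypothetical placeholders and
# the reference gene stands in for an unnamed housekeeping gene.

def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene in the test vs. the control condition."""
    d_ct_test = ct_target_test - ct_ref_test      # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target gene in sourdough (test) vs. MRS (control).
fold = relative_expression(ct_target_test=18.2, ct_ref_test=16.0,
                           ct_target_ctrl=22.5, ct_ref_ctrl=16.1)
print(f"fold change in sourdough vs. MRS: {fold:.1f}x")  # ~18x with these inputs
```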
tiptrode. The correlation between sessions was as strong for wave I as for wave V. The bottom panel of Fig. 6 shows the wave I/V ratio for session T2 plotted against that of session T1. The correlations for the I/V ratio were larger for the canal tiptrode than the mastoid electrode, and similar to those for the individual waves shown in the upper two panels of Fig. 6. Fig. 7 shows scatter plots for the SP amplitudes and SP/AP ratios. The correlations between sessions were much weaker for the SP than for the main ABR waves. The correlation coefficients were larger for the SP in the canal tiptrode montage than for the mastoid electrode. The bottom panel of Fig. 7 shows the SP/AP ratios for session T2 plotted against those for session T1. The correlations for the SP/AP ratio were slightly larger in the canal tiptrode montage, though both recording locations showed much smaller coefficients than the wave I/V ratio. ICC values are shown in Table 2, together with 95% confidence intervals. The ICCs were largest for waves I, V, and the I/V ratio, and were larger for the canal tiptrode montage. These ICC values would generally be described as reflecting excellent repeatability, both within and between montages. The reliability of wave I across the two test sessions was comparable to that for wave V, with all ICC values greater than 0.80. Wave I amplitudes were larger for the canal tiptrode montage, but it does not appear that this was concordant with a substantial increase in reliability over the mastoid electrode montage. ICC values for wave I and V latency are reported in S4 of the Supplementary Materials. The SP and SP/AP ratio measures showed much lower reliability. The SP for the mastoid electrode had poor reliability, and although this was improved by using the SP/AP ratio, it still remained lower than the reliability reported for the other waves. The SP values from the canal tiptrodes were more reliable, and these were also improved by using a ratio measure, although, as indicated by the confidence intervals, there was no statistically significant difference between the reliability of the two montages for any of the measured waves or ratios. However, it is clear that any measure utilising the SP was much less reliable than one using waves I and V. The strongest ICC value of the four measures involving the SP was 0.46. Comparing this ICC value with the weakest ICC from the three measures using waves I and V demonstrates that the reliability of measures utilising the SP was significantly poorer than that of those using waves I and V. The primary aim of the current study was to quantify the test-retest reliability of ABR measures, to evaluate whether the ABR is a suitable technique for measuring auditory nerve function in individual human listeners. Although it has been reported that the ABR is stable over long time periods in an individual, much of this evidence relates to wave V. The data presented here indicate that wave I test-retest reliability, and therefore measurement error, is comparable to that of the larger-amplitude wave V.
Therefore, although wave V is often characterised as robust and reliable, and wave I as small and variable, it is clear that wave I has high within-subject reliability in normal-hearing listeners, at least for the stimulus intensity used here. If the other sources of between-subject variability can be controlled, wave I amplitude is sufficiently reliable to accurately characterize individual differences in auditory nerve function. Neither the SP nor the SP/AP ratio was reliable. Even when using the canal tiptrode montage, the best-case ICC was 0.46. In the current study these measures clearly have poor test-retest reliability, but this may be because of the small SP amplitudes evoked by an 80 dB nHL click. The click used by Liberman et al. to evoke the SP had a level of 94.5 dB nHL, and produced much larger SP amplitudes. However, it is not clear that raising presentation levels to enhance the SP is advisable. Even an 80 dB nHL stimulus is intolerably loud for some listeners. A stimulus presentation level greater than 90 dB nHL could risk exceeding recommended daily exposure limits after a few thousand presentations. Moreover, even such exposure limits may be too permissive, since impulse noise is more damaging than continuous-type noise of equivalent energy. It may also be the case that the SP is inherently unreliable, even if higher stimulus presentation levels are used. Either way, the clinical utility of the SP measure may be limited. The SP/AP ratio in the current study used an arbitrary baseline to compute the amplitude of both the SP and the AP components, as described by Liberman et al. It has been reported previously that peak-to-baseline measures of wave I amplitude are less reliable than peak-to-trough estimates of amplitude. Therefore, measures such as the SP/AP ratio could benefit from using peak-to-trough estimates of the AP. However, in the current study this made little difference to the reliability of the SP/AP ratio, which suggests that the variability of the SP was the limiting factor. One concern when trying to measure small, supra-threshold changes in the auditory nerve function of normal-hearing listeners is that scalp-mounted mastoid electrodes are simply not sensitive enough to reliably detect the subtle changes in evoked responses. The results presented in this study indicate that moving the recording site closer to the generator of wave I, by placing a tiptrode in the ear canal, produced only a small increase in reliability for waves I and V, although the benefit was greater for the SP. The amplitude of wave I increased and that of wave | The current study aimed to determine whether ABR wave I amplitude has sufficient test-retest reliability to detect impaired auditory nerve function in an otherwise normal-hearing listener. The stimulus was an 80 dB nHL click. The summating potential (SP) and the ratio of SP to wave I were also quantified and found to be much less reliable than measures of wave I and V amplitude. We conclude that, if the other sources of between-subject variability can be controlled, wave I amplitude is sufficiently reliable to accurately characterize individual differences in auditory nerve function.
V decreased when using a canal tiptrode relative to a mastoid electrode, as seen in other studies. However, reliability of the wave amplitude did not appear to be directly linked to the absolute amplitude of the wave. Wave V was slightly more reliable in the canal tiptrode montage compared to the mastoid electrode montage, despite having lower amplitudes on average. Given that the use of canal tiptrodes increases the financial burden on ABR practitioners and can reduce participant comfort, it is not clear that such equipment is necessary or advisable for the recording of ABR waves I or V. The final aim of the study was to investigate supra-threshold changes in the ABR in relation to noise exposure. The results presented here, for a group of young females in which low- and high-noise exposed listeners were well-matched for audiometric thresholds and age, indicate no changes in wave I amplitude as a function of noise exposure. There is no evidence for noise-induced cochlear synaptopathy. This is consistent with other recent studies in our laboratory which have found no association between noise exposure and wave I amplitude in young listeners with normal audiograms. The range of noise exposures in the present study allowed for good separation between the groups, although compared with Prendergast et al. there were fewer listeners with very high exposures, and more listeners with very low exposures. It should be noted that an absence of any evidence for cochlear synaptopathy is not the same as evidence for absence of the disorder. It remains unclear how sensitive the ABR is to a loss of low-SR fibers, even in animals. Shaheen et al. suggested that the frequency-following response is a more sensitive identifier of cochlear synaptopathy than the ABR. It may yet prove that in humans, a click-evoked response is too crude a measure with which to elucidate subtle supra-threshold, sub-clinical deficits. Liberman et al. also reported no significant difference in wave I amplitude between low- and high-noise exposed groups of listeners, although they did find a large difference between the groups in the SP/AP ratio. Liberman et al. reported mean SP amplitudes of approximately 0.14 and 0.21 μV, and SP/AP ratios of 0.26 and 0.46, for the low- and high-noise exposure groups, respectively. For the canal tiptrode montage in the present study, the SP amplitudes were 0.07 and 0.08 μV, and the SP/AP ratios were 0.22 and 0.26, for the low- and high-noise exposure groups, respectively. Although the present data show a trend in the direction reported by Liberman et al., the effect did not reach significance. The click intensity used in the current study was 14.5 dB lower than that used by Liberman et al., and therefore it may be that substantial differences between noise-exposure groups are only observed for more intense presentation levels than used here. Alternatively, there were substantial high-frequency audiometric differences between the groups in the Liberman et al.
study, in contrast to the present study, in which the groups were closely matched at high frequencies. Hence the populations tested in the two studies may not be directly comparable. One possibility is that high-frequency audiometric loss is a marker for cochlear synaptopathy. For example, only noise exposures that produce high-frequency threshold elevations may have the capacity to cause a substantial loss of cochlear synapses. Another is that SP/AP ratios may be directly influenced by high-frequency sensitivity, in the absence of synaptopathy. It may also be crucial to consider age more carefully, for example, whether the age at which intense noise exposures are experienced is critical, or whether the effects of noise-induced synaptopathy are more easily observed as an accelerated decline in hearing with advancing age. | The auditory brainstem response (ABR) is a sub-cortical evoked potential in which a series of well-defined waves occur in the first 10 ms after the onset of an auditory stimulus. Wave V of the ABR, particularly wave V latency, has been shown to be remarkably stable over time in individual listeners. The wave I component has attracted interest recently, as its amplitude has been identified as a possible non-invasive measure of noise-induced cochlear synaptopathy. Thirty normal-hearing females were tested, divided equally into low- and high-noise exposure groups. ABR recordings were made from the ipsilateral mastoid and from the ear canal (using a tiptrode). Although there was some variability between listeners, wave I amplitude had high test-retest reliability, with an intraclass correlation coefficient (ICC) comparable to that for wave V amplitude. There were slight gains in reliability for wave I amplitude when recording from the ear canal (ICC of 0.88) compared to the mastoid (ICC of 0.85). Finally, we found no significant differences in the amplitude of any wave components between low- and high-noise exposure groups.
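The ICC values that the ABR excerpt above leans on can be computed directly from the subjects-by-sessions amplitude matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement, after Shrout and Fleiss); the amplitude values below are hypothetical placeholders, not the study's recordings.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data is an (n subjects) x (k sessions) array, e.g. wave I amplitudes at T1, T2."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # between sessions
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical wave I amplitudes (uV) for five listeners at sessions T1 and T2.
amplitudes = np.array([[0.31, 0.29],
                       [0.45, 0.47],
                       [0.22, 0.25],
                       [0.38, 0.36],
                       [0.27, 0.28]])
print(f"ICC(2,1) = {icc_2_1(amplitudes):.2f}")
```

With these placeholder values the between-subject variance dominates the residual, so the ICC is high, which is the pattern the study reports for waves I and V; a noisy measure like the SP would show a much smaller between-subject to residual variance ratio and hence a low ICC.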
Hepatitis B virus (HBV) is 50–100 times more infectious than HIV, and it is the aetiologic agent of hepatitis B, an infection that is endemic in Nigeria. HBV is a double-stranded DNA virus of complex structure that causes infection of the liver. The virus belongs to the Hepadnaviridae family and is the most common cause of chronic liver disease, hepatocellular carcinoma and necrotizing vasculitis. HBV can cause both acute and chronic infections, and during the acute phase of infection most people do not experience symptoms. Nevertheless, certain individuals develop acute illness with symptoms that last several weeks, including yellowing of the skin and eyes, nausea, dark urine, extreme fatigue, abdominal pain and vomiting. Additionally, a small subset of individuals with acute hepatitis can develop life-threatening acute liver failure, whereas in certain individuals HBV establishes a chronic liver infection that progresses to cirrhosis or cancer of the liver. According to the WHO's 2017 global hepatitis report, 257 million people were living with chronic HBV infection in 2015, with the African and Western Pacific regions accounting for the highest burden. In Nigeria, hepatitis B prevalence ranges from 4–32% depending on the subject population. Laboratory diagnosis of HBV includes detection of markers such as HBsAg, HBsAb, HBcAb, HBeAg and HBeAb in the serum. Detection of HBsAg in the serum is indicative of HBV infection, and this marker is the most frequently used in testing for HBV infection. HBsAg is detected in the serum within 10 weeks of exposure to the virus, and its persistent presence for longer than 6 months may indicate chronic infection. Additionally, new HBV infection in certain individuals evolves into chronic infection, whereas in others there is spontaneous clearance of the virus, with the risk of developing chronic infection being highest in children. As such, the focus of prevention of HBV infection is on children below five years of age, and children who test positive for HBsAg at five years of age are considered to have chronic infection. The presence of antibody to hepatitis B surface antigen (HBsAb), a neutralizing antibody, suggests recovery and protective immunity against the viral infection. It is the only detectable marker in those who respond successfully to the hepatitis B vaccine. On the other hand, serum hepatitis B envelope antigen (HBeAg) is associated with active HBV replication and transmission of infection. Moreover, an individual may harbour HBV infection for 30 years or more before the manifestation of clinical symptoms. Remission of disease is associated with sero-conversion from HBeAg to HBeAb and disappearance of HBV DNA from the serum. Although hepatitis B core antigen is not found in serum because it is an intracellular antigen, the serum antibody to hepatitis B core antigen (HBcAb) signifies earlier contact with the virus. During early HBV infection, IgM anti-HBc first appears in the serum, and this is usually detected within one month after the appearance of HBsAg. The presence of IgG anti-HBc, which is not a neutralizing antibody, remains for life in both acute and chronic cases of infection. However, in the absence of circulating HBsAg, the presence of IgG anti-HBc in the serum may suggest an occult HBV infection in persons positive for serum HBV DNA irrespective of other HBV serologic markers. Although HBV infection is endemic in Nigeria, the epidemiology of the virus among young people and student populations is poorly understood across the country, in spite of the significance of this in
designing effective intervention initiatives. In this study, we therefore identified the serologic markers of HBV infection and analysed associated socio-demographic factors in a subset of young individuals in Central Nigeria. We found that the prevalence of HBsAg was high and that the risk of transmission, denoted by the prevalence of HBeAg, was significant in this population. Our findings will bolster understanding of the epidemiology of the virus, especially in Nigeria, with implications for intervention initiatives that include designing effective treatment and prevention policies. This study was conducted in Nasarawa State University, Keffi (NSUK), Nasarawa State, Nigeria. NSUK is a higher educational institution with a student population well above twenty thousand. The institution offers both undergraduate and postgraduate programmes with a blend of both local and foreign students. Keffi city is approximately 68 km from Abuja, the capital city of Nigeria, and 128 km from Lafia, the capital city of Nasarawa State. It is located at Latitude 8°5 N and Longitude 7°8 E, at an altitude of 850 m above sea level. The study recruited 350 newly admitted undergraduate students of the 2016/2017 academic session who gave informed consent for their participation. A representative sample size was determined using the formula propounded by Naing. Socio-demographic information was obtained by administering a structured questionnaire. Three ml of blood was obtained from each participant by venepuncture and placed in an appropriately labelled plain tube. This was allowed to clot at room temperature and spun for 5 min at 3000 rpm. The resultant sera were harvested into well-labelled cryovials and stored at −20 °C until use. To detect HBV serologic markers, a 5-panel HBV test kit was used. Testing and result interpretation were carried out according to the manufacturer's instructions. Ethical approval for this study was obtained from the Health Research Ethics Committee at the Federal Medical Centre, Keffi, Nasarawa State, Nigeria. Data obtained were subjected to descriptive statistical analysis using Smith's Statistical Package Version 2.80. Chi-square values were calculated and P values obtained at the 95% confidence level, with P values ≤0.05 considered statistically significant. A total of 350 newly admitted undergraduate students of Nasarawa State University, Keffi, with a mean age of | Hepatitis B virus (HBV) is 50–100 times more infectious than HIV, and hepatitis B is endemic in Nigeria. In this study, we evaluated the serologic markers of HBV infection and associated socio-demographic factors in a subset of young people in Central Nigeria. Blood samples were collected from 350 consenting newly admitted students of the 2016/2017 academic session of Nasarawa State University, and their socio-demographic information obtained using structured questionnaires. The sera were analysed for HBsAg, HBsAb, HBcAb, HBeAg and HBeAb using a 5-panel HBV profiling diagnostic kit (Qingdad High Top Biotech Co. Ltd, Hangzhou, China). Data were analysed using Smith's Statistical Package (version 2.80, California, USA), and tests of significance performed at the 95% confidence limit, with P values ≤0.05 considered significant.
22.2 years voluntarily participated in this study. Of these, 34 were positive for HBsAg, 134 had HBsAb, 98 showed evidence of HBcAb, 13 were positive for HBeAg and 16 had HBeAb. The patterns of infection markers among the participants show that 4 subjects had chronic infection with high viral replication, 9 had acute infection, 16 were carriers with low viral replication, 5 were recently vaccinated, 56 were immune due to vaccination, 78 were immune due to natural exposure to the virus, while 182 had never had any exposure to the virus. Table 2 shows the socio-demographic factors associated with serologic markers for HBV infection among the apparently healthy young participants. Of the 350 participants, 157 were male and 193 were female. Overall, 34 had HBsAg, 134 had HBsAb, 98 had HBcAb, 13 had HBeAg and 16 had HBeAb. Gender distribution showed that 20, 78, 59, 9 and 11 of male subjects had HBsAg, HBsAb, HBcAb, HBeAg and HBeAb respectively. Among female subjects, the distribution of HBsAg, HBsAb, HBcAb, HBeAg and HBeAb was 14, 56, 39, 4 and 5 respectively. Being male, being unmarried, and having histories of alcohol consumption, blood transfusion, sharing of sharp objects and multiple sex partners were significant predictors of infection. The 9.7% proportion of individuals with HBsAg, as our findings reveal, demonstrates that the prevalence of HBV in the population was high based on the World Health Organization's classification of prevalence into low, moderate and high. Previous studies in Nigeria have illustrated similarly high prevalences in student populations. For example, reported prevalences have included 9.2% among students of a tertiary institution in North Western Nigeria; 11.5% among students of Nasarawa State University, Nigeria; 12.0% among asymptomatic students of Ahmadu Bello University, Nigeria; 15.5% among medical students of Usman Danfodio University, Nigeria; and 31.5% among apparently healthy students of a tertiary institution in North Eastern Nigeria. Obviously, these findings suggest that hepatitis B is endemic in Nigeria. However, it is likely that the varying rates reported by the different studies were impacted by sample size and study population type. In contrast, some other studies have reported relatively low prevalences of 4.1%, 4.7%, 6.5% and 8.0% in varying populations of adolescents, university students, school children and pregnant women respectively, in certain parts of Nigeria. Sample size, sample population and the varying levels of engagement in risk-predisposing practices across populations and communities might account for the variations in findings. Our evaluation further reveals that 134 of the participants had HBsAb either due to vaccination or previous natural exposure to the virus, with 78 of them belonging to the latter category, indicating that 22.3% of the study subjects had their infections resolved after natural exposure to the virus. These findings are consistent with the 22.7% prevalence of HBsAb reported among healthy individuals in Benue, Nigeria; 22.2% among surgeons in Lagos, Nigeria; and 28% among hospital personnel in Cairo, Egypt. The detection of HBcAb in 28.0% of the participants implies earlier exposure to the virus by this proportion of the participants. However, some studies have reported higher prevalences of HBcAb in certain populations. Differences in sample populations between the studies may account for the variation in findings. Additionally, a study reported by Sadoh and colleagues in 2013 found an 11.4% HBcAb prevalence in a population of infants
in Benin. It is instructive that the Benin study was conducted in an infant population, in contrast to ours, which was conducted in a population of teenagers and young adults. Therefore, the comparatively lower prevalence in the Benin study might be attributed to the age differences between the two populations. We found that 3.7% of the participants had HBeAg. Since this marker is indicative of active replication and transmission, there was a significant risk of transmission in this population, with a potential impact on the incidence of the disease and a concomitant challenge to control initiatives. It has been established that HBsAg-positive individuals who are also HBeAg positive have a 70–90% chance of transmitting the virus to their contacts, in addition to being at high risk of developing persistent liver disease leading to cirrhosis and primary liver cancer if not treated. Moreover, studies in other populations have found higher HBeAg prevalences of 6.5% and 4.7% among pregnant Nigerian women and a set of individuals who were HBsAg positive, respectively. However, a lower rate of 2.7% was reported in one study in Benue State, Nigeria. These differences likely reflect the fact that the studies were conducted in different populations, and such population differences would understandably impact the outcomes. In further support of this explanation, while the study by Odimayo and colleagues included only individuals who were seropositive for HBsAg, our study consisted of apparently healthy individuals. HBeAb is the antibody produced against HBeAg, and its presence denotes low infectivity and transmission of the virus or remission of disease. In other words, just like HBsAb, its presence most likely indicates recovery from HBV infection. We found a prevalence of 4.6% of HBeAb among the participants. This is lower than the 8.0% reported in 2016 by Odimayo and colleagues among HBsAg-seropositive individuals, 13.0% by Mbaawuaga and colleagues, and 51.6% by Abah and Aminu among a population of pregnant women in Nigeria. Differences in study populations may account for the observed differences in findings. HBV serologic markers reveal much about the prognosis of hepatitis B. Overall, 1.1% of the participants had chronic HBV infection with high viral replication, 2.6% had acute infection with high viral replication, 4.6% were carriers with low viral replication, | Of the 350 participants, 157 (44.9%) were male and 193 (55.1%) were female. Overall, 34 (9.7%) had HBsAg, 134 (38.3%) had HBsAb, 98 (28.0%) had HBcAb, 13 (3.7%) had HBeAg and 16 (4.6%) had HBeAb. Gender distribution showed that 20 (12.7%), 78 (49.7%), 59 (37.6%), 9 (5.7%) and 11 (7.0%) of male subjects had HBsAg, HBsAb, HBcAb, HBeAg and HBeAb respectively. Among female subjects, the distribution of HBsAg, HBsAb, HBcAb, HBeAg and HBeAb was 14 (7.3%), 56 (29.0%), 39 (20.2%), 4 (2.1%) and 5 (2.6%), respectively. Being male, unmarried and having histories of alcohol consumption, blood transfusion, sharing of sharp objects and multiple sex partners were significant predictors of infection (p < 0.05).
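As an illustration of the association tests reported in the HBV excerpt above, a minimal sketch of a 2×2 chi-square test using the gender-by-HBsAg counts given in the text (20 of 157 males and 14 of 193 females positive). This shows only the mechanics of the test; the study's exact procedure and groupings are not specified here, so the p value computed from these marginal counts alone (with Yates' continuity correction, scipy's default for 2×2 tables) need not reproduce the reported significance.

```python
from scipy.stats import chi2_contingency

# 2x2 table of HBsAg status by gender, built from the counts in the text:
# rows are male/female, columns are HBsAg positive/negative.
table = [[20, 157 - 20],    # males: 20 positive, 137 negative
         [14, 193 - 14]]    # females: 14 positive, 179 negative
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
print("expected counts under independence:", expected.round(1))
```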
1.4% were recently vaccinated, 16.0% were immune due to vaccination, 22.3% were immune due to previous natural exposure to the virus, and the remaining 52.0% had never had any exposure to the virus. In contrast, one study in Benue State, Nigeria, reported higher prevalences of 3.8% and 8.7% for chronic and acute infections respectively. The Benue study recruited pregnant women, who would normally have low immunity, and this might have informed the differences in outcome between that study and ours. There was a significant association between gender and the prevalence of HBsAg and HBeAb in this study. Although differences in the prevalence of HBsAb, HBcAb and HBeAg were not statistically significant, the prevalences of HBsAg, HBsAb, HBcAb, HBeAg and HBeAb were all higher in participants who were male than female. These findings are supported by reports from Isa and colleagues in North Western Nigeria and Pennap et al. in Keffi, Nigeria, but find little support from the findings of Mustapha and Jibrin among HIV patients in Gombe State, Nigeria. Since our study participants were freshmen who had just left their various homes, the common culture whereby young women spend most of their time at home on domestic activities, with little chance of exposure to risk factors outside the home, while young men have more freedom of movement and association, might account for the higher prevalence of HBsAg in the male than the female participants in our study. This study recorded a significant association between marital status and the prevalence of HBsAg among the participants. The prevalence of HBsAg was higher among single participants than their married counterparts. This finding finds support in reports by Ejele and colleagues among HIV-positive patients in the Niger Delta, Nigeria, and Isa et al. in a tertiary institution in North Western Nigeria, the differences in study populations notwithstanding. Moreover, a history of blood transfusion was significantly associated with the prevalence of HBsAg and HBeAg. A higher prevalence of HBsAg was observed among those who had received a blood transfusion at some point in their lives. Until recently in Nigeria, testing of blood donors for hepatitis B virus infection was not a routine practice in most clinical settings. This finding concurs with a previous finding by Abah and Aminu in Nigeria. The prevalence of HBsAg was significantly higher among participants who had multiple sex partners than those without. This finding is supported by previous findings, including reports by Adekunle et al. among blood donors in a tertiary hospital in Nigeria, Pennap et al.
among students of a Nigerian tertiary institution, and Mboto and Edet among students of the University of Uyo, Nigeria. There was a statistically significant difference in the prevalence of HBsAg and HBeAb in relation to scarification marks in this study. It was observed that participants with scarification marks were more likely to have HBV infection than those without. This finding agrees with previous reports, and participants in this category were likely from local homes where knowledge of transmission of the virus through the use of sharp unsterilized objects in making body-piercing marks is inadequate or lacking. In addition, alcohol consumption was unexpectedly not significantly associated with infection in this study. This disagrees with previous reports that have designated alcohol consumption as a transmission risk. It is possible that the participants in our study under-reported their alcohol consumption habits, so that our data on this may not be a true reflection of reality. Moreover, a higher prevalence of HBsAg was recorded among those who shared sharp objects than those who did not. This result is in consonance with other studies done in Nigeria, and our findings further confirm that practices such as the sharing of sharp unsterilized objects permit transmission of the virus. Additionally, there was no statistically significant association between the prevalence of HBV serologic markers and the sharing of clothes and bed spaces among the participants. This finding is supported by reports from Ndako et al. in North Central Nigeria and Isa et al. in North Western Nigeria. However, this should not preclude the fact that HBV can be transmitted through those means, since the virus can be found in saliva, tears, urine, breast milk and other body fluids. Although participants who had a history of HBV infection in their families had a higher prevalence of HBV than those without, this factor was not found to be significantly associated with infection. However, a more robust study design comparing two groups of individuals, one with the history and the other without, would provide a much more reliable result. Generally, our findings and other similar findings raise critical policy questions about intervention programmes that should be designed for students and young people, especially in Nigeria and the rest of Africa. This study reveals a high prevalence of HBV and risk of transmission in the apparently healthy freshmen. This is alarming and unacceptable for a disease that has had a vaccine available since 1982. Youth and student populations across the country should be targeted for special, effective intervention initiatives within a comprehensive, holistic intervention programme that includes all populations and individuals of all ages. | This study reveals a high prevalence of HBV and risk of transmission in the apparently healthy freshmen. Our findings have critical implications for intervention initiatives, especially among students and youths.
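The HBV study above states that its sample size was determined with the formula propounded by Naing. A common form of that formula for prevalence studies is n = Z²p(1 − p)/d²; a minimal sketch follows, where the expected prevalence and precision are assumed values for illustration, since the study's actual inputs are not given in the text.

```python
import math

# Cochran-type sample size formula as popularized by Naing et al. for
# prevalence studies: n = Z^2 * p * (1 - p) / d^2.
z = 1.96    # standard normal deviate for 95% confidence
p = 0.12    # assumed expected HBsAg prevalence (illustrative)
d = 0.035   # assumed absolute precision (illustrative)

n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)
print("minimum sample size:", n)  # ~332 with these assumed inputs
```

With these (assumed) inputs the formula returns a minimum of roughly 332 participants, of the same order as the 350 students the study actually enrolled.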
The Succulent Karoo is a biodiversity hotspot in the semi-arid winter rainfall region of southern Africa, which contains a high proportion of endemic plant species. Quartz fields are rare features of South Africa's Succulent Karoo Biome, as they are represented in only five of the biome's 63 vegetation units and cover less than 8% of the biome's 111,000 km2 surface area. Despite their small area, quartz fields contribute significantly to plant diversity and endemism in the Succulent Karoo, as 155 of its 1600 endemic species are restricted to quartz fields. The dwarf vegetation of quartz fields contains growth forms and species very different from surrounding non-quartz field habitats. Microclimate conditions also differ from the surroundings in that the surface covering of angular white stones gives rise to lower air temperatures at the soil surface. Geographic separation of quartz fields has resulted in high levels of plant compositional turnover, resulting in six quartz field regions being recognized, namely Little Karoo, Knersvlakte, Riethuis-Wallekraal, Northern Richtersveld, Southern Richtersveld and Bushmanland-Warmbad. The Riethuis-Wallekraal quartz fields occur on the lowlands of the Namaqualand region in north-western South Africa. This quartz field area contains 17 quartz field specialist species belonging to the Asteraceae and Crassulaceae, as well as to the Mesembryanthema group within the Aizoaceae. Seven species are restricted to these quartz fields. Livestock farming is the dominant land use in the Succulent Karoo. It is likely that prolonged livestock hoof action would have impacted the biological soil crusts of the Riethuis-Wallekraal quartz fields and that this should be noticeable when indexing soil aggregate stability. A study by Kaltenecker et al. found significant biological soil crust recovery after livestock exclusion in Sagewood plant communities in the arid winter rainfall region of the United States of America. Concostrina-Zubiri et al. also suggest that heavy grazing by livestock may alter biological soil crust patterns in rangeland landscapes. Biological soil crusts, especially those types at the late succession stage, are indicators of healthy and stable soils, as they contribute to soil organic matter, the binding of soil particles, and resistance to water and wind erosion. Biological soil crusts have been observed on the quartz fields of the study area. It can be expected that changes in biological soil crust composition and functioning, as a result of livestock disturbance on the Riethuis-Wallekraal quartz fields, will be reflected in its vegetation composition. Biological soil crusts are beneficial for plants as they fix atmospheric nitrogen, making this limiting nutrient available to shallow-rooted plants such as quartz field specialists. The destruction of biological soil crusts may also facilitate the establishment of invasive plant species. Besides possible damage to biological soil crusts, other likely consequences of trampling are physical damage to plants, soil compaction and accelerated soil erosion. Soil compaction alters soil structure and hydrology, which can affect water absorption by plants. Depending on the intensity and period of livestock stocking rates, these factors can cause vegetation change, as has been observed along a piosphere in the Tanqua Karoo part of the Succulent Karoo biome. Haarmeyer et al.
found that intense stocking of livestock reduced species richness and the abundance of endemic species in quartz field vegetation. Livestock paths approximately 0.30 m wide are present on the Riethuis-Wallekraal quartz fields. These paths are seemingly denuded of vegetation and appear to have more exposed soil than areas away from livestock paths. Smooth surfaces offer less resistance to wind and water erosion, and loose soil particles are more likely to be displaced by such erosive forces. More than half of the Riethuis-Wallekraal endemic quartz field flora are dwarf succulents < 0.05 m in height. Because of the undulating terrain on which the Riethuis-Wallekraal quartz fields are located, we sampled quartz field vegetation and soil aggregate stability upslope and downslope of the paths. Loose soil particles dislodged by livestock hoof action are expected to move downslope during rain, and such soil deposition could be to the detriment of the unique quartz field vegetation. Burial of dwarf succulent plants is likely to have negative impacts on their growth and survival. Fig. 2 shows a typical example of a dwarf succulent species of the Riethuis-Wallekraal quartz fields that is vulnerable to trampling and soil burial. Of concern to conservationists is whether plant species unique to quartz field vegetation will be able to persist in the face of livestock pressures. We tested the following hypotheses relating to the presence of livestock paths on the Riethuis-Wallekraal quartz fields and their potential impact on species composition: (1) livestock hoof action affects the soil stability on livestock paths and indirectly also soil stability downslope; (2) quartz field vegetation is less diverse on livestock paths and on the adjacent quartz field area downslope; and (3) endemic flora specific to quartz fields is absent from livestock paths and the adjacent quartz field area downslope. Field work was carried out from May to July 2005. Plots were laid out for sampling vegetation upslope of, downslope of, and on livestock paths. Thus a natural experimental design was followed, whereby the effects of each treatment were tested but treatments could not be randomly allocated. Plots on the livestock paths covered the width of the paths and were flanked by the upslope and downslope plots. The underlying assumption is that hoof action destabilizes soil on paths. It was expected that sediment transported downslope by rain, or dislodged by hoof action resulting from livestock activity, would result in burial of dwarf flora and biological soil crusts. Off-path impacts of livestock on soil and vegetation were therefore expected to be greater below than above the path. For each plot the step point method was used to record presence or absence of plant | Quartz fields are rare features that contribute significantly to vegetation diversity and endemism of South Africa's Succulent Karoo Biome. The Riethuis-Wallekraal quartz fields in the north-western Namaqualand area of South Africa contain 17 quartz field specialist species, of which seven are endemic to this specific area. Hoof-action by livestock has formed paths of approximately 0.30 m on these quartz fields. trampling) and indirect effects (e.g. We tested the hypotheses that the unique quartz field vegetation and biological soil crusts would be affected by loose soil particles transported downslope from the paths.
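The step-point sampling named in the excerpt above lends itself to a simple worked example: each sampling point along a transect records the species hit (or bare ground), from which percent cover and species richness per plot follow directly. A minimal sketch, assuming one record per point; the species codes and records below are hypothetical placeholders, not the study's data.

```python
from collections import Counter

# Step-point records for one plot: species code hit at each sampling point,
# with 'BG' marking bare ground. All codes are hypothetical placeholders.
points = ["BG", "ARG", "BG", "CON", "ARG", "BG", "BG", "MES",
          "ARG", "BG", "CON", "BG", "BG", "BG", "ARG", "BG"]

hits = Counter(points)
n = len(points)
plant_hits = n - hits["BG"]
print(f"total plant cover: {100 * plant_hits / n:.0f}%")
print(f"species richness: {len([s for s in hits if s != 'BG'])}")
for sp, c in sorted(hits.items()):
    if sp != "BG":
        print(f"  {sp}: {100 * c / n:.0f}% cover")
```

Repeating this per plot for the on-path, upslope and downslope treatments would yield the cover and richness values that the study compares across positions.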
livestock paths in our study area were altered sufficiently to reduce soil stability. Our argument is supported by Belnap and Eldridge, who described the disturbance and recovery of biological soil crusts by reviewing research conducted across the world. They state that mechanical disturbance, such as trampling by livestock, people and vehicles, is known to cause severe compositional changes of biological soil crusts. Biological soil crusts are important for stabilizing soils by increasing resistance to wind and water erosion. Changes in biological soil crust composition are critical, as there is a gradient of resistance to disturbance through the stages of biological soil crust succession based on morphological and reproductive attributes. For example, cyanobacteria that occur at an early successional stage are more tolerant of disturbance than certain late-succession mosses and lichens, but provide less protection against disturbances. Having already shown that the livestock paths in this study have low soil stability, it is significant that Pohl et al. demonstrated a positive relationship between plant diversity and soil aggregate stability in an alpine environment. They argued that different plant growth forms have different root systems and, because of this structural variability, play an important role in soil stabilization. Most leaf-succulent plants of the Succulent Karoo have fairly uniform structure and relatively shallow root systems, with those of quartz field vegetation considered to be even more limited in structure and depth. The root systems of quartz field flora are therefore believed to play a limited role in supporting the formation of stable soils on quartz fields. Quartz fields are generally small, and the associated vegetation differs markedly from the taller vegetation on adjacent substrata. It would be incorrect to assume that livestock spend little time and energy foraging on quartz fields because of the small size of the plants and the absence of more palatable and accessible growth forms such as shrubs, grasses and annual forbs. Indeed, Haarmeyer et al. showed that quartz field vegetation is utilized by small livestock under high stocking rates. Our study found significant differences in plant species diversity and soil stability on and off livestock paths, but no consistent differences between upslope and downslope locations. This suggests that impacts were limited to the livestock paths only and did not extend to downslope sites as predicted.
have shown that the abundance and species richness of certain endemic quartz field species decreased under heavy grazing by livestock, irrespective of rotational or continuous grazing regimes. In our study we found fewer quartz field specialists on the livestock paths. Our results suggest that the transformation of soil properties, and hence the edaphic microenvironment, as a result of livestock path formation happened at a small scale and played a limited role in reducing quartz field taxa overall. However, it is important to note that the past stocking rate in the current study is unknown but, the site being a commercial farm, is likely to have been lower than in communal areas, where the heaviest impacts of livestock on plant community structure have been recorded. The study was carried out in the Riethuis section of the Namaqua National Park. Previous land use of the area included farming with small stock, which was removed during 1999 when ownership was transferred to the Namaqua National Park. The stocking rate of sheep by the previous landowner is unknown, but was probably relatively light, at least in comparison to adjacent communally-owned areas. The former National Department of Agriculture's 1993 map estimates that the grazing capacity for this region is 31–45 ha per animal unit. The study area contains a significant proportion of the Riethuis-Wallekraal Quartz Vygieveld, which is one of 63 vegetation units recognized in the Succulent Karoo. The Riethuis-Wallekraal quartz field vegetation is dominated by low-growing leaf-succulents, with species belonging mainly to the Asteraceae and Crassulaceae, as well as the Mesembryanthema group within the Aizoaceae family. The landscape of the Riethuis-Wallekraal quartz fields is characterized by low-lying undulating hills with scattered gneiss outcrops. Quartz field sizes range from 10 to more than 100 m in diameter, and the fields are covered with white angular stones, 0.02–0.60 m in size, that have weathered from quartz bedrock or quartz veins. The mean annual rainfall for a farm close to the study area is 137 mm (range: 65–188 mm, 1983 to 2004). Rainfall is concentrated in the winter months. Temperatures in winter are relatively mild, with minimum temperatures above 0 °C, whereas summers are hot and dry, with maximum temperatures often reaching 35 °C. | It would be important to conservationists to understand whether direct (e.g. burial of flora by sediment movement) associated with the livestock paths holds any threat to the dwarf succulent (<. Livestock paths also had lower cover and fewer quartz field specialist species. It is concluded that under conditions of intense and continuous grazing, livestock are likely to have an even stronger negative impact on the specialist quartz field flora.
and compound solutions was the same as in the final step of the purification. The compounds were dissolved in DMSO and then diluted with buffer. DMSO content in the final compound solutions did not exceed 0.5%. Data integration, fitting, and evaluation were performed using the software Origin 7 with the ITC200 plugin provided by MicroCal/GE Healthcare. PrfA in complex with compound 1 was co-crystallized by the hanging-drop vapor-diffusion technique at 18 °C. Crystals grew in 5 days when the protein solution was mixed with an equal volume of mother liquor containing 20% PEG-4000, 16% isopropanol, and 100 mM sodium citrate. Before data collection, the crystals were transferred to a cryo-protectant solution consisting of 16% glycerol in the precipitant solution. The crystals were flash-cooled to 100 K using a Cryostream 700 cooler and stored in liquid nitrogen. Diffraction data were collected at 100 K at the ESRF beamline ID29. The structure was solved with molecular-replacement methods. Data collection and refinement statistics are shown in Table 2. Details of the structure determination are provided in the Supplemental Information. The atomic coordinates and structure factors have been deposited with the Research Collaboratory for Structural Bioinformatics, Rutgers University, New Brunswick, NJ. J.J., F.A., E.S.A., C.A., S.B., S.H., J.G., and P.W.S. wrote the manuscript. K.S.K., E.C., and J.G. synthesized and characterized the molecules. A.B., C.G., U.H.S., and E.S.E. determined the crystal structure and interpreted the data with J.J., F.A., and J.G. C.A., S.H., and J.W. performed western blot analysis and cell-infection experiments. J.W. and K.V. performed northern blot experiments. S.H. determined the hemolytic activity and growth rate. K.V. performed preliminary experiments. M.S.N. performed ITC and analyzed the data with P.W.S. C.A. performed and analyzed the SPR experiments. F.A. and J.J. conceived and initiated the project. All authors read and edited the manuscript. | The transcriptional activator PrfA, a member of the Crp/Fnr family, controls the expression of some key virulence factors necessary for infection by the human bacterial pathogen Listeria monocytogenes. These inhibitors bind the transcriptional regulator PrfA and decrease its affinity for the consensus DNA-binding site. Structural characterization of this interaction revealed that one of the ring-fused 2-pyridones, compound 1, binds at two separate sites on the protein: one within a hydrophobic pocket or tunnel, located between the C- and N-terminal domains of PrfA, and the second in the vicinity of the DNA-binding helix-turn-helix motif. At both sites the compound interacts with residues important for PrfA activation and helix-turn-helix formation.
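The PrfA excerpt above describes ITC measurements fitted in the MicroCal/Origin software. As a generic illustration of how a dissociation constant is extracted from titration data (a simplified stand-in, not the Wiseman isotherm that ITC software actually fits to per-injection heats), a minimal sketch with synthetic data follows; all values are made up for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, kd, smax):
    """Single-site binding: fraction-of-maximum signal at free ligand conc L (uM)."""
    return smax * L / (kd + L)

# Synthetic saturation data generated from a known Kd plus noise.
ligand = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])           # uM, synthetic
rng = np.random.default_rng(0)
signal = one_site(ligand, kd=5.0, smax=1.0) + rng.normal(0, 0.02, ligand.size)

# Nonlinear least-squares fit recovers the binding parameters.
(kd_fit, smax_fit), _ = curve_fit(one_site, ligand, signal, p0=[1.0, 1.0])
print(f"fitted Kd = {kd_fit:.1f} uM, Smax = {smax_fit:.2f}")
```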
other explanatory factors tested did not have an impact. Although baseline concentrations influenced individuals’ stress-induced levels of CORT, stress-induced values were not influenced by brood size. Instead, stress-induced levels were higher in chicks whose nest started later in the season. Contrary to our predictions that parental neophobia levels would affect provisioning rates and the levels of developmental stress their offspring experience, we did not find correlations between neophobia, feeding rate, or offspring hormone levels. Although parents’ provisioning rate was the main predictor of chicks’ survival, as found in previous studies, provisioning rate did not correlate with parental neophobia scores, nor with chicks’ body condition and stress levels. However, certain aspects of the rearing environment were associated with chicks’ stress hormone levels. For example, chicks from nests with larger broods had higher baseline CORT levels, and later hatching nests had higher CORT concentrations in response to handling stress, irrespective of chicks’ body condition. Since parents from later hatching nests also were slower to return in both experimental conditions, such a response may indicate either that these parents were more sensitive to nest disturbance, independent of novelty responses, or that they spent less time at their nests generally. Overall, the results reveal the importance of sibling competition and hatching date in contributing to natural variation in stress responses, but suggest that parents’ neophobia has no detectable influence on their reproductive success under the environmental conditions of this study. Fig. 5 provides a graphical illustration of the relationships between parental traits, rearing environments and offspring traits. Although parents’ neophobia scores did not correlate with either the number or condition of their chicks, the scores themselves cannot be dismissed as noise. Neophobia scores and our provisioning rate measures were consistent across the season, with similar repeatability to that reported in studies on other species that have presented novel objects at nest boxes. Given that individual variation across cognitive responses and traits may have important effects on fitness, one might expect this variation to have impacts on reproductive success. However, we found no impact of parental neophobia on either the percentage of hatching chicks that fledged per nest, or the body condition of chicks. Given that jackdaws are known to be more neophobic than other passerine species, such as great tits, it may seem puzzling at first that we found no obvious costs or benefits to this distinctive trait. Neophobia levels are suggested to impact fitness by increasing wariness, and thus survival alongside predators, and by helping with foraging among potentially dangerous resources. This hypothesis relies on there being a high prevalence of predators, or poisonous prey, which could vary as environmental conditions change. Additionally, the same environmental conditions may impact the optimal level of neophobia differently depending on animals’ life stage. For instance, high neophobia increases survival in juvenile, predator-naïve reef fish. Meanwhile, higher parental neophobia is correlated with lower nest survival in great tits, supposedly because more neophobic individuals were less likely to challenge predators and defend their nests. In this way, the same level of neophobia could have different costs and benefits depending on the life stage and the dangers of the environment,
such that neophobia might be beneficial for juveniles who can flee predators but costly for adults when fleeing predators leaves their nests defenseless. Potentially, therefore, neophobia could impact jackdaw fitness or survival at a different life stage or time of year than our breeding success measures capture. One reason why neophobia did not impact reproductive success may be that neophobia did not influence pairs’ combined provisioning rate. Since neophobic behavior involves the psychological appraisal of novelty, neophobia would only aid in acquiring variable food if variability involved novel, not just patchy, resources, or if food were often found near novel objects. Therefore, reactions towards a novel object in a foraging context might be more relevant for fitness consequences than reactions in a nesting context. While object neophobia in corvids is repeatable when tested in the same context and time of year, the consistency of individuals in the wild toward object neophobia tests in different contexts is rarely studied. Moreover, very little is known about how individual variation in object neophobia impacts natural feeding choices in the wild. Since we were unable to measure the extent to which single parents contributed to the pairs’ neophobia score and provisioning rate, it is possible that partners could compensate if one member of the pair was particularly neophobic, and thereby mask connections between neophobia and provisioning. However, as the reproductive output that we measured stemmed from pair-level success, the birds’ combined effort, and hence their combined neophobia, is likely to have the greatest bearing on fitness. Regardless of whether partner compensation was occurring, overall feeding rate did not predict either baseline or stress-induced CORT levels. This null result is surprising because nutritional deficits have been shown to impact CORT hormone levels in other corvids. Since higher feeding rates were associated with increased brood size, and increased brood size predicted elevated baseline CORT levels, the way food was allocated within the nest may explain why feeding rate did not impact CORT. The predictability of a food source, not just the total amount of food available, can influence CORT expression. Having more siblings could decrease the predictability with which any one individual was fed. This effect seemed to impact all chicks within the brood similarly, because we found no direct connection between baseline or stress-induced hormone levels and nestling body condition. An independence between baseline hormone levels and body condition contrasts with findings from studies of other birds. Since elevated baseline CORT encourages chicks to beg more often, long term | Despite its consistency across the breeding season, and suggestions in the literature that it should have importance for reproductive fitness, parental neophobia did not predict nest success, provisioning rates or offspring hormone levels. Parents with lower provisioning rates fledged fewer chicks, chicks from larger broods had elevated baseline CORT levels, and chicks with later hatching dates showed higher stress-induced CORT levels.
increases in baseline CORT may act as an adaptive response to sibling competition, despite the costs that these hormones incur, such as later impacts on spatial memory and immune responses. Although higher levels of baseline CORT have been documented in experimentally enlarged clutches in other species, not all studies with brood manipulations or natural brood variation have found such an effect. These differences between species in the effect of brood size on CORT cannot be explained by differences in hatching asynchrony. Even though it is unclear why larger broods of jackdaws have higher baseline CORT when other species may not, there are likely to be long-term effects of such sibling competition on individuals from larger broods. Rearing conditions also influenced chicks’ stress-induced CORT levels, as later hatching nests had higher stress-induced CORT values. There are two potential explanations for this effect, namely that late season chicks may have had worse parents, or that they may have experienced a different surrounding environment than early breeders. We found that parents from later season nests were slower to return in both control and object test conditions, which could mean that later season parents were more sensitive to disturbances such as a trial setup, or that they generally visited less often. Although nests that were slower to return in test and control conditions were also more likely to have lower provisioning rates, provisioning rate itself did not directly predict stress-induced CORT levels. Instead, later season jackdaws’ reluctance to return to the nest might have been indicative of lower levels of nest attendance. Reductions in nest attendance have been shown to alter stress hormone physiology in nestling Florida scrub-jays, which has been suggested to be the result of the social stress of separation from the mother. Therefore, the parenting of late breeders might be to blame for the increases in stress-induced CORT we found. Alternatively, the hormonal difference might not be due to the characteristics of late breeding parents, but to some type of external stress that impacts late nests disproportionately. Overall, later breeding individuals in many species produce smaller or poorer quality clutches, but whether their poor performance is a result of individual quality is unclear, because timing and quality are often intertwined. Although later nests fledged a similar number and quality of chicks, their elevated stress-induced hormone levels could indicate that late hatching individuals might be on a different developmental trajectory that predisposes them to be more responsive to acute stressors. Although we found no impact of parental neophobia on offspring CORT levels, the variation in baseline and stress-induced CORT that we detected among nestlings could potentially contribute to downstream variation in their stress responses as adults. Since experiencing elevated levels of CORT during development may modify the negative feedback loops of stress hormone expression, the impact of sibling competition and later hatch date may determine how individuals cope with future stressors. Moreover, since the expression of neophobia and CORT are thought to be linked within individuals, and there is evidence that experimentally administering CORT during development increases neophobia later in life, at least in males, differences in the rearing environment might also contribute to variation in neophobia in adulthood. Testing whether or not, for example, chicks in larger broods show
differing levels of neophobia as adults could help determine the long-term consequences of early life stress and help explain why we see variation in neophobia without clear fitness consequences. Investigating the development of individual differences in stress physiology helps explain some of the variation in cognitive traits and stress responses seen in the wild. Neophobia, provisioning rates and CORT were not connected in this study. If this disconnect holds for a number of species, then perhaps we need to re-examine under what ecological conditions neophobia should be favored. Future research needs to determine whether neophobia remains unpredictive of the quality of the rearing environment across a greater diversity of environmental conditions, for example when food is scarce and innovation could be helpful. Also, assessing the fitness consequences of neophobia at other times of year could help identify where neophobia might benefit individuals. Without such assessments, the ecological consequences of individual variation in traits such as neophobia will remain elusive. This work was supported by the Gates Cambridge Trust to ALG and two separate BBSRC David Phillips Fellowships to KAS and AT. | Many species show individual variation in neophobia and stress hormones, but the causes and consequences of this variation in the wild are unclear. Variation in neophobia levels could affect the number of offspring animals produce, and more subtly influence the rearing environment and offspring development. Therefore, measuring offspring stress hormone levels, such as corticosterone (CORT), helps determine if parental neophobia influences the condition and developmental trajectory of young. As a highly neophobic species, jackdaws (Corvus monedula) are excellent for exploring the potential effects of parental neophobia on developing offspring. Instead, sibling competition and poor parental care contributed to natural variation in stress responses. Since CORT levels may influence the expression of adult neophobia, variation in juvenile stress responses could explain the development and maintenance of neophobic variation within the adult population.
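The brood-size and hatch-date effects on CORT summarized above, with chicks nested within nests, are the kind of structure a linear mixed model with a nest-level random intercept handles. Below is a sketch on simulated data; all variable names, sample sizes and effect sizes are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical per-chick data; in the study, CORT would come from blood samples
n_nests, chicks_per_nest = 30, 4
nest = np.repeat(np.arange(n_nests), chicks_per_nest)
brood_size = np.repeat(rng.integers(2, 7, n_nests), chicks_per_nest)
hatch_date = np.repeat(rng.normal(0, 5, n_nests), chicks_per_nest)  # centered (days)
nest_effect = np.repeat(rng.normal(0, 1.0, n_nests), chicks_per_nest)
baseline_cort = 5 + 0.8 * brood_size + nest_effect + rng.normal(0, 1, nest.size)

df = pd.DataFrame(dict(nest=nest, brood_size=brood_size,
                       hatch_date=hatch_date, baseline_cort=baseline_cort))

# Random intercept per nest accounts for siblings sharing a rearing environment
model = smf.mixedlm("baseline_cort ~ brood_size + hatch_date", df, groups=df["nest"])
print(model.fit().summary())
```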
Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analysis followed a variational approach that is only valid for very simple cases, or considered deterministic dynamics and provided open-loop analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions, and show that a closed-loop Nash equilibrium (NE) can be found by solving a related optimal control problem (OCP). This is useful since solving an OCP, which is a single-objective problem, is usually much simpler than solving the original set of coupled OCPs that form the game, which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximate an exact variational NE of the game. | We present general closed-loop analysis for Markov potential games and show that deep reinforcement learning can be used for learning an approximate closed-loop Nash equilibrium.
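A minimal sketch of the property this abstract builds on: in a potential game, a maximizer of the potential function is a Nash equilibrium, so the coupled multi-agent problem collapses into a single-objective one (the static analogue of the related OCP). The two-agent game below (log utilities minus a shared congestion cost) is an assumed toy example, far simpler than the stochastic dynamic games the paper treats:

```python
import numpy as np

c = 0.5  # per-unit congestion cost on the shared resource (assumed)

def reward(i, a):
    """Agent i's reward: private log-utility minus the shared congestion cost."""
    return np.log1p(a[i]) - c * a.sum()

def potential(a):
    """Exact potential: its gradient w.r.t. a[i] equals d reward_i / d a[i]."""
    return np.log1p(a).sum() - c * a.sum()

# Solve the single-objective problem (the analogue of the related OCP):
# gradient ascent on the potential instead of the coupled best responses.
a = np.array([0.1, 0.1])
for _ in range(2000):
    grad = 1.0 / (1.0 + a) - c
    a = np.maximum(a + 0.05 * grad, 0.0)

print("joint action:", a)  # analytic optimum: 1/c - 1 = 1.0 for each agent

# Check the Nash property: unilateral deviations must not raise own reward
for i in range(2):
    for d in (-0.1, 0.1):
        b = a.copy(); b[i] += d
        assert reward(i, b) <= reward(i, a) + 1e-6
```

Gradient ascent on the potential converges to a_i = 1/c - 1 for both agents, and the final assertions confirm that no unilateral deviation improves either agent's own reward, which is exactly the reduction the closed-loop analysis exploits.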
Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization. Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks. The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample-average nature of empirical risk minimization. There exist, however, many scenarios where an unbiased estimator may be as expensive to compute as the full gradient, because training examples are interconnected. Recently, Chen et al. proposed using a consistent gradient estimator as an economic alternative. Encouraged by empirical success, we show, in a general setting, that consistent estimators result in the same convergence behavior as do unbiased ones. Our analysis covers strongly convex, convex, and nonconvex objectives. We verify the results with illustrative experiments on synthetic and real-world data. This work opens several new research directions, including the development of more efficient SGD updates with consistent estimators and the design of efficient training algorithms for large-scale graphs. | Convergence theory for biased (but consistent) gradient estimators in stochastic optimization, with application to graph convolutional networks.
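A toy sketch of the distinction the abstract draws: below, the gradient estimator plugs a sample mean into a nonlinear function, so it is biased for any finite sample size s (E[xbar^2] = mu^2 + sigma^2/s) but consistent as s grows. The setup is an assumed one-dimensional example, not the graph-structured setting of Chen et al.:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 1.0          # data distribution parameters (for the demo)
target = mu**2                # optimum of f(w) = 0.5 * (w - E[x]^2)^2

def consistent_grad(w, s):
    """Plug-in gradient estimator: biased for finite s, consistent as s grows."""
    xbar = rng.normal(mu, sigma, size=s).mean()
    return w - xbar**2

def sgd(sample_size, steps=4000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        w -= lr * consistent_grad(w, sample_size)
    return w

for s in (1, 10, 100):
    print(f"s={s:4d}  w={sgd(s):.3f}  target={target:.3f}")
```

Running it shows SGD settling near mu^2 + sigma^2/s rather than mu^2, with the gap shrinking as s increases, which mirrors the convergence behavior the paper analyzes for consistent estimators.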
tissues stored in BABB. Alexa Fluor® fluorescence was found to be stable for a few months in BABB. In addition, differences in the effect of dehydration with ethanol or methanol have been described. Parra et al. proposed using ethanol for better preservation of fluorescence signals. Becker et al. and Jährling et al. corroborate these findings and use ethanol for dehydration. In our study, methanol was used for clearing of gingiva samples and we did not find any loss of fluorescence signals, irrespective of whether the iDISCO or BABB protocol was followed. Clearing with organic solvents has the disadvantage of toxicity and aggressiveness. Organic clearing solutions have to be handled carefully. Parra et al. found that BABB can dissolve glue. Therefore, application of BABB and DBE in Petri dishes and coverslips was tested first before imaging gingiva. Our findings were in agreement with those of Parra et al., as BABB and DBE indeed dissolved glue. As an alternative, dental cement was tried for imaging. The ring formed on a coverslip did not dissolve, but did not stick properly to the glass either. Therefore, this approach could not be used. However, a metal ring around the tissue sample solved the problem for confocal microscopy, whereas light-sheet microscopes have a chamber especially designed for solutions like DBE and BABB. Imaging is only successful when the appropriate imaging solution is used. When the refractive index (RI) of the tissue differs from that of the imaging solution, light is scattered, which blurs the image and limits the penetration of the laser light into the tissue sample. Glycerol cannot be used as imaging solution because, according to Richardson and Lichtman, the RI of the imaging solution differs too much from that of the tissue, the glass coverslips and the immersion oil. Moreover, glycerol is viscous, and bubbles are introduced easily during handling, which makes imaging impossible. Finally, glycerol is hydrophilic, while BABB and DBE are hydrophobic, and glycerol does not penetrate tissue that has been cleared with DBE. The best results were obtained when imaging the tissue in its clearing solution. Tissues that were cleared in BABB can also be imaged in DBE as imaging solution, because the RIs of both solutions are nearly identical. For tissue cleared in aqueous solutions, imaging solutions other than glycerol have been proposed. FocusClear has been used by Chung et al. to image tissue that has been cleared with CLARITY. It contains DMSO, diatrizoate acid and other reagents, but the exact composition is proprietary. As a result, FocusClear is expensive and cannot be optimized for tissues other than brain. Although it has been described for imaging tissue using a confocal microscope, it was not necessary to test imaging with FocusClear in the present study. Marx also reported that FocusClear is not necessary. To summarize our review of methods, particularly for clearing human extracellular matrix-rich tissue, the best results are obtained when these tissues are cleared with BABB or iDISCO. The authors declare that they have no conflict of interest. There is no funding to report.
| For 3-dimensional (3D) imaging of a tissue, 3 methodological steps are essential, and their successful application depends on specific characteristics of the type of tissue. The steps are 1° clearing of the opaque tissue to render it transparent for microscopy, 2° fluorescence labeling of the tissues and 3° 3D imaging. In the past decades, new methodologies were introduced for the clearing steps, with their specific advantages and disadvantages. Most clearing techniques have been applied to the central nervous system and other organs that contain relatively low amounts of connective tissue including extracellular matrix. The present survey lists methodologies that are available for clearing of tissues for 3D imaging. We report here that the BABB method, using a mixture of benzyl alcohol and benzyl benzoate, and iDISCO, using dibenzylether (DBE), are the most successful methods for clearing connective tissue-rich gingiva and dermis of skin for 3D histochemistry and imaging of fluorescence using light-sheet microscopy.
intervention group, and 43 of 366 in the control subsample, had a urinary tract infection at initial screening, corresponding to prevalences of 8·6% and 11·8%, respectively. In the intervention group, 271 participants with urinary tract infections started antibiotics and 251 completed the full course. A test-of-cure urine specimen was obtained from 244, 47 of whom had persistent infections. Among the 216 participants who completed the full initial antibiotic course and underwent rescreening, 153 were cured. Overall, the effective coverage of successful treatment of urinary tract infection was 70·7% after two antibiotic courses. The frequency of treatment and resolution were broadly similar in the control subsample. At initial screening, 73 of 3319 participants in the intervention group were co-infected with abnormal vaginal flora and urinary tract infections. The distribution of mean gestational age was similar between the intervention and control groups. A cluster-level analysis of median gestational age similarly showed no difference between groups. The incidence of preterm livebirths of less than 37 weeks' gestation, preterm livebirths of less than 34 weeks' gestation, or preterm deliveries including late miscarriage and stillbirth did not differ significantly between groups. Sensitivity analysis showed that inclusion of birth outcomes for pregnancies of less than 20 weeks' gestation did not affect outcomes. Adjustment for covariates that seemed slightly imbalanced between groups also did not affect these estimates. In exploratory post-hoc analyses, the risk of preterm delivery was significantly higher among women and girls with persistent abnormal vaginal flora than among non-infected participants. Preterm delivery occurred in 72 of 202 participants with persistent abnormal vaginal flora, compared with 839 of 3472 non-infected participants. The frequency of delivery before 34 weeks' gestation was also higher among those with persistent abnormal vaginal flora than among those who were not infected. In participants who were diagnosed with abnormal vaginal flora, completed antibiotic treatment, and had documented cure, the risk of preterm birth before 37 weeks' and 34 weeks' gestation was similar to that in uninfected participants. Rates of late miscarriage, late fetal deaths, stillbirth, neonatal mortality, and perinatal mortality did not differ between groups. Infant weight was measured within 72 h of birth for 2461 of 3818 infants in the intervention group and 2268 of 3557 infants in the control group. Mean weight did not differ significantly between groups. The frequency of infants with low birthweight or who were small for gestational age did not differ significantly between groups.
| Background: One-third of preterm births are attributed to pregnancy infections. We implemented a community-based intervention to screen and treat maternal genitourinary tract infections, with the aim of reducing the incidence of preterm birth. Eligible participants within clusters were all ever-married women and girls of reproductive age (ie, aged 15–49 years) who became pregnant during the study period. Clusters were randomly assigned (1:1) to the intervention or control groups via a restricted randomisation procedure. In both groups, community health workers made home visits to identify pregnant women and girls and provide antenatal and postnatal care. Between 13 and 19 weeks' gestation, participants in the intervention group received home-based screening for abnormal vaginal flora and urinary tract infections. A random 10% of the control group also received the intervention to examine the similarity of infection prevalence between groups. Both infections were retreated if persistent. The primary outcome was the incidence of preterm livebirths before 37 weeks' gestation among all livebirths. The trial is closed to new participants, with follow-up completed. Findings: Between Jan 2, 2012, and July 28, 2015, 9712 pregnancies were enrolled (4840 in the intervention group, 4391 in the control group, and 481 in the control subsample). 3818 livebirths in the intervention group and 3557 livebirths in the control group were included in the primary analysis. In the intervention group, the prevalence of abnormal vaginal flora was 16.3% (95% CI 15.1–17.6) and that of urinary tract infection was 8.6% (7.7–9.5). The effective coverage of successful treatment in the intervention group was 58% in participants with abnormal vaginal flora (ie, abnormal vaginal flora resolved in 361 [58%] of the 622 participants who initially tested positive), and 71% in those with urinary tract infections (ie, resolution in 224 [71%] of the 317 participants who initially tested positive). Overall, the incidence of preterm livebirths before 37 weeks' gestation did not differ significantly between the intervention and control groups (21.8% vs 20.6%; relative risk 1.07 [95% CI 0.91–1.24]). Interpretation: A population-based antenatal screening and treatment programme for genitourinary tract infections did not reduce the incidence of preterm birth in Bangladesh. Funding: Eunice Kennedy Shriver National Institute of Child Health and Human Development and Saving Lives at Birth Grand Challenges.
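For readers who want to reproduce the headline comparison, the sketch below computes a crude relative risk with a Wald confidence interval from event counts back-calculated from the reported percentages (so the counts are approximate). It deliberately ignores the cluster randomisation, which is why its interval comes out narrower than the cluster-adjusted 0.91–1.24 reported in the summary:

```python
import numpy as np

def relative_risk(events_1, n_1, events_0, n_0, z=1.96):
    """Crude relative risk with a Wald 95% CI on the log scale.
    Ignores clustering, so the CI is narrower than a cluster-adjusted
    analysis such as the one reported for this trial."""
    rr = (events_1 / n_1) / (events_0 / n_0)
    se = np.sqrt(1/events_1 - 1/n_1 + 1/events_0 - 1/n_0)
    lo, hi = rr * np.exp(-z * se), rr * np.exp(z * se)
    return rr, lo, hi

# Approximate counts back-calculated from the reported 21.8% vs 20.6%
rr, lo, hi = relative_risk(events_1=832, n_1=3818, events_0=733, n_0=3557)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```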
and samples stored with EOs+β-CD ice. Hence, EOs+β-CD ice did not have any adverse effect on the sensory acceptability of seabream. Similar results were found in cooked seabream. Control seabream received the worst evaluations from the panellists over cold storage. Neither odour nor taste of the cooked seabream slaughtered and stored with EOs+β-CD ice was negatively affected. The panellists did not detect unpleasant odours or taste due to EOs aroma. Thus, the use of EOs+β-CD ice in stunning/slaughtering and ice storage does not affect the sensorial characteristics of fresh and cooked seabream when low doses are used, such as those handled in this work. Our previous work showed that using 10 and 15 mg CEO+β-CD crushed ice during stunning/slaughtering promoted a decrease in plasmatic glucose levels, which confirms that CEO+β-CD ice decreases stress in fish at slaughtering. Furthermore, the results of the current work showed that application of CEO+β-CD crushed ice during stunning/slaughtering, with or without using CBG+β-CD crushed ice during ice storage, improved the quality and freshness of seabream and extended its shelf-life. The application of EOs+β-CD improved microbiological and some chemical parameters over the storage time. Seabream slaughtered or stored in ice including encapsulated EOs showed lower microbiological and chemical values due to the antimicrobial and antioxidant properties of the essential oils. In fact, the microbiological shelf-life was established at 18 days for seabream from the control treatment, while seabream slaughtered or stored in antimicrobial ice had an extended shelf-life. Use of antimicrobial ice extended the shelf-life of seabream stored at 2 °C by between 4 and 6 days compared with seabream from the control treatment. According to the results of the sensory analyses, up to 15 days all the conditions were determined as fresh fish, but on day 17 control seabream was no longer acceptable for consumption. Antimicrobial CBG+β-CD crushed ice avoids unpleasant sensorial attributes and improves sensorial acceptability. Thus, the application of CEO+β-CD during stunning/slaughtering, in combination with ice storage of seabream, did not produce off-flavour or off-odour along the storage time. Antimicrobial ice including CBG+β-CD can therefore be a good option for the fishing industry to extend the shelf-life of marine species, due to its beneficial effects and the low price of EOs+β-CD crushed ice. Laura Navarro-Segura, Amanda E López Cánovas, Isabel Cabas: Performed the experiments; Analyzed and interpreted the data. María Ros-Chumillas: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Garcia Ayala Alfonsa: Analyzed and interpreted the data; Wrote the paper. Antonio Lopez Gomez: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. This work was funded by PESCAMUR SL Company and CDTI, Project Number: IDI-20150100, and co-financed by the European Regional Development Fund through the Pluriregional Operational Programme for Intelligent Growth. The authors declare no conflict of interest. No additional information is available for this paper.
| Ice containing essential oils (EOs) nanoencapsulated in β-cyclodextrins (β-CD) (termed EOs+β-CD ice) was used for stunning/slaughtering by hypothermia in ice slurry, and for ice storage of gilthead seabream. Clove essential oil (CEO) was used at fish stunning/slaughtering, while ice storage of whole fish was performed using a combination of carvacrol, bergamot and grapefruit EOs (CBG). Inclusion complexes of CBG+β-CD were characterized, and their antimicrobial effect was also evaluated. The kneading method used to form inclusion complexes with CBG showed a good complexation efficiency. Microbial, physical-chemical and sensory analyses were carried out to assess the quality changes of fresh whole seabream during ice storage at 2 °C for 17 days. Results (microbial, chemical and sensorial) indicated that seabream stunning/slaughtering and storage using EOs+β-CD ice (in low doses of 15 mg/kg ice for stunning, and 50 mg/kg ice for ice storage) improved the quality of fresh fish and extended the shelf-life by up to 4 days.
Although honesty is regarded as a virtue or even a moral duty, lying and deception permeate economic life. Studying truth-telling has accordingly become a focus of inquiry for economics. An area of particular public economic importance is the truth-telling of economic agents towards their regulating authorities, from the banking industry and tax reporting to environmental regulation. The case where the German car manufacturer Volkswagen systematically lied about cars' emissions is but one prominent example. Faced with uncertainty about how honest economic agents are, regulators need to decide how much to invest in monitoring and how to devise appropriate sanctioning schemes for misbehavior. Appropriate monitoring and sanctioning mechanisms are especially crucial for the management of common pool resources, with the fishery as a prime example. Fishery management comes in many different forms around the globe. It ranges from stringent restrictions on fish catches using individual transferable quotas, as in New Zealand or Iceland, to largely unregulated open-access fishing, as is still the case for most high-seas fisheries. The costs of illegal, unreported and unregulated fishing are substantial and amount to US$ 10 to 23 billion per year. Due to its economic importance and the heterogeneity of its regulatory structures, the fishery has gained substantial interest in experimental economic work. This paper extends the scope of previous studies and investigates to what extent regulator framing affects truth-telling. Our study therefore adds a new dimension to effective regulatory policy. We present evidence from an artefactual mail field experiment that examines truth-telling of German commercial fishermen. German commercial fishing is regulated by the European Union, which is the world's fourth largest producer of fish, under the European Common Fisheries Policy. The EU has recently enacted a ban on returning unwanted fish catches to the sea, as the practice of discarding entails substantial costs for the public. The change in legislation has, as of yet, not been combined with more stringent monitoring. The regulator, and scientists assessing the status of fish stocks upon which recommendations for fishery management are based, thus depend on fishermen's truth-telling. Continuing to discard unwanted fish catches remains the individually optimal choice for fishermen in the present regulatory regime unless the regulator enforces the new policy. This, however, would require costly monitoring and sanctioning mechanisms. This trade-off for the regulator between more costly monitoring and reliance on regulatees' honesty is not only relevant in the fishery for the newly enacted European “discard ban” or compliance with fishing quotas, but is present more generally, including in the previously discussed cases of banking, tax reporting and environmental regulation. For studying to what extent fishermen might tell the truth towards their regulator, we conduct a coin-tossing game in a mail field experiment targeting all commercial fishermen in Germany. Adapting the 4-coin toss game of Abeler et al., we ask fishermen to toss a coin 4 times and report back their number of tail tosses. For each reported tail toss, they receive five Euros. In a between-subjects design, we test whether truth-telling in a baseline setting differs from truth-telling in two further treatments with different EU framings, where, first, the EU flag is made salient on the instruction sheet, and, second, a framing that states additionally that the
European Commission has funded the research. Based on a simple model of fishermen's reporting behavior that considers internal Nash bargaining between a payoff-maximizing ‘selfish self’ and a ‘moral self’, we hypothesize that the salience of the EU regulator may increase the bargaining power of the ‘selfish self’ vis-à-vis the ‘moral self’ and thus decrease overall lying costs if the EU is ill-regarded. The fishery is an ideal test case for studying how truth-telling behavior may be affected by regulatory framing, as there is well-documented and wide-spread contempt among fishermen concerning stricter EU fishing regulation. We confirm the almost entirely negative view of the EU prevalent among European fishermen for our field experimental setting in Germany: besides ample anecdotal evidence, our survey results indicate that the vast majority of participating fishermen have low trust in the EU, while this is only the case for about a third of a student control group. If regulator framing impacts truth-telling, we will therefore expect an almost uniform direction of the effect. To study the robustness of our findings, we conduct a similar experiment with a population that is similar to the fishermen with respect to its stance towards the EU: Brexiteers. Using a large sample of UK citizens, we conduct the experiment with 1200 individuals who reported having voted ‘leave’ in the Brexit referendum in previous questionnaires. We find that fishermen misreport coin tosses to their advantage, albeit to a lesser extent than standard theory predicts. As hypothesized, misreporting is larger among fishermen who are faced with the EU flag. Our main effect is supported by further regression analyses. We also find some support for our main result in the conceptual replication with Brexiteers. Furthermore, we find that misreporting by fishermen is consistent with behavior in other hidden tasks, involving a sent-along coin and the possibility to cheat in a competition task, suggesting some more general validity of the coin toss findings. Overall, our results imply that lying is more extensive towards an ill-regarded regulator. We close by discussing the further policy relevance of our results. The fishery has economic relevance in the German coastal regions at both the North Sea and Baltic Sea. According to the European Union's Common Fisheries Policy, the Council of Ministers of the European Union and the European Parliament set fishing quotas for the German fisheries. The German Federal Office for Agriculture and Food distributes the national catch quotas to fishing organizations or individual | Understanding what determines the truth-telling of economic agents towards their regulator is of major economic importance, from banking to the management of common-pool resources such as European fisheries. By enacting a discard ban on unwanted fish catches without increasing monitoring activities, the European Union (EU) depends on fishermen's truth-telling. Using a coin-tossing task in an artefactual mail field experiment with 120 German commercial fishermen, we test whether truth-telling in a baseline setting differs from behavior in two treatments that exploit fishermen's widespread ill-regard of their regulator, the EU. We find, first, that fishermen misreport coin tosses more strongly to their advantage in a treatment where they are faced with the EU flag, and, second, that misreporting is consistent with behavior in other hidden tasks.
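The coin-tossing design supports a simple aggregate honesty test: under full honesty, reported tail counts follow Binomial(4, 0.5) with mean 2. The sketch below runs a chi-square goodness-of-fit test and a one-sided mean comparison on hypothetical reported counts; the distribution shown is invented for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical reported tail counts (0-4) from n participants; under full
# honesty these follow Binomial(4, 0.5), i.e. a mean of 2 tails per person
reported = np.array([0]*3 + [1]*10 + [2]*25 + [3]*35 + [4]*27)

expected_p = stats.binom.pmf(np.arange(5), n=4, p=0.5)
observed = np.bincount(reported, minlength=5)

chi2 = stats.chisquare(observed, expected_p * reported.size)
print(f"chi2 = {chi2.statistic:.1f}, p = {chi2.pvalue:.2g}")

# One-sided check that the mean report exceeds the honest expectation of 2
t = stats.ttest_1samp(reported, popmean=2.0, alternative="greater")
print(f"mean report = {reported.mean():.2f}, one-sided p = {t.pvalue:.2g}")
```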
results also extend to other settings. The experiment with fishermen did not confirm our initial hypothesis 3, that fishermen report less truthfully in EU_Flag_Funding compared to the EU_Flag treatment. Our expectation was that fishermen may regard the additional informational cue as an indication that there is plenty of funding available to those conducting the study, and that this may reduce the moral cost of lying, such that fishermen would report less truthfully. This effect is not apparent in the experiment with fishermen. In the regression analysis, the EU_Flag_Funding treatment variable does not explain any difference in reported tail tosses compared to the baseline treatment. The conceptual replication with Brexiteers includes a treatment where we show the information that this study is funded by public funds, but without the EU flag, i.e. omitting reference to the regulator. This treatment leads to increased misreporting in a similar way as the EU_Flag treatment. Indeed, it might be that the additional information box about funding makes the wealth of the funding institution more salient, such that taking money may appear permissible to the Brexiteers. This would be in line with our initial hypothesis 3 formulated for the experiment with fishermen. However, the conceptual replication did not provide an explanation of why fishermen were more honest in the EU_Flag_Funding treatment than in the EU_Flag treatment. One mechanism that may seem plausible is that fishermen may have considered the joint information on research funding and the EU flag as an indication that the EU is using money to survey fishermen. As this was a mail experiment, fishermen may even have taken the opportunity to discuss these issues with relatives or family members, which may have reinforced such a view. This may have led to more support for the regulator, rendering the responses statistically indistinguishable from the baseline treatment. However, this is just one possible explanation and there may be others. Our findings show that the ill-regard of the regulator is not the only effect present, and that other mechanisms may offset the dislike effect. It is an interesting question for future research whether and how the regulator can approach regulated individuals in such a different kind of way to offset the negative attitude and thus increase truth-telling behavior. Moreover, we find evidence suggesting some consistency of behavior between the coin-tossing task and two other measures of truth-telling or lying behavior. This finding is based on two hidden tasks in the experiment (leaving the ownership of a coin to flip ambiguous, and using an additional task in which it was possible to provide more material than was supplied) that may be of use for experimental methodology beyond our specific context to investigate the external validity of standard lying tasks. Overall, our findings imply that regulators not only have to consider some exogenous degree of dishonesty among the regulated, but also have to take into account that truth-telling may erode in reaction to the regulatory policy. Faced with a variable degree of dishonesty, the regulator can act strategically in adapting its regulatory approach, such as shifting part of the regulatory work to bodies that are closer to the regulated, thus considering how the regulated will adapt their behavior. Whereas the substantial number of fishermen who likely report honestly might suggest that softer monitoring approaches could be sufficient, the strategic aspect of regulatory experience calls for a more
deliberate approach. One possible solution for coping with this strategic dimension of dishonesty would be to choose the ‘corner solution’ of comprehensive control. In practice, this would mean a monitoring scheme relying on on-board observers or camera systems. However, instead of directly incurring the high costs to the regulator and fishermen of comprehensive control, our recommended approach would be to introduce monitoring of different degrees of stringency selectively, to study the effects of monitoring on honesty. Overall, our findings imply that lying is more extensive towards an ill-regarded regulator and that policy needs to account for this endogenously eroding honesty base. Studying this new dimension of truth-telling in further detail is a promising avenue for future research. | Our findings imply that lying is more extensive towards an ill-regarded regulator and that policy needs to account for this endogenously eroding honesty base.
IVZ specimens exhibited framework fracture, and micro-CT images revealed fractures from the surface of the pontic to the connector area. The MC specimens exhibited veneer fracture at the porcelain/metal interface or within the porcelain. This study investigated the effects of veneering material and framework design on the fracture load of implant-supported IVZ. Median fracture load values were significantly higher for the IVZ specimens than for the PVZ specimens in both UT and AD groups. In addition, the AD group had higher median fracture loads than the UT group. Thus, the present results support rejection of the null hypothesis that the fracture load of implant-supported zirconia bilayered prostheses would not be related to the veneering material used or the zirconia framework design. The median fracture loads of all tested groups were greater than 1.38 kN, which exceeds the maximum molar masticatory force of up to 0.92 kN. Moreover, median fracture loads in the IVZ specimens were greater than or similar to those in the MC specimens, the gold standard for implant-supported FDPs. The findings indicate that the implant-supported IVZ assessed in this study are clinically feasible. The present results suggest that, in relation to fracture resistance, indirect composite materials are a useful alternative to feldspathic porcelain as a layering material for implant-supported zirconia FDPs. This finding is likely attributable to the material characteristics of indirect composite resin, including modulus of elasticity and hardness. The lower elastic modulus of composite materials improves shock absorption and reduces impact force and stress on implant-supported FDPs. Although no studies have investigated the fracture resistance of IVZ, the present findings agree with the results of previous studies of the fracture resistance of indirect composite-veneered zirconia single prostheses. On the other hand, composite materials have drawbacks such as insufficient wear resistance and plaque accumulation. Hence, further studies on the stability of IVZ should be conducted. Interestingly, the fracture pattern of implant-supported IVZ differed by framework design. The IVZ specimens exhibited veneer fracture in the UT group, probably because of insufficient bond strength of the veneering material to zirconia. Previous studies reported that the bond strength between an indirect composite material and zirconia was lower than that of feldspathic porcelain. In contrast, framework fracture was seen in the AD group of the IVZ specimens, probably because the anatomic framework design provided support for the veneering material and thus prevented veneer fracture in the FDPs tested. Fracture load values were greater for the anatomic framework design than for the uniform thickness design. This finding is consistent with those of studies of fracture resistance in zirconia single prostheses. It may be that vertical and lateral occlusal stresses are eliminated by optimal support of veneers in the anatomic framework design. The difference in the dimensions of the zirconia frameworks between the UT and AD groups is another possible explanation. This is a limitation of this study. In the IVZ specimens, however, the difference in fracture pattern between the UT and AD groups suggests that the anatomic framework is effective for fracture resistance. This is supported by findings that the maximum stress was located at the connector areas of the framework of zirconia FDPs for the anatomical design of zirconia frameworks. Therefore, the present
results indicate that, to ensure adequate support of veneers and thickness of veneering materials, an anatomic zirconia framework design is recommended for implant-supported zirconia-based FDPs. Future studies should attempt to confirm the present results and develop clinical protocols that ensure the long-term stability of implant-supported IVZ. Indirect composite materials appear to be an alternative to feldspathic porcelain as the layering material for implant-supported zirconia FDPs. The AD group had higher fracture loads than the UT group. The present implant-supported IVZ appear to be clinically feasible. | Purpose: To determine the effect of veneering material and framework design on fracture loads of implant-supported zirconia molar fixed dental prostheses (FDPs). Methods: Sixty-six zirconia FDPs were manufactured onto two implants and classified as uniform thickness (UT) or anatomic design (AD). These framework design groups were then further divided into three subgroups (n = 11): feldspathic porcelain-veneered zirconia FDPs (PVZ), indirect composite-veneered zirconia FDPs (IVZ), and metal–ceramic FDPs (MC). The FDPs were luted on the implant abutments and underwent fracture load testing. Results: For the UT group, median fracture load was significantly higher for the IVZ (1.87 kN) and MC (1.90 kN) specimens than for the PVZ specimens (1.38 kN) (p < 0.05). In the AD group, the IVZ specimens had the highest median fracture load (4.10 kN) of the three groups tested. The AD group exhibited higher median fracture loads than the UT group in all subgroups. Conclusions: Indirect composite appears to be a useful alternative to feldspathic porcelain as the layering material for implant-supported zirconia FDPs. The AD group had higher fracture loads than the UT group. In addition, implant-supported indirect composite-veneered zirconia-based FDPs appear to be clinically feasible.
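The paper does not state which tests produced the reported p < 0.05 comparison of medians; one plausible nonparametric approach for subgroups of n = 11, sketched below on synthetic fracture loads centred near the reported medians, is a Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical fracture loads (kN), n = 11 per subgroup as in the study design
pvz = rng.normal(1.38, 0.15, 11)
ivz = rng.normal(1.87, 0.20, 11)
mc  = rng.normal(1.90, 0.20, 11)

# Omnibus test across the three veneering groups, then a pairwise follow-up;
# nonparametric tests suit the small samples and the reported medians
h = stats.kruskal(pvz, ivz, mc)
print(f"Kruskal-Wallis: H = {h.statistic:.2f}, p = {h.pvalue:.3g}")

u = stats.mannwhitneyu(ivz, pvz, alternative="two-sided")
print(f"IVZ vs PVZ: U = {u.statistic:.0f}, p = {u.pvalue:.3g}")
```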
the following fluorophore-coupled anti-human antibodies: IgG-APC, IgM-BV605, CD19-BV421, CD20-BV421, CD3-PerCP-Cy5.5, CD14 PerCP-Cy5.5, CD335 PerCP-Cy5.5, CD606 PerCP-Cy5.5 and 1:20 Streptavidin-PE coupled BG505.SOSIP.664 mix described above. Staining was performed for 30 mins at 4°C. Sorting was done on a FACS Aria II. The gating first included singlets, followed by exclusion of unwanted cells, selection for B cells and finally sorting of single IgG+ PE+ cells into 96-well plates containing lysis buffer. B cell antibody genes were amplified and Sanger sequenced. Antibody sequences were analyzed using both IgBLAST and the international ImMunoGeneTics information system. Sequences of interest were cloned into human Igγ1-, Igκ-, or Igλ-expression vectors by SLIC as described above. The IgVH4*59*01 Homo sapiens allele sequence was obtained from the international ImMunoGeneTics information system. Antibody heavy chain nucleotide sequences of the SF family were aligned with the IgVH4*59*01 sequence in Geneious R8 using ClustalW. The maximum-likelihood tree was generated using the RAxML plugin with a GTR Gamma model using the ‘Rapid Bootstrapping and search for best-scoring ML tree’ function with 100 bootstrap replicates. The best-scoring ML tree was then formatted using FigTree. 293-6E cells were maintained in Freestyle 293 Expression Medium containing 0.2% Penicillin-Streptomycin. Paired heavy and light chain expression constructs were transfected into 293-6E cells using branched polyethylenimine 25 kDa. After 7 days of culture, cells were spun down at 4200 g for 40 mins at 4°C and supernatants were filtered through 0.22 μm aPES. Antibodies were then purified from filtered supernatants using Protein G Sepharose 4 Fast Flow according to standard protocols. Antibodies were buffer exchanged and concentrated into PBS using Amicon Ultra centrifugal filters with either a 30 or 50 kDa molecular weight cutoff. Wild-type and mutant His-tagged YU2 gp120/gp140 proteins were expressed by transient transfection of 293-6E cells and purified using Ni-NTA according to the manufacturer's instructions. Corning Costar 96-Well Assay high-binding plates were coated for 1 h at 37°C with 2 μg/ml of the respective protein using a volume of 50 μl/well. Plates were washed 6x using PBS-Tween20, and subsequently blocked using 3% BSA in PBS for 1 h at 37°C. After washing, serially-diluted antibodies were added at 50 μl/well in 1% PBS/BSA and incubated for 1 h at room temperature or 37°C. After another wash step, anti-human IgG was added at 1:5000 in 1% PBS/BSA for 30 mins at 37°C. Development was done using 100 μl/well ABTS 1-Step Solution, and absorbance was measured at 405 nm on a FluoStar Omega or 415 nm on a Tecan Sunrise. Corning Costar 96-Well Assay high-binding plates were coated overnight at room temperature or for 1 h at 37°C with 2 μg/ml anti-His-tag antibody in PBS. Plates were washed 6x using PBS-Tween20, and subsequently blocked using 3% BSA in PBS or 2% milk powder in PBS for 1 h at 37°C. After washing, purified BG505 SOSIP.664-His was added at 2 μg/ml in 1% BSA in PBS, and incubated for 1 h at 37°C, followed by another washing step. Next, serially-diluted antibodies were added at 50 μl/well in 1% PBS/BSA, and incubated for 1 h at room temperature or 37°C. After washing, anti-human IgG was added at 1:5000 in 1% BSA in PBS for 30 mins at 37°C. Post washing, development was done using 100 μl/well ABTS 1-Step Solution, and absorbance was measured at 405 nm on a FluoStar Omega or 415 nm on a Tecan Sunrise. Antibodies SF5 and SF12 were biotinylated
using the FluoReporter Mini-Biotin-XX Protein Labeling Kit. Corning Costar 96-Well Assay high-binding plates were coated overnight at room temperature or for 1 h at 37°C with 2 μg/ml anti-His-tag antibody in PBS. Plates were washed 6x using PBS-Tween20, and subsequently blocked using 3% BSA in PBS for 1 h at 37°C. After washing, BG505 SOSIP.664 was added at 2 μg/ml in 1% BSA in PBS, and incubated for 1 h at 37°C, followed by another washing step. Next, serially-diluted competitor antibodies were added at 50 μl/well in 1% PBS/BSA and incubated for 1 h at room temperature. Plates were washed, and biotinylated SF5 or SF12 was added at 0.5 μg/ml and incubated for 1 h at room temperature. After another wash step, Streptavidin-HRP was added at 50 μl/well in 1% PBS/BSA for 30 mins at room temperature. Development was done using 100 μl/well ABTS 1-Step Solution, and absorbance was measured at 405 nm on a FluoStar Omega or 415 nm on a Tecan Sunrise. The accession numbers for the nucleotide sequences of SF-family members are GenBank: MK722158–MK722171. The accession numbers for the cryo-EM reconstructions of the SF12–B41–10-1074 complexes comprising three or two SF12 Fabs are Electron Microscopy Data Bank: EMD-20100 and EMD-20101, respectively. The accession numbers for the coordinates of the atomic models of the cryo-EM SF12–B41–10-1074 complex and the unliganded SF12 Fab crystal structure are Protein Data Bank: PDB 6OKP and PDB 6OKQ, respectively. | Broadly neutralizing antibodies (bNAbs) against HIV-1 envelope (Env) inform vaccine design and are potential therapeutic agents. We identified SF12 and related bNAbs with up to 62% neutralization breadth from an HIV-infected donor. SF12 recognized a glycan-dominated epitope on Env's silent face and was potent against clade AE viruses, which are poorly covered by V3-glycan bNAbs. A 3.3 Å cryo-EM structure of a SF12-Env trimer complex showed additional contacts to Env protein residues by SF12 compared with VRC-PG05, the only other known donor-derived silent-face antibody, explaining SF12's increased neutralization breadth, potency, and resistance to Env mutation routes. Asymmetric binding of SF12 was associated with distinct N-glycan conformations across Env protomers, demonstrating intra-Env glycan heterogeneity. Administering SF12 to HIV-1-infected humanized mice suppressed viremia and selected for viruses lacking the N448gp120 glycan. Effective bNAbs can therefore be raised against HIV-1 Env's silent face, suggesting their potential for HIV-1 prevention, therapy, and vaccine development. VRC-PG05 was the only donor-derived antibody against the silent face (SF) of HIV-1 envelope described to date. The authors identify the antibody SF12 and its relatives, which recognize the center of the SF with a different angle and more extensive protein recognition than VRC-PG05, thereby achieving substantial neutralizing ability and potential for clinical use.
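Titration ELISAs like the ones described above are typically summarized by fitting a four-parameter logistic curve to the absorbance readings and reporting the EC50. The protocol above does not report its curve-fitting procedure, so the sketch below is a generic illustration with scipy, using invented dilution and A405 values:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ec50, hill):
    """Four-parameter logistic, the standard model for ELISA titration curves."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

# Hypothetical serial dilution (ug/ml) and A405 readings for one antibody
conc = np.array([10, 3.3, 1.1, 0.37, 0.12, 0.041, 0.014, 0.0046])
a405 = np.array([2.1, 2.0, 1.8, 1.3, 0.75, 0.35, 0.18, 0.12])

popt, _ = curve_fit(four_pl, conc, a405, p0=(0.1, 2.1, 0.3, 1.0), maxfev=5000)
print(f"EC50 = {popt[2]:.3f} ug/ml, Hill = {popt[3]:.2f}")
```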
The authors declare that there are no conflicts of interest. Scaling and temporal adjustment of precision grip force is a highly efficient skill in everyday life. While grasping an object, healthy subjects precisely scale the applied grip force to match the load defined by physical object properties, such as weight and shape, as well as dynamic properties such as inertia. Neural implementation of precision grip force control is embedded in a complex network involving pre-motor cortical areas, the cerebellum and sub-cortical structures, particularly the basal ganglia. Neuroimaging studies have shown that the basal ganglia are involved in both predictive aspects of grip force control and parameterization of grip force scaling. In Parkinson's disease (PD), a distinction between dynamic grip force control and grip force scaling is observed: whereas temporal aspects of dynamic grip force control are relatively preserved, grip force scaling is pathologically elevated in PD patients. Direct evidence for the involvement of the subthalamic nucleus (STN) in grip force scaling has been obtained in PD patients treated by deep brain stimulation (DBS), where pathologically elevated peak grip force could be normalized by chronic DBS. For temporal adaptation of precision grip force, the cerebellum is another key structure: it has been shown that patients with cerebellar disease suffer from impaired grip force control. Along these lines, grip force adaptation relies on internal anticipatory models in the brain, which are mainly based in the cerebellum. The tight functional connections between basal ganglia and cerebellum suggest a dynamic interplay between the cerebellum and the basal ganglia in dynamic grip force control. While data from neuroimaging, anatomy and behavior point to an important role of basal ganglia networks in grip force control, the underlying neuronal activity is still unknown. Various studies have demonstrated high beta power in the STN of PD patients, and the amount and stability of beta activity in the STN correlates negatively with motor performance. The outstanding role of beta oscillations for bradykinesia has been demonstrated by inducing a frequency-specific impairment in a grip force task upon low-frequency stimulation in the STN of PD patients. Whereas beta activity in the basal ganglia may simply be an epiphenomenon of enhanced neuronal synchronicity during movement initiation, the suppression of beta activity before movement initiation in event-related tasks provides evidence that dynamic changes in beta oscillations are critical for motor control per se. Extending this idea, dissociation of salient cues and actual motor execution supports the hypothesis that beta desynchronization prospectively modulates executive motor processing. To investigate prospective motor control, we examined how STN beta activity is modulated with adaptive grip force control during a shaking movement as compared to a control condition with voluntary grip-force initiation. We included 6 PD patients who underwent DBS in the subthalamic nucleus. Patients' demographic data and clinical details are summarized in Table 1. Bilateral DBS electrodes were implanted after MRI-based direct targeting of the STN. Intra-operatively, accurate implantation of the electrodes within the STN was verified by microelectrode recordings, followed by test stimulation to assess the clinical response, and by CT-imaging to reconstruct the effective electrode position. The data presented here were recorded on the second post-operative day at preoperative l-dopa
levels. Local field potentials (LFPs) were recorded on temporarily externalized wires before implantation of the DBS impulse generator. All patients gave informed written consent to participate in the study. The study was approved by the institutional ethics review board. Adaptive grip force control during motor tasks was measured by a customized device. This device determines and records the applied grip force of the patient's fingers with an in-built force sensor, and contains linear acceleration sensors for simultaneous registration of movement in three dimensions. In the case of oscillatory movements, force adaptation relies on an anticipatory internal model. Successful anticipatory grip force control is characterized by a matching of the applied grip force to the loading forces of the device, which were generated by the movement. The device is cuboid, weighs 300 g, and emits a TTL pulse for synchronization with other data acquisition systems. To quantify the accuracy of the time-dependent grip force adaptation, we calculated the correlation coefficient between grip force and loading force as a quantitative measure for the quality of grip force adaptation. The LFP was recorded from all contacts within both STNs of each patient. Simultaneously, we recorded scalp EEG from a 12-channel subset of the 10–20 system at the fronto-polar, frontal, central, occipital and midline electrode sites. The central midline electrode Cz was used as recording reference for EEG and LFP. As verified by post-operative reconstruction of the electrode position, the second lowest contact was located in the motor part of the STN in all patients and was taken for further analysis. To reduce movement and electrode artifacts, we digitally re-referenced all signals to a Laplacian montage with weighted averages of the surrounding deep brain electrodes and surface electrodes. This montage allowed for a significant reduction of the artifact level, but at the same time ensured the linear independence of cortical and LFP signals for the calculation of cortico-subthalamic synchronization. All experiments were performed in a sitting position. Patients grasped the measurement device with all fingers of one hand, while the other arm was in a resting position. To minimize interference with visual feedback, all experiments were performed with closed eyes. For the shaking task, patients were instructed to shake the cube in a predefined manner, i.e. to perform consecutive point-to-point up- and downward movements in front of the trunk with an amplitude of about 20 cm. This shaking movement was self-paced, but patients were instructed to reach a frequency of approximately 2 Hz, if possible, depending on bradykinesia and rigidity. After instruction of the patients | Introduction: Healthy subjects scale grip force to match the load defined by physical object properties such as weight, or dynamic properties such as inertia. Methods: After implantation of deep brain stimulation (DBS) electrodes in the STN, PD patients performed adaptive and voluntary grip force tasks, while we recorded subthalamic local field potentials (LFP) and scalp EEG.
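The grip-load correlation described above is straightforward to compute from the device's sensors: the loading force follows from the accelerometer as m(g + a), and the Pearson correlation with grip force quantifies adaptation quality. The sketch below uses synthetic 2 Hz shaking data (sampling rate, lead time and noise level are assumptions) and additionally reads off the lag at peak cross-correlation, where a grip signal that leads the load indicates anticipatory control:

```python
import numpy as np

fs = 200.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
m, g = 0.3, 9.81                # device mass 300 g, gravity

# Synthetic 2 Hz shaking: vertical acceleration, and a grip force that
# slightly anticipates the load (a small lead suggests predictive control)
acc = 8.0 * np.sin(2 * np.pi * 2.0 * t)
load = m * (g + acc)                       # loading force from the accelerometer
grip = 1.5 * np.roll(load, -4) + np.random.default_rng(3).normal(0, 0.1, t.size)

r = np.corrcoef(grip, load)[0, 1]
print(f"grip-load correlation r = {r:.3f}")

# Lag at maximum cross-correlation; here the grip waveform leads the load
xc = np.correlate(grip - grip.mean(), load - load.mean(), mode="full")
lag = (np.argmax(xc) - (t.size - 1)) / fs
print(f"grip leads load by {-lag * 1000:.0f} ms")
```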
distinction between grip force scaling and temporal grip force control in anterior and posterior basal ganglia nuclei: for the voluntary Press task we found higher cortico-STN coherence, indicating that this task is embedded in a cortico-basal ganglia network controlling grip force parameterization. On the other hand, adaptive grip force control is mediated predominantly through anterior basal ganglia nuclei, and accordingly cortico-STN coherence is diminished during Shake. High cortico-STN connectivity during Press may also point to an involvement of the hyperdirect pathway in voluntary grip force control. In this light, our findings suggest that the hyperdirect pathway is predominantly activated during voluntary grip force control and less so during adaptive grip force control. Furthermore, the hyperdirect pathway has been proposed to play a role in sustaining beta oscillatory activity in the STN. Accordingly, we found higher power spectral density in the beta band during the pressing task, supporting the argument that an active hyperdirect pathway may cause elevated beta band activity in the STN. Finally, on a behavioral level, we observed significantly higher grip force amplitudes during the pressing task as compared to the shaking task, which could be caused by inhibiting signals from the hyperdirect pathway during voluntary grip force control. STN beta ERD showed a different time course in the two movement tasks. In Shake, there was one peak of beta ERD prior to maximal grip force. In Press there were two peaks, one prior to maximal and one prior to minimal grip force. These temporal changes of synchronicity in the basal ganglia provide complementary information for the understanding of pathological network activity in PD patients. As a clinical observation, habitual movement control is typically more affected in PD patients. Assuming that baseline beta oscillatory activity is pathologically elevated in PD patients, a higher amount of beta desynchronization is needed to initiate and maintain habitual movement, which is prominently controlled by sub-cortical networks. Clinical studies have shown prolonged and excessive grip force adaptation after motor engagement. This is reflected by the observation that not only movement initiation is impaired in PD patients, but also termination of an ongoing movement, resulting in involuntarily prolonged movements. Similar to this behavioral evidence of reduced control of motor disengagement in PD patients, the electrophysiological investigation of beta oscillations also shows a marked difference exactly in the release phase of the cyclic movement, where the second beta ERD is not seen during the shaking task. In this light, our findings could also be interpreted as an electrophysiological correlate of impaired movement termination: during Shake, no beta ERD was measured when grip and load force decreased during the upward movement. Therefore, the missing beta desynchronization in the late phase of the cyclic movement could be interpreted as a correlate of the reduced ability for movement termination in PD patients. Similarly, the interplay between motor cortex and basal ganglia was significantly reduced during Shake as compared to Press. When functional connectivity to the motor cortex is high, temporal cueing in the basal ganglia appears more precise and more adaptive than during Shake, where cortico-STN correlation is lower and the temporal change in beta desynchronization during an ongoing movement is therefore less adaptable. The
time-locked suppression of beta oscillatory activity in the STN is in line with previous reports of beta ERD prior to voluntary movements. Our results show that the STN is involved in anticipatory grip force control in PD patients. The difference in the phasic beta ERD between the two tasks and the reduction of cortico-subthalamic synchronization suggest that qualitatively different neuronal network states are involved in different grip force control tasks. This study was investigator-sponsored. | Patients with Parkinson's disease (PD) show an elevated grip force in dynamic object handling, but temporal aspects of anticipatory grip force control are relatively preserved. In PD patients, beta frequency oscillatory activity in the basal ganglia is suppressed prior to externally paced movements. However, the role of the subthalamic nucleus (STN) in anticipatory grip force control is not known. Results During adaptive grip force control (Shake), we found event-related desynchronization (ERD) in the beta frequency band, which was time-locked to the grip force. In contrast, during voluntary grip force control (Press) we recorded a biphasic ERD, corresponding to peak grip force and grip force release. Beta synchronization between STN and cortical EEG was reduced during adaptive grip force control. Conclusion The time-locked suppression of beta oscillatory activity in the STN is in line with previous reports of beta ERD prior to voluntary movements. Our results show that the STN is involved in anticipatory grip force control in PD patients. The difference in the phasic beta ERD between the two tasks and the reduction of cortico-subthalamic synchronization suggest that qualitatively different neuronal network states are involved in different grip force control tasks. |
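As a companion sketch, beta-band event-related desynchronization of the kind reported here can be computed from an LFP trace by band-pass filtering, extracting instantaneous power, and expressing it relative to a baseline window. This is a generic textbook pipeline under assumed parameters (13–30 Hz band, fourth-order Butterworth filter, baseline in seconds), not the authors' exact analysis:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_erd_percent(lfp, fs, baseline=(0.0, 1.0), band=(13.0, 30.0)):
    """Beta-band power change relative to a baseline window (s), in
    percent; negative values indicate desynchronization (ERD)."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    beta = filtfilt(b, a, lfp)              # zero-phase beta-band filtering
    power = np.abs(hilbert(beta)) ** 2      # instantaneous beta power
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = power[i0:i1].mean()               # mean baseline power
    return 100.0 * (power - ref) / ref
```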
gestures. An alternative explanation for this finding is that children were referring to familiar or preexisting verbal labels when retrieving the nonverbal sounds in memory, rather than encoding and retrieving the actual nonverbal PAL stimuli. Given this possibility, it cannot be argued that this condition is entirely nonverbal. However, that is not to say that performance in this condition depends entirely on verbal learning. For example, children may remember a verbal label and its associated meaning and, therefore, may be engaging additional skills rather than simply relying on phonological memory. It is clear that there are challenges in creating a nonverbal analogue of PAL while keeping response modality consistent, although further research is needed to investigate nonverbal learning mechanisms and the possible role of an articulatory learning mechanism in learning to read. In summary, the results presented here are consistent with recent accounts and provide clear support for the role of verbal learning in explaining the PAL–reading relationship. We found that an auditory–articulatory latent variable was a stronger predictor of reading ability than the cross-modal visual–articulatory latent variable. However, we also found a strong correlation between reading and nonverbal–nonverbal PAL. This seemingly provides counterevidence for the verbal account and highlights the methodological advantage of the current study in comparing multiple PAL tasks. Thus, in conclusion, the current study provides support for the verbal account of the PAL–reading relationship. However, our results introduce the idea that articulatory learning might be an important demand implicated in both verbal PAL and reading; as such, further research is required to clarify the PAL–reading relationship. | Paired-associate learning (PAL) tasks measure the ability to form a novel association between a stimulus and a response. Performance on such tasks is strongly associated with reading ability, and there is increasing evidence that verbal task demands may be critical in explaining this relationship. The current study investigated the relationships between different forms of PAL and reading ability. A total of 97 children aged 8–10 years completed a battery of reading assessments and six different PAL tasks (phoneme–phoneme, visual–phoneme, nonverbal–nonverbal, visual–nonverbal, nonword–nonword, and visual–nonword) involving both familiar phonemes and unfamiliar nonwords. A latent variable path model showed that PAL ability is captured by two correlated latent variables: auditory–articulatory and visual–articulatory. The auditory–articulatory latent variable was the stronger predictor of reading ability, providing support for a verbal account of the PAL–reading relationship. |
In this paper, we study the problem of optimizing a two-layer artificial neural network to best fit a training dataset. We consider the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions, arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. In essence, we show that such non-singular hidden layer matrices satisfy a "good" property for this broad class of activation functions. The techniques involved in proving this result inspire a new algorithmic framework in which, between two gradient steps on the hidden layer, we add a stochastic gradient descent step on the output layer. Within this framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned above, thereby partially explaining the success of noisy gradient methods and addressing the data-independence issue of our earlier result. Both results extend readily from square to flat (wide) hidden layer matrices. The results also apply when the network has more than one hidden layer, provided all inner hidden layers are arbitrary but non-singular, all activations belong to the given class of differentiable functions, and optimization is carried out only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is Lipschitz smooth, i.e., its gradients do not change sharply. We use these smoothness properties to guarantee asymptotic convergence of gradient descent to a first-order optimal solution. | This paper discusses theoretical properties of first-order optimal points of a two-layer neural network in the over-parametrized case. |
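The interleaved update scheme described above, a stochastic gradient step on the output layer between two gradient steps on the hidden layer, can be sketched as follows. This toy version uses a mean-squared loss and tanh as a stand-in for the assumed class of differentiable activations; the dimensions, step size and single-sample noise model are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 40                        # width d >= n samples: over-parameterized
X = rng.standard_normal((d, n))
y = rng.standard_normal(n)
W = rng.standard_normal((d, d))      # square hidden layer, non-singular w.h.p.
v = rng.standard_normal(d)           # output layer
act = np.tanh                        # stand-in for the differentiable class

def loss_grads(W, v, X, y):
    m = X.shape[1]
    H = act(W @ X)                   # hidden activations, d x m
    r = v @ H - y                    # residuals of v^T act(Wx)
    gv = H @ r / m                   # gradient w.r.t. output layer
    gW = ((np.outer(v, r) * (1 - H**2)) @ X.T) / m   # tanh'(z) = 1 - tanh(z)^2
    return (r @ r) / (2 * m), gW, gv

eta = 0.05
for t in range(500):
    _, gW, _ = loss_grads(W, v, X, y)
    W -= eta * gW                    # full gradient step on the hidden layer
    i = rng.integers(n)              # the interleaved noisy step:
    _, _, gv = loss_grads(W, v, X[:, [i]], y[[i]])
    v -= eta * gv                    # SGD step on the output layer
```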
temporally, indicating that cofilin may play an important role in the destabilisation of actin foci and the regulation of nanotopography sensing. To test this hypothesis, we knocked down cofilin in GEβ3 cells before seeding on nanofibrous patterns. We observed that cells in which cofilin expression was reduced became insensitive to the nanopattern size and overall spread more than on homogenous substrates. This coincided with an important change in the shape of cells spreading on nanofibres. Supplementary videos related to this article (Videos S11–S14) can be found at https://doi.org/10.1016/j.biomaterials.2019.119683. Overall, our results demonstrate that nanotopography sensing is regulated by different molecular mechanisms than the sensing of substrate mechanics and that of ligand molecular distribution, yet is particularly sensitive to the type of integrin heterodimers expressed. Although β1-expressing cells are relatively insensitive to nanopattern dimensions, at least in the range tested, β3-expressing cells are found to be particularly sensitive to nanofibre width. In addition, rather than directly regulating the assembly of focal adhesions, substrate nanotopography imposes boundaries that modulate the spatial organisation of the actin network and stress fibres. This results in local network instabilities, upon myosin-generated contractility, that lead to the local collapse of the actin network into foci surrounded by a myosin ring. The local geometry of the actin network and associated changes in microfilament curvature, regulated by cofilin recruitment, result in the sequestration of cofilin at the foci and the disassembly of the actin network. Eventually, this results in cell retraction. Although actin foci are the most apparent structures that can be observed in videos of actin dynamics, it is likely that similar events also occur within the lamella and at membrane protrusions, but cannot be resolved from the normal cytoskeletal architecture. Therefore, we propose that nanotopography sensing is mediated by a long-range mechanism, through the microscale organisation of the actin network, and that such geometry modulates its contractile mechanical stability. In addition, beyond the sensing of engineered extra-cellular matrices, such as the nanofibres presently studied, we propose that similar long-range sensing processes and network stability, regulated by myosin contractility, also control cell spreading and migration within 3D environments. This is consistent with observations of the importance of adhesion dynamics in sensing the local geometry of 3D matrices and their regulation via actin contractility, and we note that cofilin has been implicated in the regulation of cell protrusion and motility in 3D matrices. Questions also remain regarding the mechanisms that differentially regulate actin cytoskeleton assembly in β1 and β3 integrin-expressing cells, given that both of these transmembrane proteins form heterodimers that recruit similar adapter proteins. All data analysed during this study are included in this published article. Other raw data required to reproduce these findings are available from the corresponding author on reasonable request.
| The nanotopography and nanoscale geometry of the extra-cellular matrix (ECM) are important regulators of cell adhesion, motility and fate decision. However, unlike the sensing of matrix mechanics and ECM density, the molecular processes regulating the direct sensing of the ECM nanotopography and nanoscale geometry are not well understood. Here, we use nanotopographical patterns generated via electrospun nanofibre lithography (ENL) to investigate the mechanisms of nanotopography sensing by cells. We observe the dysregulation of actin dynamics, resulting in the surprising formation of actin foci. This alteration of actin organisation is regulated by myosin contractility but independent of adapter proteins such as vinculin. This process is highly dependent on differential integrin expression, as β3 integrin-expressing cells, more sensitive to nanopattern dimensions than β1 integrin-expressing cells, also display increased perturbation of actin assembly and actin foci formation. We propose that, in β3 integrin-expressing cells, contractility results in the destabilisation of nanopatterned actin networks, collapsing into foci and sequestering regulators of actin dynamics such as cofilin that orchestrate disassembly. Therefore, in contrast to the sensing of substrate mechanics and ECM ligand density, which are directly orchestrated by focal adhesion assembly, we propose that nanotopography sensing is regulated by a long-range sensing mechanism, remote from focal adhesions and mediated by the actin architecture. |
able to communicate with each other under basal and high glucose conditions via exosomes. The exposure of MM6-BG to exoHUVEC-HG or exoHUVEC-BG significantly increased the expression of ICAM-1 in these cells. Again, this may suggest that exosomes per se activate monocytes, i.e. increase their ICAM-1 expression, independent of the glucose concentration. Further studies are needed to evaluate whether exosomes from HUVECs may also influence MM6 in other ways, e.g. by inducing cytokine expression. Interestingly, the exposure of HUVEC-BG to exoMM6-HG increased the expression of ICAM-1 as compared with HUVEC-BG and HUVEC-HG. Remarkably, exoMM6-BG incubated with HUVEC-HG reversed the effect of HG to almost normal values. Our data on monocyte exosomes affecting endothelial cells are in line with the study of Tang et al., who observed that exosomes derived from LPS-activated monocytes induced the expression of ICAM-1 and pro-inflammatory cytokines in HUVECs. Not only exosomes, but also microparticles derived from leukocytes stimulate the expression of ICAM-1 on endothelial cells. Although we did not investigate the mechanisms by which monocyte exosomes affect endothelial cells, other studies have shown that monocyte exosomes affect endothelial cells by activating NFκB and Toll-like receptor signaling pathways. However, whether the increased ICAM-1 expression mediated by exosomes may affect the interaction between endothelial cells and monocytes, i.e. monocyte transmigration, was not evaluated in this study. Together, these data suggest that in vivo, exosomes derived from both cell types may constitute one of the mechanisms inducing endothelial cell and monocyte activation under hyperglycemic conditions. In view of the effects of exosomes from MM6 and HUVECs on each other, we hypothesized that in a co-culture of MM6 and HUVECs, under HG or BG conditions, ICAM-1 expression would be increased as compared with monocultures of either HUVECs or MM6. This hypothesis appeared to be true: co-cultured HUVECs and MM6 showed higher ICAM-1 expression as compared with monocultures, however, only under high glucose conditions. We also observed that, similar to monocultures, both HUVECs and MM6 cells exposed to HG in co-culture increased the expression of ICAM-1 as compared to co-cultured cells exposed to BG. In line with the present data, we have previously shown communication between monocytes and endothelial cells. We hypothesized that in co-culture MM6 and HUVECs communicate by exosome production. Alternatively, but not mutually exclusively, HG may also cause increased release of pro-inflammatory cytokines by monocytes or endothelial cells, by which these cells may also activate each other. In accordance with our hypothesis, the exosomal mixture of HUVEC and MM6 exosomes from cells incubated under high glucose increased the expression of ICAM-1 in both HUVECs and MM6 cultured under basal glucose. Interestingly, ICAM-1 levels were increased to the same level as ICAM-1 in co-cultured MM6 cells, i.e.
higher than in monocultures. This may suggest that the exosomes from HUVECs and MM6 under high glucose collaborate in inducing ICAM-1 protein surface expression in both HUVECs and MM6. These data thus confirm our hypothesis that in co-cultured HUVECs and MM6 cells exosomes from both cell types collaborate, and that exosomes are, at least partly, responsible for increasing ICAM-1 expression in both cell types. Moreover, we observed that the exosomal mix from HUVECs and MM6 under BG decreased the expression of ICAM-1 in both HUVECs and MM6 incubated with HG, i.e. it reverted the effect of high glucose on both HUVECs and MM6. Further studies are needed to investigate the exact mechanisms of how the exosomes influence ICAM-1 expression and how MM6 and HUVEC exosomes collaborate. In summary, our results indicate that high levels of glucose may activate monocytes. The present data also show that exosomes derived from both monocytes and endothelial cells can modulate the protein surface expression of ICAM-1 in endothelial cells and in monocytes. Exosomes can thus act as a communication mechanism between monocytes and endothelial cells, both under BG and HG conditions. Although this study did not establish whether high glucose affects the cargo of exosomes derived from monocytes and endothelial cells, such an effect of HG would be in line with data showing that circulating microparticles from DMT2 patients are enriched with proteins involved in cell activation. Also, exosomes derived from myocytes of patients with DMT2 contain higher levels of microRNA-1 and microRNA-133a as compared with exosomes from healthy individuals. Studies into the cargo of HG and BG exosomes are in progress. We propose that exosomes from both endothelial cells and monocytes have an important role in the endothelial and monocyte activation induced by high levels of glucose and may play a role in inflammatory cell activation in DMT2 or in the cardiovascular complications associated with diabetes. TS holds Comisión Nacional para la Investigación en Ciencia y Tecnología PhD fellowships and a UMCG University of Groningen Postgraduate School PhD fellowship. This work was partially supported by the Fondo Nacional de Desarrollo Científico y Tecnológico, Chile. The authors declare no competing interests.
| Exosomes are nanovesicles that allow communication between endothelial cells and monocytes and have been associated with diabetic complications. In this study we evaluated whether high glucose can activate monocytes and endothelial cells and whether exosomes play a role in this activation. Moreover, we studied whether endothelial cells and monocytes communicate with each other via exosomes under high and basal glucose incubation. In the second experiment, MM6 were exposed to exosomes from human umbilical vein endothelial cells (HUVECs) and HUVECs to exosomes from MM6. In the third experiment, MM6 and HUVECs were exposed to a mixture of exosomes from MM6 and HUVECs (exoMix). Cell activation was evaluated by measuring the protein surface expression of intracellular adhesion molecule-1 (ICAM-1) by flow cytometry. HG increased ICAM-1 expression in MM6, and monocytic exosomes from HG or BG showed similar effects in HG and BG MM6 cells. Exosomes from HUVECs increased ICAM-1 expression in MM6 cells incubated under HG or BG, while exosomes from MM6 likewise increased ICAM-1 expression in HUVECs incubated under HG or BG. The combination of exosomes from both cell types (exoMixHG or exoMixBG) also increased ICAM-1 expression in both cell types under most conditions. However, the exoMixBG reversed the effect of HG in both MM6 and HUVECs. Our results show that HG activated monocytes and endothelial cells and that exosomes play a role in this HG-induced ICAM-1 expression. We hypothesize that during DMT2, exosomes may act as a communication mechanism between monocytes and endothelial cells, inducing and maintaining activation of both cell types in the presence of high glucose. |
principal component analysis, presented in Fig. 2. Four major clusters were evident. Clusters I and IV comprised some of the gene bank materials as well as all five released varieties. All of the gene bank materials grouped together in Clusters II, III and IV. The landraces also grouped in Cluster III, apart from ICP 13076, which was in Cluster I. Screening of the allelic data associated with the selected released and improved varieties for which a DNA fingerprint was to be developed revealed that 6 markers – CCB1, CCB7, Ccac035, CCttc003, Ccac026 and CCttc019 – could unambiguously discern these six varieties from one another. These markers were also highly homozygous, and the amplified fragments were easy to score. The fingerprint developed with the 6 markers listed above is presented in Table 4. High quality DNA was obtained in this study, even without using the prescribed phenol:chloroform extraction step described by Mace et al. The total average amount of DNA obtained from leaves was 55 μg, which is higher than the 7.5 μg reported for pigeonpea by Mace et al. The mean A260/280 of DNA extracted from fresh leaves was 1.9, while that of DNA extracted from seeds was 1.6. This made the extraction both safer and cheaper by eliminating the use of phenol, which is hazardous and expensive to dispose of. DNA extracted from the seeds was degraded and of lower quality than that obtained from leaf material, likely due to the polysaccharides and polyphenols present in pigeonpea seeds. These compounds co-precipitated with the DNA after the addition of isopropanol/ethanol:sodium acetate and inhibited Taq DNA polymerase activity in the subsequent SSR genotyping, which explained the recalcitrance to PCR amplification of the 3 DNA samples that were obtained from seeds. PCR optimization is an important step to ensure the successful amplification of the target DNA fragment. All aspects of a PCR protocol can be considered in optimization. However, this study focused only on the annealing temperature and primer concentration. Amplification for 37 of the 48 primer pairs was successful using a fixed annealing temperature of 59°C, the standard protocol that worked well in our hands. Eight of the remaining 11 primer pairs successfully amplified the target SSR loci when the annealing temperature was adjusted, as indicated in Table 2. For markers CCttc006, CCttc012 and CCtc020, it was necessary to increase the amount of forward primer and reduce the fluorescently labeled M13 tag concentration in the PCR reaction mixture. However, with the reduced fluorescent label, the resultant fragments did not incorporate enough fluorescence to be detected by the laser during capillary electrophoresis. This has been experienced before in other studies that used labeled M13 sequences according to the method described by Schuelke. All in all, 45/48 or 94% of the markers tested amplified by PCR, and this was considered sufficient for this study. However, not all markers amplified equally well and another 7 had to be excluded from analysis. Although this represented a substantial amount of data that was excluded from the analysis, the final number of 38 good markers compared well with other published studies on genetic diversity analysis, where 30 to 40 SSR markers were typically considered adequate, e.g.
in pigeonpea, sorghum, groundnut, wheat and rice. Allelic data analysis showed an average of 5.58 alleles per marker. This was higher than in other pigeonpea diversity studies published to date, which used similar markers on cultivated varieties. Diversity in cultivated pigeonpea is generally reported to be low. This was observed even when other types of markers were used, e.g. diversity arrays technology and amplified fragment length polymorphisms. Consequently, studies that included wild species reported higher PIC and allele number averages. Despite the relatively low polymorphism, the markers used in this study grouped the genotypes clearly into four major groups. After ten thousand iterations, the highest bootstrap value was observed in Cluster I. Other clusters showed lower confidence levels, which could be due to the low polymorphism/genome coverage of the SSRs used. Most of the released varieties were developed from Kenyan and Tanzanian varieties and subsequently introduced to Malawi. ICEAP 00068 and ICEAP 00557 are released varieties originating from Tanzania, which grouped in different clusters. Released varieties that were developed in Kenya all grouped together in Cluster I except for ICEAP 00040, which was in Cluster IV. All these released varieties were selected and improved for traits such as disease resistance, high yields or drought tolerance and have different maturity durations. ICEAP 00040 and ICEAP 00020 are medium and long duration maturity genotypes, respectively, which are resistant to Fusarium wilt, while ICEAP 00068, of medium duration, is susceptible to wilt but is popular with farmers as it yields large grains. ICPV 9145 and ICP 13076 were ICRISAT-India accessions collected from Kenya, although they grouped in different clusters. Both genotypes and ICPV 87105 have moderate resistance to Fusarium wilt. The obvious genetic differences observed between ICPV 9145 and ICP 13076 in this study could indicate possible different sources or mechanisms of Fusarium wilt resistance inherent in these two varieties. This should be further investigated in association mapping studies to confirm whether this is the case, so that this diversity can be exploited in future breeding programs. Although individuals of the same genotype grouped together for the most part, some were spread out among different clusters, such as ICP 9145 and ICEAP 00040. This was probably due to contamination or mixture of the seeds. Two landraces, Mtawanjuni and ICP 9145, grouped with gene bank materials. Mtawanjuni is a popular traditional cultivar in Malawi. It is a high | In this study, 48 polymorphic SSR markers were used to assess the diversity among all pigeonpea varieties cultivated in Malawi to determine if a genetic fingerprint could be identified to distinguish the popular varieties. Conclusion: Screening of the allelic data associated with the five most popular cultivated varieties revealed 6 markers – CCB1, CCB7, Ccac035, CCttc003, Ccac026 and CCttc019 – which displayed unique allelic profiles for each of the five varieties. |
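For reference, the polymorphism information content (PIC) values discussed above can be computed from observed allele frequencies with the standard Botstein et al. formula, as implemented in tools like PowerMarker. The following is a minimal sketch; the example fragment sizes are made up:

```python
from collections import Counter
from itertools import combinations

def pic(allele_calls):
    """PIC = 1 - sum(p_i^2) - sum over i<j of 2 * p_i^2 * p_j^2,
    estimated from the observed allele calls at one SSR locus."""
    counts = Counter(allele_calls)
    total = sum(counts.values())
    p = [c / total for c in counts.values()]
    het = 1.0 - sum(pi**2 for pi in p)                        # expected heterozygosity
    corr = sum(2 * (pi * pj) ** 2 for pi, pj in combinations(p, 2))
    return het - corr

# Hypothetical fragment sizes (bp) scored for one marker across 10 samples.
print(round(pic([196, 196, 204, 204, 210, 196, 204, 210, 210, 196]), 3))
```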
yielding medium duration variety, which farmers prefer due to its relatively good insect resistance. ICP 9145 is a Kenyan landrace and one of the first varieties to be introduced to Malawi, in 1987. It is high yielding and has resistance to Fusarium wilt. From the neighbor-joining tree, the most commonly cultivated varieties in Malawi, which include the four released varieties and four landraces from the region, were spread across three of the clusters observed, indicating that they generally represented the genetic diversity available in Malawi. However, Clusters III and IV showed only two released varieties each and Cluster II none; thus there is substantial variation that can still be exploited through further breeding. The markers used in this study were not known to be linked to any traits of interest, and this should be the next step in pigeonpea genomics to allow visualization of which varieties harbor important traits such as differing maturity duration, number and duration of flowering times during a season, high yields, large cream-colored seeds, insect resistance and Fusarium wilt resistance. Markers linked to these traits will allow scientists to determine the sources and mechanisms controlling these traits. In addition, germplasm containing these traits can be identified and the traits transferred to the best yielding and most popular varieties. Markers linked to these traits will also allow pyramiding the traits into a select few varieties. The recent sequencing of the pigeonpea genome is a major step in this direction. Natural outcrossing, due to insect pollination, is high in pigeonpea and is difficult and expensive to control in the field, since plants have to be isolated under insect-proof nets if outcrossing is to be avoided. In Malawi, this causes contamination of seeds in farmers' fields, since many farmers plant more than one variety on their farms or have neighbors who plant different varieties whose flowering times overlap. For example, after obtaining the pure Mtawanjuni seeds used in this study from breeders, other seeds of this variety were obtained randomly from different Malawi farmers. The seeds obtained from the farmers had five different seed coat colors, and none was similar to the seeds obtained from breeders. Such contamination can cause yield losses due to loss or dilution of insect or Fusarium wilt resistance, and often closes market opportunities when mixtures give rise to different seed colors or seed sizes. To our knowledge, there is no available software that can screen allelic data and identify markers suited for a DNA fingerprint. Therefore, this study attempted a logical approach to identify markers that would provide such a fingerprint, and selection criteria were developed. The six markers identified for the DNA fingerprint generally had low heterozygosity and intermediate to high PIC scores according to the PowerMarker results for the entire dataset. Since the resulting numbers of markers and genotypes were both small, the fingerprint could be determined visually and is presented in Table 4. In all cases, at least four out of the five individuals presented the same alleles, except for individual ICEAP 00557/3 and marker CCac026, where missing data reduced this number to 3/5. CCttc019 was a heterozygous marker, which presented a monomorphic allele of 196 bp for all individuals across all the released varieties. This allele was excluded from the fingerprint and only the second, polymorphic alleles from all varieties were included. When the combination of alleles for
each variety across the six markers is considered, this preliminary DNA fingerprint for pigeonpea can discern each variety with confidence. In a similar way, the advantage of SSR marker assays has been demonstrated in pigeonpea hybrid breeding, through ensuring the genetic purity of hybrids and their parents. However, this fingerprint needs to be further tested for robustness, repeatability and the ability to discern admixtures due to cross-pollination. This study set out to investigate the level of genetic diversity in all cultivated Malawi pigeonpea varieties with SSR markers. While this was successful, it was observed that the level of diversity is low, and further studies should exploit more new SSR markers, such as those identified from resequenced pigeonpea genomes. It is also recommended that such studies include wild pigeonpea genotypes, as they could reveal a new genetic resource. It was noted, however, that the released varieties are representative of the genetic base available in Malawi pigeonpea. With a small number of markers it was possible to create a genetic fingerprint of six important pigeonpea varieties in Malawi. Although this needs to be tested further, it indicates the potential of using SSR markers to discern pigeonpea varieties. Moreover, the use of more polymorphic markers will increase the number of genotypes in the fingerprint. This can be used to detect seed contamination, which is a major cause of low yields, and to ensure the availability of high quality seeds for Malawi farmers. Adequate high quality DNA was obtained from leaves despite omitting the phenol:chloroform extraction step. This, and the advent of new methods that eliminate the use of hazardous substances during DNA extraction, shows clearly that DNA extraction is becoming safer and cheaper. The following is the supplementary data related to this article: SSR markers used in this study. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.ejbt.2016.02.004. The authors declare that they have no conflict of interest. This project was funded by Irish Aid under the “Malawi Seed Industry Development Project”. | It is mainly cultivated in the semi-arid tropics of Asia and Oceania, Africa and America. However, varietal contamination due to natural outcrossing causes significant quality reduction and yield losses. A neighbor-joining tree produced 4 clusters. The most commonly cultivated varieties, which include released varieties and cultivated landraces, were well spread across all the clusters observed, indicating that they generally represented the genetic diversity available in Malawi, although substantial variation was evident that can still be exploited through further breeding. This genetic fingerprint can potentially be applied for seed certification to confirm the genetic purity of seeds that are delivered to Malawi farmers. |
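Since no dedicated software existed for screening allelic data for fingerprint markers, the logic of that screening step can be sketched as a brute-force search for the smallest marker set whose combined allelic profiles are unique per variety. Variety names, marker names and fragment sizes below are placeholders, and heterozygous loci are simplified to a single scored allele:

```python
from itertools import combinations

def minimal_fingerprint(profiles, max_size=6):
    """Smallest marker subset whose combined allele calls uniquely
    distinguish every variety; profiles: variety -> {marker: allele}."""
    varieties = list(profiles)
    markers = list(next(iter(profiles.values())))
    for k in range(1, max_size + 1):
        for subset in combinations(markers, k):
            keys = {tuple(profiles[v][m] for m in subset) for v in varieties}
            if len(keys) == len(varieties):     # every profile is unique
                return subset
    return None

# Placeholder fragment sizes (bp) for three varieties at three markers.
profiles = {
    "Variety A": {"M1": 150, "M2": 201, "M3": 178},
    "Variety B": {"M1": 150, "M2": 205, "M3": 178},
    "Variety C": {"M1": 154, "M2": 201, "M3": 182},
}
print(minimal_fingerprint(profiles))   # ('M1', 'M2') suffices here
```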
with urokinase in vitro. In vivo, urokinase treatment decreased the number of CTC clusters in the blood of treated mice compared to controls. The results therefore suggest that urokinase disintegrates CTC clusters into individual CTCs. However, some researchers do not agree with disaggregation of CTC clusters in the bloodstream as a metastasis treatment. They caution that urokinase treatment may also carry the risk of increasing the invasiveness of tumor cells and metastatic spreading, the opposite of the effect reported by Choi et al. In the field of cancer drug development, Gao et al. used CTC-derived organoids for testing a new version of androgen receptor antagonists and PI3K-kinase pathway inhibitors. Overall, despite all the experimental studies on CTC clusters, their clinical importance currently remains elusive. Further study is required to exploit the full potential of CTC clusters in real-world clinical applications. CTC cluster analysis as a noninvasive liquid biopsy is an expanding new field that can open unprecedented horizons in early cancer diagnosis and therapy assessment in clinical trials. Nevertheless, due to inefficient separation platforms and heterogeneous biology, there are still many fundamental unsolved issues concerning CTC clusters. To date, it is not clear how the metastatic potential of tumor cells included in a cluster compares to that of single CTCs, what effect CTC cluster size and cell number have on metastatic potential, or whether dissociating CTC clusters into single CTCs can effectively reduce their metastatic risk. How the associated non-tumor cells included in CTC clusters increase their survival and enable more efficient distant colonization, as well as CTC cluster collective migration, are among the outstanding questions in CTC cluster biology. Despite the significant progress in separation methods, substantial work still needs to be done to achieve a platform that can efficiently identify, enumerate and isolate intact CTC clusters in a reasonable time with minimal manual intervention. Subsequent developments in CTC cluster separation technologies will enhance our knowledge about these multicellular aggregates and their contribution to metastasis progression, and can translate laboratory-based concepts to clinical applications in real-world settings. Complementary studies should be undertaken to characterize CTC clusters and to utilize their clinical value. Monitoring treatment regimes is a field of great potential interest for individualized treatment. Therefore, the next step after developing an efficient separation platform for CTC clusters is ex-vivo culturing of patient-derived CTCs. However, to date, no techniques have been presented for CTC cluster culturing. Future research should focus on developing strategies for long-term culture of patient-derived CTC clusters. Due to their higher metastatic potential, CTC clusters are expected to be utilized broadly in cancer and metastasis clinical trials in the coming years. We envision that liquid biopsy and qualitative and quantitative monitoring of CTCs, especially CTC clusters, will allow clinicians to establish more effective personalized treatments. The authors declare no conflicts of interest.
| Tumor metastasis is responsible for the vast majority of cancer-associated morbidities and mortalities. Recent studies have disclosed the higher metastatic potential of circulating tumor cell (CTC) clusters than single CTCs. Despite long-term study on metastasis, the characterization of its most potent cellular drivers, i.e., CTC clusters, has only recently been investigated. The analysis of CTC clusters offers new intuitions into the mechanism of tumor metastasis and can lead to the development of cancer diagnosis and prognosis, drug screening, detection of gene mutations, and anti-metastatic therapeutics. In recent years, considerable attention has been dedicated to the development of efficient methods to separate CTC clusters from patients' blood, mainly through microtechnologies based on biological and physical principles. In this review, we summarize recent developments in CTC clusters with a particular emphasis on passive separation methods that have been developed specifically for CTC clusters or have the potential for CTC cluster separation. Methods such as liquid biopsy are of paramount importance for commercialized healthcare settings. Furthermore, the role of CTC clusters in metastasis, their physical and biological characteristics, clinical applications and the current challenges of this biomarker are thoroughly discussed. The current review can shed light on the development of more efficient CTC cluster separation methods that will enhance the pivotal understanding of the metastatic process and may be practical in contriving new strategies to control and suppress cancer and metastasis. |
and over-staffing, and validating staffing methods across a range of hospitals. Operational research techniques could be brought in to help address these issues alongside traditional methods. The operational research approach can help structure the problem, deal with complexity and perform numerical experiments before implementation. Future possibilities for nurse staffing research utilising operational research techniques include the following. Queuing theory models could be used to investigate how trainees and new staff roles, such as nursing associates, could affect both demand and supply (a minimal Erlang-C illustration follows this section). Queuing theory could also be used to examine nursing work in more detail by separating urgent, non-urgent and discretionary tasks, with prioritisation between patients. Such investigations could contribute to an emerging body of work that investigates links between nurse staffing and omissions, delays or rationing of care, which suggests that there may be conscious or unconscious decisions to prioritise some aspects of care in the face of excess demand. Simulation models could be used to assess existing staffing tools for setting establishments in terms of how they affect costs and daily staffing adequacy. In practice, hospitals could benefit from software that uses near real-time data to recommend how to adjust staffing levels to better match patient needs. Operational research techniques such as optimisation and simulation could be embedded in integrated rostering-deployment-employment systems to help make short-term and long-term staffing decisions. Thus operational research methods can help both with researching the impact of staffing levels and with planning nurse staffing levels, two previously disconnected areas of endeavour. We believe that it is time to bring in operational research to supplement traditional techniques for determining the best approaches to managing and maintaining safe nurse staffing levels. Operational research capacity in health services research remains limited, and it is notable that most of this significant body of work has been published in operational research journals, with teams largely emanating from operational research or related backgrounds. We would therefore encourage active collaboration between operational research specialists in the field and those established nurse staffing research groups who have successfully delivered research involving gathering and analysing data at scale and across multiple sites.
| Despite a long history of health services research indicating that having sufficient nursing staff on hospital wards is critical for patient safety, and sustained interest in nurse staffing methods, there is a lack of agreement on how to determine safe staffing levels. For an alternative viewpoint, we look to a separate body of literature that makes use of operational research techniques for planning nurse staffing. Our goal is to provide examples of the use of operational research approaches applied to nurse staffing, and to discuss what they might add to traditional methods. The paper begins with a summary of traditional approaches to nurse staffing and their limitations. We explain some key operational research techniques and how they are relevant to different nurse staffing problems, based on examples from the operational research literature. We identify three key contributions of operational research techniques to these problems: "problem structuring", handling complexity and numerical experimentation. We conclude that decision-making about nurse staffing could be enhanced if operational research techniques were brought into mainstream nurse staffing research. There are also opportunities for further research on a range of nurse staff planning aspects: skill mix, nursing work other than direct patient care, quantifying risks and benefits of staffing below or above a target level, and validating staffing methods in a range of hospitals. |
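As an illustration of the queuing-theory direction mentioned above, an M/M/c model with the Erlang-C formula can estimate how many nurses are needed so that the probability of a care request having to wait stays below a target. The arrival and service rates below are invented for illustration; real nursing work would need the richer task-priority models the text describes:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must wait in an M/M/c queue."""
    a = arrival_rate / service_rate                 # offered load (Erlangs)
    rho = a / servers                               # per-server utilisation
    if rho >= 1.0:
        return 1.0                                  # unstable: demand >= capacity
    waiting = a**servers / (factorial(servers) * (1.0 - rho))
    total = sum(a**k / factorial(k) for k in range(servers)) + waiting
    return waiting / total

def nurses_needed(arrival_rate, service_rate, max_wait_prob=0.2):
    """Smallest staffing level keeping the wait probability below target."""
    c = 1
    while erlang_c(arrival_rate, service_rate, c) > max_wait_prob:
        c += 1
    return c

# e.g. 12 care requests/hour, each taking ~10 min (6 per hour per nurse)
print(nurses_needed(12, 6))   # -> 4 under these toy numbers
```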