Instruction: Is there an inhibitory role of cortisol in the mechanism of male sexual arousal and penile erection? Abstracts: abstract_id: PUBMED:14508617 Is there an inhibitory role of cortisol in the mechanism of male sexual arousal and penile erection? Background: It has been speculated for more than 2 decades whether there is a significance of adrenal corticosteroids, such as cortisol, in the process of normal male sexual function, especially in the control of sexual arousal and the penile erectile tissue. However, only few in vivo studies have been carried out up until now on the effects of cortisol on human male sexual performance and penile erection. In order to evaluate further the role of cortisol in male sexual activity, the present study was conducted to detect serum levels of cortisol in the systemic and cavernous blood taken during different penile conditions from healthy males. Material And Methods: The effects of cortisol derivative prednisolone, catecholamine norepinephrine (NE) and the peptide endothelin-1 (ET-1) on isolated human corpus cavernosum (HCC) were investigated using the organ bath technique. Fifty-four healthy adult male subjects were exposed to erotic stimuli in order to elicit penile tumescence and rigidity. Whole blood was simultaneously aspirated from the corpus cavernosum and the cubital vein during different penile conditions. Serum levels of cortisol (microg/dl) were determined by means of a radioimmunoassay (ELISA). Results: In the healthy volunteers, cortisol serum levels significantly decreased in the systemic circulation and the cavernous blood with increasing sexual arousal, when the flaccid penis became rigid. During detumescence, the mean cortisol level remained unaltered in the systemic circulation, whereas in the cavernous compartment, it was found to decrease further. Under all penile conditions, no significant differences were registered between cortisol levels in the systemic circulation and in the cavernous blood. Cumulative addition of NE and ET-1 (0.001-10 microM) induced contraction of isolated HCC strips, whereas the contractile response to prednisolone was negligible. Conclusion: Our results strongly suggest an inhibitory role for cortisol in the mechanism of male sexual response and behaviour. These properties are mediated rather via an effect on central structures than on the penile erectile tissue. Future studies to include patients suffering from erectile dysfunction may reveal whether or not there are differences in the cortisol serum profiles of healthy subjects and patients under different stages of sexual arousal. abstract_id: PUBMED:9089762 Penile erection: a reflection of sexual arousal and arousability in male chimpanzees. A study was conducted to determine if sexual motivation in male chimpanzees, defined by the two components, sexual arousal and arousability, could be quantified by measuring penile erection during laboratory pair-tests of sexual behavior. Sexual arousal, the momentary level of sexual excitation, was quantified by single measurements on a scale of penile erection. Sexual arousability, the propensity for arousal, was quantified by the rate of approach to full erection and by mean erection scores. Sexual arousability, as defined, was related to the frequency of male courtship displays, copulations, and ejaculations, which were highest at midcycle and positively correlated with a quantitative measure of female anogenital swelling. 
Sexual arousal was negatively correlated with continued exposure to the female and least variable during midcycle. These operational definitions of sexual motivation in the male chimpanzee facilitate comparative research in which analogous indexes of sexual motivation are required such as when extrapolating from animal to human. abstract_id: PUBMED:26944463 Endocrinologic Control of Men's Sexual Desire and Arousal/Erection. Introduction: Several hormones and neurotransmitters orchestrate men's sexual response, including the appetitive (sexual desire) and consummative (arousal and penile erection) phases. Aim: To provide an overview and recommendations regarding endocrinologic control of sexual desire and arousal and erection and their disturbances. Methods: Medical literature was reviewed by the subcommittee of the International Consultation of Sexual Medicine, followed by extensive internal discussion, and then public presentation and discussion with other experts. The role of pituitary (prolactin, oxytocin, growth hormone, and α-melanocyte-stimulating hormone), thyroid, and testicular hormones was scrutinized and discussed. Main Outcome Measures: Recommendations were based on grading of evidence-based medical literature, followed by interactive discussion. Results: Testosterone has a primary role in controlling and synchronizing male sexual desire and arousal, acting at multiple levels. Accordingly, meta-analysis indicates that testosterone therapy for hypogonadal individuals can improve low desire and erectile dysfunction. Hyperprolactinemia is associated with low desire that can be successfully corrected by appropriate treatments. Oxytocin and α-melanocyte-stimulating hormone are important in eliciting sexual arousal; however, use of these peptides, or their analogs, for stimulating sexual arousal is still under investigation. Evaluation and treatment of other endocrine disorders are suggested only in selected cases. Conclusion: Endocrine abnormalities are common in patients with sexual dysfunction. Their identification and treatment is strongly encouraged in disturbances of sexual desire and arousal. abstract_id: PUBMED:23899514 Doramectin reduces sexual behavior and penile erection in male rats. Doramectin (DOR) is an antiparasitic drug that is widely used in domestic animals. In mammals, DOR acts as a γ-aminobutyric acid receptor agonist. This neurotransmitter plays an important role in the regulation of sexual behavior. The present study investigated the effects of two medically relevant doses of DOR on sexual behavior in male rats. We also examined whether previous sexual experience modulates responses to DOR. General activity was first observed in an open field 24, 48, and 72 h after administration of 0.1 and 0.3 mg/kg DOR to determine the dose and time effects of the drug. Apomorphine-induced penile erection and sexual behavior in inexperienced male rats were then analyzed. The effects of previous sexual experience on subsequent sexual behavior in DOR-treated rats (0.3 mg/kg, 24 h prior to the test) were also assessed. The standard therapeutic dose (0.2 mg/kg) did not modify general activity or penile erection. A slightly concentrated dose of 0.3 mg/kg, which is still within the therapeutic range, decreased apomorphine-induced penile erection, whereas 0.2 mg/kg did not modify this behavior. Compared with controls, sexual behavior in inexperienced male rats was impaired after 0.3 mg/kg DOR. Previous sexual experience had little impact on the effects of 0.3 mg/kg DOR. 
In conclusion, the 0.2 mg/kg dose of DOR did not affect motor behavior or apomorphine-induced penile erection. At a slightly higher dose, the appetitive and consummatory phases of sexual behavior in inexperienced male rats were impaired. Previous sexual experience was unable to reverse this sexual impairment, suggesting that previous sexual experience does not exert a positive effect in attenuating sexual impairment produced by DOR treatment. abstract_id: PUBMED:3689107 Correlates of fantasy-induced and film-induced male sexual arousal. Penile circumference and subjective arousal were recorded while 66 men attempted to achieve erection by engaging in fantasy. Similar measures were taken while the same men viewed an erotic film. Not only was fantasy-induced sexual arousal relatively unrelated to film-induced sexual arousal, but the two sets of measures had different correlates. Over and above the contribution attributable to demographic and state variables, the subjects' rated level of fantasy during masturbation and their scores on the Betts QMI Scales predicted significant variance in subjective and physiological arousal. Subjects with low and high levels of fantasy-induced sexual arousal were differentiated on the basis of the vividness of their mental imagery as well as the frequency with which they used the most erotic sexual fantasies when attempting to achieve erection. In contrast to fantasy-induced arousal, film-induced arousal levels were independent of a person's capacity to form images. abstract_id: PUBMED:17559896 Moxidectin interference on sexual behavior, penile erection and hypothalamic GABA levels of male rats. Moxidectin (MXD) is an antiparasitic drug used in domestic animals. Its mechanism of action in mammals involves GABA, a neurotransmitter with an important role in the control of sexual behavior. In the present study, the effects of the 0.2 mg/kg therapeutic dose on sexual behavior, sexual motivation, penile erection and central GABA levels were studied. Sexual behavior results showed increased latencies to the first mount and intromission as well as an increased inter-intromission interval; a reduction in total mounts was detected after drug treatment. No difference in sexual motivation was observed between control and experimental animals. MXD treatment reduced penile erection and hypothalamic GABA levels. The results suggest that MXD reduced sexual behavior and penile erection through an action on the hypothalamic GABA system. The lack of effect in the motivational test, together with the increased mount and intromission latencies and the decreased total mounts, can probably be explained as a consequence of the impaired erection process in the male rat. abstract_id: PUBMED:30415812 Concurrent Assessment of Penile Blood Flow and Circumference as Indicators of Male Sexual Arousal. Introduction: The current study aims to validate a new imaging tool to assess men's sexual psychophysiological responding: laser Doppler imaging (LDI), which directly measures subcutaneous genital blood flow. In this study, we concurrently assessed genital sexual arousal in a sample of men using the LDI and the current gold standard of male sexual psychophysiology, the penile strain gauge (PSG). Aim: To (i) assess the validity of the LDI as a measure of male sexual arousal, (ii) evaluate the relationship between the LDI and PSG, and (iii) compare sexual concordance produced by the LDI and PSG.
Methods: A total of 25 male participants (mean age = 24.28 years, SD = 6.10, range 18-37) watched 4 experimental films (ie, anxiety-provoking, humorous, erotic, neutral nature content) while their sexual arousal was assessed. Genital sexual arousal was assessed using the LDI (blood perfusion) and PSG (penile tumescence) concurrently. Subjective sexual arousal was assessed using continuous and discrete self-reported measures. Main Outcome Measure: Results demonstrate the validity of the LDI as an imaging-based measure of male genital arousal, and one that is comparable to the PSG. Results: Significant increases in genital blood flow assessed via the LDI were observed for erotic, but not anxiety-provoking, humorous, or neutral films (F[1.65, 39.57] = 18.23, P < .001, ηp² = .43). A moderate, positive correlation between genital arousal measured via the LDI and PSG concurrently was observed (r = .40), despite considerable across-participant variability. In the current sample, the LDI and PSG both produced strong sexual concordance estimates (r = .51 and r = .46, respectively). When all LDI and PSG concordance estimates were correlated, a moderate relationship was revealed (r = .37). Clinical Implications: As a new imaging tool for male sexual psychophysiological arousal, the LDI holds promise for improving our understanding of issues related to men's sexual health. Strengths & Limitations: Movement artifacts produced by penile engorgement were a limitation to the LDI as an imaging technique. Further, the LDI used in the current study was a discrete measure of arousal, whereas the PSG was continuous; future research would benefit by using continuous measurement capabilities of contemporary LDI systems. Conclusion: As a valid measure of genital sexual arousal in men, the LDI holds promise as a tool that can be used to explore more nuanced questions about human sexuality, including cross-gender comparisons and real-time exploration of genital arousal patterns. Bossio JA, Singh M, Pukall CF. Concurrent assessment of penile blood flow and circumference as indicators of male sexual arousal. J Sex Med 2018;15:1570-1578.
No differences between sexual interest changers (ICs) and nonchangers (NC) were found for demographic data or for length of time between assessments. However, between-group comparisons revealed that ICs had significantly lower pedophilic indices at the initial assessment than NCs. Conclusions: Results from the current study indicate that relative pedophilic interest, as defined by increase in penile circumference in response to nonpedophilic stimuli as measured by PPT, changed in about 50% of men diagnosed with pedophilia who also had initial pedophilic PPT sexual responses. This represents a significant challenge to the hypothesis that sexual interest in men with pedophilia is unchangeable and should be the focus of future studies. abstract_id: PUBMED:29726639 Value of non-sexual penile erection for penile rehabilitation in men with erectile dysfunction Erectile dysfunction (ED) is a common male disease. Some related studies show that the prevalence of ED is nearly 52% in men aged 40 to 70 years and is increasing among younger males. Hypoxia is now considered to be an independent risk factor for ED and the mechanisms of hypoxia inducing ED are varied and complicated. Recently, an idea in penile rehabilitation has attracted much attention, which aims at improving erectile function by increasing oxygen supply to the cavernosum and reducing tissue fibrosis and apoptosis. The approaches to achieve non-sexual penile erection by increasing oxygen supply to the cavernosum, such as behavior therapy, medication, vacuum constriction device, and intracavernous injection, can simulate normal sexual erection and help patients with penile rehabilitation. This review focuses on the strategies for non-sexual penile erection in penile rehabilitation. abstract_id: PUBMED:2673137 Validity and ethics of penile circumference measures of sexual arousal: a critical review. Wheeler and Rubin (1987) advanced evidence that penile volume responses (PVRs) were no more sensitive than penile circumference responses (PCRs) in measuring erection which the authors incorrectly identified with sexual arousal. Knowledge of the literature would have led them to question that identification and the methodology of their study. PVRs have repeatedly been demonstrated to assess validly not erection but the sexual orientation of individuals, when derived from the early stage of erectile response to brief stimuli that were from their onset of moderate erotic strength. PCR assessment has been of the degree of erection to stimuli of 2-10 min duration. No success has been reported using PCR measures of erection to classify subjects individually as to their sexual orientation. Classification of groups of 30 but not 6 homosexuals was successful using their PCRs to nudes. Attempts to identify rapists and pedophiles from normals, and aggressive from nonaggressive rapists and pedophiles by PCRs have failed to be replicated. In comparing PVRs and PCRs, Wheeler and Rubin used as stimuli three 10-min presentations of a film which apparently did not immediately introduce erotic material. This procedure would not elicit meaningful PVRs. Though never validated as a measure of individuals' sexual arousal, PCR measures of erection are currently widely recommended for assessment and determining treatment of individual sex offenders. If these assessments could affect or are believed by the offenders to affect the outcome of the legal processes in which they are involved, the procedure is not only scientifically unsupported, it is unethical. 
Answer: Yes, there is evidence suggesting an inhibitory role of cortisol in the mechanism of male sexual arousal and penile erection. A study conducted to evaluate the role of cortisol in male sexual activity found that cortisol serum levels significantly decreased in the systemic circulation and the cavernous blood with increasing sexual arousal, as the flaccid penis became rigid. During detumescence, the mean cortisol level remained unaltered in the systemic circulation, while it decreased further in the cavernous compartment. The study concluded that the results strongly suggest an inhibitory role for cortisol in the mechanism of male sexual response and behavior, mediated via an effect on central structures rather than on the penile erectile tissue (PUBMED:14508617).
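Note: several of the abstracts above (e.g., PUBMED:30415812) summarize the agreement between genital and subjective arousal as "sexual concordance" correlations. As a minimal illustration of how such an estimate can be obtained, the Python sketch below averages within-participant Pearson correlations via a Fisher z-transform; the toy data, function names, and pooling choice are illustrative assumptions, not the analysis actually used in the cited study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

def mean_concordance(genital_by_participant, subjective_by_participant):
    """Fisher z-average of within-participant genital/subjective correlations."""
    zs = []
    for g, s in zip(genital_by_participant, subjective_by_participant):
        r = pearson_r(g, s)
        zs.append(np.arctanh(r))        # Fisher r-to-z transform
    return float(np.tanh(np.mean(zs)))  # back-transform the mean z

# Toy data: 3 participants, 4 films each (values purely illustrative).
genital = [[0.1, 0.2, 0.9, 0.1], [0.0, 0.3, 0.7, 0.2], [0.2, 0.1, 0.8, 0.1]]
subjective = [[1, 2, 8, 1], [1, 3, 7, 2], [2, 1, 9, 1]]
print(round(mean_concordance(genital, subjective), 2))
```

Published concordance estimates may pool participants differently (e.g., with multilevel models), so this sketch conveys only the general idea.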
Instruction: Can a prediction model for vaginal birth after cesarean also predict the probability of morbidity related to a trial of labor? Abstracts: abstract_id: PUBMED:18822401 Can a prediction model for vaginal birth after cesarean also predict the probability of morbidity related to a trial of labor? Objective: The objective of the study was to determine whether a model for predicting vaginal birth after cesarean (VBAC) can also predict the probability of morbidity associated with a trial of labor (TOL). Study Design: Using a previously published prediction model, we categorized women with 1 prior cesarean by chance of VBAC. Prevalence of maternal and neonatal morbidity was stratified by probability of VBAC success and delivery approach. Results: Morbidity became less frequent as the predicted chance of VBAC increased among women who underwent TOL (P < .001) but not elective repeat cesarean section (ERCS) (P > .05). When the predicted chance of VBAC was less than 70%, women undergoing a TOL were more likely to have maternal morbidity (relative risk [RR], 2.2; 95% confidence interval [CI], 1.5-3.1) than those who underwent an ERCS; when the predicted chance of VBAC was at least 70%, total maternal morbidity was not different between the 2 groups (RR, 0.8; 95% CI, 0.5-1.2). The results were similar for neonatal morbidity. Conclusion: A prediction model for VBAC provides information regarding the chance of TOL-related morbidity and suggests that maternal morbidity is not greater for those women who undergo TOL than those who undergo ERCS if the chance of VBAC is at least 70%. abstract_id: PUBMED:22971449 Validation of a prediction model for predicting the probability of morbidity related to a trial of labour in Quebec. Background: Pregnant women with a history of previous Caesarean section face the decision either to undergo an elective repeat Caesarean section (ERCS) or to attempt a trial of labour with the goal of achieving a vaginal birth after Caesarean (VBAC). Both choices are associated with their own risks of maternal and neonatal morbidity. We aimed to determine the external validity of a prediction model for the success of trial of labour after Caesarean section (TOLAC) that could help these women in their decision-making. Methods: We used a perinatal database including 185,437 deliveries from 32 obstetrical centres in Quebec between 2007 and 2011 and selected women with one previous Caesarean section who were eligible for a TOLAC. We compared the frequency of maternal and neonatal morbidity between women who underwent TOLAC and those who underwent an ERCS according to the probability of success of TOLAC calculated from a published prediction model. Results: Of 8508 eligible women, including 3113 who underwent TOLAC, both maternal and neonatal morbidities became less frequent as the predicted chance of VBAC increased (P < 0.05). Women undergoing a TOLAC were more likely to have maternal morbidity than those who underwent an ERCS when the predicted probability of VBAC was less than 60% (relative risk [RR] 2.3; 95% CI 1.4 to 4.0); conversely, maternal morbidity was not different between the two groups when the predicted probability of VBAC was at least 60% (RR 0.8; 95% CI 0.6 to 1.1). Neonatal morbidity was similar between groups when the probability of VBAC success was 70% or greater (RR 1.2; 95% CI 0.9 to 1.5). Conclusion: A prediction model for TOLAC success could be useful for predicting both TOLAC success and perinatal morbidity in a Canadian population.
Neither maternal nor neonatal morbidity is increased with a TOLAC when the probability of VBAC success is at least 70%. abstract_id: PUBMED:22927774 Validation of the prediction model for success of vaginal birth after cesarean delivery in Japanese women. Aim: To validate a previously developed prediction model for vaginal birth after cesarean (VBAC) using a Japanese cohort. Methods: We performed a cohort study of all term pregnant women with a vertex position, singleton gestation, and one prior low transverse cesarean delivery attempting a trial of labor between April 1985 and March 2010. Variables necessary for the prediction of successful VBAC were maternal age, pre-pregnancy body mass index, ethnicity, prior vaginal delivery, prior VBAC, and indication for prior cesarean delivery. They were extracted from medical records and put into the formula that calculates an individual woman's predicted VBAC success rate. The predicted rates were then partitioned into deciles and compared with the actual VBAC rates. The predictive ability of the model was assessed with a receiver operating characteristic curve and the area under the curve (AUC) was determined. Results: Seven hundred and twenty-five women who met the inclusion criteria had complete data available, of which 664 (91.6%) had VBAC. The predicted probability of VBAC, as calculated by the regression equation, was significantly higher in those who had a successful trial of labor (median 80.1%, interquartile range 71.5-88.7) than in those who did not (median 69.4%, interquartile range 59.9-78.9, P<0.001). The predictive model had an AUC of 0.80, which was comparable to that of the originally described model. In the deciles where the predicted rate exceeded 70%, the actual success rates were more than 90%. Conclusion: The previously published prediction model for VBAC developed in the USA is also applicable to Japanese women. abstract_id: PUBMED:30716285 Are prediction models for vaginal birth after cesarean accurate? Background: The use of trial of labor after cesarean delivery calculators in the prediction of successful vaginal birth after cesarean delivery gives physicians an evidence-based tool to assist with patient counseling and risk stratification. Before deployment of prediction models for routine care at an institutional level, it is recommended to test their performance initially in the institution's target population. This allows the institution to understand not only the overall accuracy of the model for the intended population but also to comprehend where the accuracy of the model is most limited when predicting across the range of predictions (calibration). Objective: The purpose of this study was to compare 3 models that predict successful vaginal birth after cesarean delivery with the use of a single tertiary referral cohort before continuous model deployment in the electronic medical record. Study Design: All cesarean births for failed trial of labor after cesarean delivery and successful vaginal birth after cesarean delivery at an academic health system between May 2013 and March 2016 were reviewed. Women with a history of 1 previous cesarean birth who underwent a trial of labor with a term (≥37 weeks gestation), cephalic, and singleton gestation were included. Women with antepartum intrauterine fetal death or fetal anomalies were excluded.
The probability of successful vaginal birth after cesarean delivery was calculated with the use of 3 prediction models: Grobman 2007, Grobman 2009, and Metz 2013 and compared with actual vaginal birth after cesarean delivery success. Each model's performance was measured with the use of concordance indices, Brier scores, and calibration plots. Decision curve analysis identified the range of threshold probabilities for which the best prediction model would be of clinical value. Results: Four hundred four women met the eligibility criteria. The observed rate of successful vaginal birth after cesarean delivery was 75% (305/404). Concordance indices were 0.717 (95% confidence interval, 0.659-0.778), 0.703 (95% confidence interval, 0.647-0.758), and 0.727 (95% confidence interval, 0.669-0.779), respectively. Brier scores were 0.172, 0.205, and 0.179, respectively. Calibration demonstrated that Grobman 2007 and Metz vaginal birth after cesarean delivery models were most accurate when predicted probabilities were >60% and were beneficial for counseling women who did not desire to have vaginal birth after cesarean delivery but had a predicted success rates of 60-90%. The models underpredicted actual probabilities when predicting success at <60%. The Grobman 2007 and Metz vaginal birth after cesarean delivery models provided greatest net benefit between threshold probabilities of 60-90% but did not provide a net benefit with lower predicted probabilities of success compared with a strategy of recommending vaginal birth after cesarean delivery for all women . Conclusion: When 3 commonly used vaginal birth after cesarean delivery prediction models are compared in the same population, there are differences in performance that may affect an institution's choice of which model to use. abstract_id: PUBMED:27262971 A prediction model of vaginal birth after cesarean in the preterm period. Background: A validated model exists that predicts the probability of vaginal birth after cesarean delivery in patients at term who are undergoing a trial of labor after cesarean delivery. However, a model that predicts the success of a vaginal birth after cesarean delivery in the preterm period has not been developed. Objective: We sought to develop and validate a predictive model for vaginal birth after cesarean delivery for women undergoing a trial of labor after cesarean delivery during the preterm period. Study Design: We performed a secondary analysis of a prospective cohort study designed to evaluate perinatal outcomes in women with a prior cesarean scar. We included women with 1 prior low transverse cesarean delivery undergoing a trial of labor after cesarean delivery with a vertex singleton pregnancy in the preterm period (26-36 weeks). Using multivariable logistic regression modeling, we constructed a predictive model for vaginal birth after cesarean delivery with information known at admission for preterm delivery. Using a 70% to 30% random split of the data, the model was developed in the training data and subsequently confirmed in the validation data. Predictions and area under the curve were based on a 10-fold cross-validated jackknife estimation and based on 1000 bootstrap resampling methods. The adequacy of all models was evaluated based on the Hosmer-Lemeshow goodness-of-fit test. Results: One thousand two hundred ninety-five women met our criteria for analysis. 
The significant predictors of vaginal birth after cesarean delivery success were chronic hypertension, hypertensive disease of pregnancy (gestational hypertension or preeclampsia), prior vaginal delivery, dilation on cervical examination at admission, prior vaginal birth after cesarean delivery, a recurring indication in a prior cesarean delivery, and induction of labor, as well as 2-way interactions between dilation and hypertensive disease of pregnancy, dilation and diabetes mellitus (pregestational or gestational), and induction of labor and hypertensive disease of pregnancy. The area under the curve from the prediction model was 0.80 (95% confidence interval, 0.77-0.83) and the model fit the data well (Hosmer-Lemeshow P = .367). The bootstrap and 10-fold cross-validated jackknife estimates of the corrected area under the curve of the model were 0.78 (95% confidence interval, 0.74-0.82) and 0.77 (95% confidence interval, 0.73-0.82), respectively, following incorporation of regression shrinkage. Conclusion: A cross-validated predictive model was created for patients undergoing a trial of labor after cesarean delivery in the preterm period using 8 variables known on admission. These factors were notably different from factors used in the model for term patients. This new model can be used to counsel patients in the preterm period who want to undergo a trial of labor after cesarean delivery on their predicted vaginal birth after cesarean delivery success. abstract_id: PUBMED:31788783 Third-trimester prediction of successful vaginal birth after one cesarean delivery-A Swedish model. Introduction: The objective was to create a clinically useful prediction model for vaginal birth in trial of labor after one cesarean section, appropriate for a third trimester consultation. Material And Methods: Women with one cesarean section and at least one following delivery (N = 38 686) in the Swedish Medical Birth Register, 1998-2013, were studied. The women were randomly divided into one development and one validation data set. From the development data set, variables associated with vaginal birth after cesarean (VBAC) were identified by univariable logistic regression. Stepwise backward selection was performed until all variables were statistically significant. From the final fitted multivariable logistic model, likelihood ratios were calculated in order to transpose odds ratios into clinically useful measurements. A constant, based on the delivery ward's VBAC in trial of labor rate, was used. By applying the likelihood ratios to the validation data set, the VBAC chance for each woman was estimated with Bayes' theorem, and the ability of the model to predict VBAC was explored using receiver operating characteristic (ROC) curves. Results: A previous VBAC, and a previous cesarean section for non-cephalic presentation, were the strongest VBAC predictors. The lowest chances were found for a previous cesarean section due to dystocia, and among women with <18 months since the last cesarean section. The area under the ROC curve was 0.67. Conclusions: The new model was satisfactory in predicting VBAC in trial of labor. Developed as a software application, it would become a clinically useful decision aid. abstract_id: PUBMED:33792462 Neonatal and maternal outcomes with trial of labor after two prior cesarean births: stratified by history of vaginal birth.
Introduction: To determine the impact of prior vaginal birth on neonatal and maternal outcomes among individuals undergoing a trial of labor after two cesarean births. Materials And Methods: This was a cross-sectional study using the U.S. National Vital Statistics 2014-2018 period linked birth and infant death data. Inclusion criteria were term, cephalic, singleton pregnancies with two prior cesarean births. The primary exposure variable was a trial of labor after cesarean vs prelabor repeat cesarean birth. Cohorts were defined by the presence or absence of a prior vaginal birth. The primary outcome was a composite of adverse neonatal outcomes (Apgar score <5 at 5 min, assisted ventilation >6 h, neonatal seizures, or neonatal death within 27 days). Secondary outcomes included a maternal composite and the cesarean birth rate. Propensity score matching was used to account for baseline differences in treatment allocation within each cohort, and conditional logistic regression assessed the association between the exposure and outcomes. Results: The composite neonatal adverse outcome was significantly higher in those undergoing a trial of labor after cesarean compared to prelabor repeat cesarean birth in both individuals without a prior vaginal birth (8.2 vs 11.6 per 1000 live births, OR 1.41; 95% CI 1.12-1.70) and with a prior vaginal birth (9.6 vs 12.4 per 1000 live births, OR 1.30; 95% CI 1.08-1.57). The composite maternal adverse outcome was significantly higher among individuals without a prior vaginal birth undergoing trial of labor after cesarean (6.0 vs 9.5 per 1000 live births, OR 1.59; 95% CI 1.26-2.09), but was similar in those with a prior vaginal birth (7.9 vs 9.3 per 1000 live births, OR 1.18; 95% CI 0.97-1.46). Conclusion: In individuals with two prior cesarean births, trial of labor after cesarean was associated with increased neonatal adverse outcomes when compared to prelabor repeat cesarean birth, irrespective of a history of vaginal birth. In individuals with a prior vaginal birth, the composite maternal adverse outcome was not elevated in the trial of labor cohort. abstract_id: PUBMED:25801723 Predicting the chance of vaginal delivery after one cesarean section: validation and elaboration of a published prediction model. Objective: We aimed to validate a widely used US prediction model for vaginal birth after cesarean (Grobman et al. [8]) and modify it to suit Swedish conditions. Study Design: Women having experienced one cesarean section and at least one subsequent delivery (n=49,472) in the Swedish Medical Birth Registry 1992-2011 were randomly divided into two data sets. In the development data set, variables associated with successful trial of labor were identified using multiple logistic regression. The predictive ability of the estimates previously published by Grobman et al., and of our modified and new estimates, respectively, was then evaluated using the validation data set. The accuracy of the models for prediction of vaginal birth after cesarean was measured by area under the receiver operating characteristics curve. Results: For maternal age, body mass index, prior vaginal delivery, and prior labor arrest, the odds ratio estimates for vaginal birth after cesarean were similar to those previously published. 
The prediction accuracy increased when information on indication for the previous cesarean section was added (from an area under the receiver operating characteristic curve of 0.69 to 0.71), and increased further when maternal height and delivery unit cesarean section rates were included (area under the receiver operating characteristic curve = 0.74). The correlation between the individual predicted vaginal birth after cesarean probability and the observed trial of labor success rate was high in all the respective predicted probability deciles. Conclusion: Customization of prediction models for vaginal birth after cesarean is of considerable value. Choosing relevant indicators for a Swedish setting made it possible to achieve excellent prediction accuracy for success in trial of labor after cesarean. During the delicate process of counseling about preferred delivery mode after one cesarean section, considering the results of our study may facilitate the choice between a trial of labor and an elective repeat cesarean section. abstract_id: PUBMED:29052016 Prediction of vaginal birth after one cesarean delivery for non-progressive labor. Purpose: A model exists that predicts the probability of vaginal birth after cesarean (VBAC). That model is not stratified by indication at first cesarean. The aim of the study was to identify factors that may predict successful VBAC in patients operated for arrest of dilatation or descent at their first cesarean. Methods: Retrospective analysis of all women with trials of labor after one cesarean (TOLAC) for non-progressive labor between November 2008 and October 2015 was performed (n = 231). A multivariate logistic regression analysis was carried out to generate a prediction model for VBAC at hospital admission for planned TOLAC. Results: During the study period, 231 parturient women chose to undergo TOLAC following one previous cesarean delivery for non-progressive labor. Successful VBAC occurred in 155 (67.0%) parturient women. A model consisting of previous successful VBAC, lower head station on decision at previous cesarean delivery, lower newborn weight at previous cesarean delivery and larger cervical effacement on admission at delivery planned for TOLAC correctly classified 75.3% of cases (R² = 0.324, AUC 0.80, 95% CI 0.70-0.89, p < 0.001). Conclusion: A predictive model, which incorporates four variables available at hospital admission for the planned TOLAC, has been developed that allows determination of the likelihood of successful VBAC following one cesarean delivery for non-progressive labor. abstract_id: PUBMED:33279807 Role of ante-partum ultrasound in predicting vaginal birth after cesarean section: A prospective cohort study. Introduction: Vaginal birth after caesarean delivery is associated with better outcomes compared to repeat caesarean section. Accurate antenatal risk stratification of women undergoing a trial of labor after caesarean section is crucial in order to maximize perinatal and maternal outcomes. The primary aim of this study was to explore the role of antepartum ultrasound in predicting the probability of vaginal birth in women attempting trial of labor; the secondary aim was to build a multiparametric prediction model including pregnancy and ultrasound characteristics able to predict vaginal birth and compare its diagnostic performance with previously developed models based exclusively upon clinical and pregnancy characteristics.
Methods: Prospective study of consecutive singleton pregnancies scheduled for trial of labor undergoing a dedicated antepartum ultrasound assessment at 36-38 weeks of gestation. Head circumference, estimated fetal weight, cervical length, and sub-pubic angle were recorded before the onset of labour. The obstetricians and midwives attending the delivery suite were blinded to the ultrasound findings. Multivariate logistic regression and area under the curve analyses were used to explore the strength of association and test the diagnostic accuracy of different maternal and ultrasound characteristics in predicting vaginal birth. Comparison with previously reported clinical models developed by the Maternal-Fetal Medicine Units Network (Grobman's models) was performed using DeLong analysis. Results: A total of 161 women who underwent trial of labor were included in the study. Among them, 114 (70.8%) women had a successful vaginal birth. At multivariable logistic regression analysis, maternal height (adjusted odds ratio (aOR): 1.24; 95% confidence interval (CI): 1.17-1.33), previous cesarean section for labor arrest (aOR: 0.77; 95% CI: 0.66-0.93), cervical dilation at admission (aOR: 1.35; 95% CI: 1.12-1.74), fetal head circumference (aOR: 0.77; 95% CI: 0.43-0.89), subpubic angle (aOR: 1.39; 95% CI: 1.11-1.99) and cervical length (aOR: 0.82; 95% CI: 0.54-0.98) were independently associated with VBAC. A model integrating these variables had an area under the curve of 0.839 (95% CI: 0.710-0.727) for the prediction of vaginal birth, significantly higher than those achieved with the intake (0.694; 95% CI: 0.549-0.815; p = 0.01) and admission (0.732; 95% CI: 0.590-0.84; p = 0.04) models reported by Grobman. Conclusion: Antepartum prediction of vaginal birth after a caesarean section is feasible. Fetal head circumference, subpubic angle and cervical length are independently associated with and predictive of vaginal birth. Adding these variables to a multiparametric model including maternal parameters improves the diagnostic accuracy for vaginal birth compared to models based only on maternal characteristics. Answer: Yes, a prediction model for vaginal birth after cesarean (VBAC) can also predict the probability of morbidity related to a trial of labor (TOL). A study found that among women who underwent TOL, maternal and neonatal morbidity became less frequent as the predicted chance of VBAC increased. Specifically, when the predicted chance of VBAC was less than 70%, women undergoing a TOL were more likely to have maternal morbidity compared to those who underwent an elective repeat cesarean section (ERCS). However, when the predicted chance of VBAC was at least 70%, total maternal morbidity was not different between the two groups. The results were similar for neonatal morbidity, suggesting that the prediction model for VBAC provides information regarding the chance of TOL-related morbidity (PUBMED:18822401).
Neonatal morbidity was similar between groups when the probability of VBAC success was 70% or greater (PUBMED:22971449). These findings indicate that a prediction model for VBAC can be a useful tool not only for predicting the likelihood of a successful vaginal birth after cesarean but also for assessing the risk of morbidity associated with attempting a TOL.
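Note: the abstracts in this section describe logistic-regression-based calculators (e.g., the Grobman model) that convert characteristics known at admission into a predicted VBAC probability, which is then evaluated for discrimination (AUC) and calibration (e.g., Brier score) and applied to counseling thresholds such as the 70% cut-off discussed in PUBMED:18822401 and PUBMED:22971449. The Python sketch below shows the general mechanics only; the coefficient values, variable set, and helper names are hypothetical placeholders, not the published model weights.

```python
import math

# Hypothetical coefficients for illustration only -- NOT the published
# Grobman (or any other) model weights. A real calculator would use the
# fitted values reported with the original regression.
COEFS = {
    "intercept": 2.0,
    "maternal_age_years": -0.03,
    "bmi": -0.05,
    "prior_vaginal_birth": 0.8,
    "prior_vbac": 0.9,
    "recurring_indication": -0.7,  # e.g. prior cesarean for labor arrest
}

def predicted_vbac_probability(patient):
    """Logistic-regression-style score converted to a probability."""
    z = COEFS["intercept"]
    for name, beta in COEFS.items():
        if name != "intercept":
            z += beta * patient[name]
    return 1.0 / (1.0 + math.exp(-z))

def brier_score(predicted, observed):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

def counseling_note(p):
    """Apply the >=70% cut-off discussed in PUBMED:18822401 / PUBMED:22971449."""
    if p >= 0.70:
        return "maternal morbidity with TOL not expected to exceed ERCS"
    return "higher predicted TOL-related morbidity; counsel accordingly"

patient = {"maternal_age_years": 32, "bmi": 27.0, "prior_vaginal_birth": 1,
           "prior_vbac": 0, "recurring_indication": 0}
p = predicted_vbac_probability(patient)
print(f"predicted VBAC probability: {p:.2f} -> {counseling_note(p)}")

# Toy calibration check for a small validation set (outcomes are made up).
preds = [p, 0.55, 0.82, 0.40]
outcomes = [1, 0, 1, 1]
print(f"Brier score: {brier_score(preds, outcomes):.3f}")
```

A real validation exercise would use the originally fitted coefficients and compare predicted probabilities against observed outcomes across deciles, as described in PUBMED:22927774 and PUBMED:30716285.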
Instruction: Are high-pass resolution perimetry thresholds sampling limited or optically limited? Abstracts: abstract_id: PUBMED:12199543 Are high-pass resolution perimetry thresholds sampling limited or optically limited? Purpose: It has been reported that high-pass resolution perimetry (HRP) provides a means of noninvasively determining retinal ganglion cell density. However, there is evidence to suggest that this may not be true. The purpose of the present study was to determine whether HRP thresholds are sampling limited, which is a necessary condition for being able to determine retinal ganglion cell density psychophysically. Methods: This study measured resolution and detection performance for a range of grating-based stimuli under the testing conditions that HRP uses and compared these with performance of the ring stimulus. Results: The results show that detection and resolution acuity under HRP test conditions were often equivalent, in accordance with previous investigations. However, the results also show that the thresholds underestimated the true level of resolution acuity in the periphery because increasing stimulus contrast increased performance. Conclusion: These findings suggest that HRP thresholds cannot be regarded as sampling limited, but rather they are optically limited. We therefore conclude that HRP thresholds cannot be regarded as a direct measure of the underlying ganglion cell density. abstract_id: PUBMED:8362630 Properties of high-pass resolution perimetry targets. Relationships between high-pass resolution perimetry ring targets and conventional perimetry and acuity targets were explored by optical analytical techniques and by comparative measurements in normal humans. High-pass resolution and acuity targets produced closely proportional resolution measurements, showing that the critical ring detail is the width of the bright core. High-pass resolution and conventional perimetry thresholds were not equally well correlated. From a purely optical point of view, high-pass resolution targets appeared to lose somewhat more contrast on defocusing. On the other hand, high-pass resolution appears to show less variability in practical, clinical use. abstract_id: PUBMED:9197567 Cataract and high-pass resolution perimetry. The present study describes the influence of cataract on high-pass resolution perimetry results. Twenty-five otherwise healthy patients were examined before and after cataract surgery. Their preoperative visual acuities ranged from 0.1 to 0.8 and their mean resolution thresholds from 3.9 to 12.3 dB. Both elevated mean thresholds and local visual fields defects were observed. In patients with low-grade cataract, i.e. preoperative visual acuity 0.3 to 0.65, the mean improvement in perimetric results after operation was approximately 1 dB. In patients with higher degrees of lens opacity, visual acuity 0.1 to 0.25, the difference between pre- and postoperative perimetry results showed a wide range, 1.4-6.2 dB. In conclusion, cataract induced different types of visual field defects. The general threshold increase due to low-grade cataract (VA > or = 0.3) could be compensated for by subtracting 1 dB from the measured value. abstract_id: PUBMED:2023738 Intratest variability in conventional and high-pass resolution perimetry. The authors studied the effects of threshold, age, and visual field location on intratest variability in 11 normal subjects between the ages of 15 and 50 years (mean, 35.37 years). 
The subjects were tested with a conventional manual (Tübinger) perimeter and a high-pass resolution (Ring) perimeter at the following locations: (0 degrees, 30 degrees), (0 degrees, 15 degrees), (0 degrees, -15 degrees), and (0 degrees, -30 degrees). Frequency-of-seeing curves were constructed to obtain accurate estimates of threshold and intratest variability. Although location did not have a significant effect on intratest variability in conventional perimetry, intratest variability increased with threshold (P = 0.031) and age (P = 0.012). The results with resolution perimetry contrasted sharply and showed that neither threshold, age, nor location had a significant effect on intratest variability (P greater than 0.225). Although the thresholds with the two types of perimetry were correlated, intratest variability was not. These results show that resolution perimetry may be able to bypass some of the limitations encountered in obtaining thresholds with conventional perimetry, thereby providing more reliable information. abstract_id: PUBMED:1594214 High-pass resolution perimetry in neuro-ophthalmology. Clinical impressions. High-pass resolution perimetry is a new technique for visual field testing that determines extrafoveal resolution thresholds. The authors used this method in a neuro-ophthalmology clinic and compared it with other visual field tests. The main advantages of the technique were the short test time and the strong preference by patients for this technique over conventional automated perimetry. The sensitivity and specificity of the test seemed to be as good as or better than conventional perimetry. A minor disadvantage was its slightly less precise spatial definition of field defects. In this report, the authors present their experiences with this new perimetric technique. abstract_id: PUBMED:1762726 Influence of amblyopia on high-pass resolution perimetry. Seventeen subjects with amblyopia due to strabismus and anisometropia were examined using high-pass resolution perimetry. In addition to the standard technique, testing the 5-30 degrees visual field, an optional program testing the central 4 degrees visual field was used. Threshold elevation was confined to the central 4 degrees visual field and strongly correlated with visual acuity, expressed as minimum angle of resolution. Standard high-pass resolution perimetry did not reveal any significant abnormalities in the amblyopic subjects. In conclusion, the technique is useful for the diagnosis of ocular or neurologic disorders also in subjects with a high degree of amblyopia. abstract_id: PUBMED:7825405 The effect of treatment on the results of high-pass resolution perimetry in glaucoma. In order to find out to which extent high-pass resolution perimetry would detect any changes in patients treated for glaucoma, resolution perimetry results were evaluated from 56 glaucoma patients and 15 untreated ocular hypertensive patients followed for 2 to 3 years. Fifty-nine of the 71 examined patients showed lower resolution thresholds, i.e. increased sensitivity, after 2 years, compared to initial values. The threshold decrease was significantly larger in the treated glaucoma patients (median 1.22 dB) than in the untreated ocular hypertensive patients (0.48 dB). The threshold decrease in the untreated ocular hypertensive group corresponds to the previously described learning effect. 
In 35 of the 56 treated glaucoma patients the thresholds improved more than 0.84 dB, the upper confidence limit in the untreated group, which may indicate a beneficial effect of antiglaucoma therapy in these patients. The threshold change was unrelated to initial resolution threshold and cannot be explained by a 'sorting' effect. The observations in the current study using resolution perimetry indicate that improved visual function can be demonstrated in many patients treated for early glaucoma, at least during the first 2 years of treatment. abstract_id: PUBMED:8334916 High-pass resolution perimetry. A clinical review. Review of all available reports comparing high-pass resolution perimetry (HRP) and conventional perimetry in normals and in subjects with different visual disorders reveals closely comparable aspects of sensitivity, specificity, and reliability. HRP shows important advantages concerning variability, test duration, and subject preferences. Drawbacks seem largely limited to somewhat loose renditions of visual field defects of small area and large depth. Otherwise HRP's novel format of graphic result presentation may be better suited to visual evaluation than conventional gray-scale maps with their lumping of thresholds and extensive interpolations. Several examples are provided of visual field defects due to various lesions throughout the visual system. abstract_id: PUBMED:1543225 High-pass resolution perimetry in eyes with ocular hypertension and primary open-angle glaucoma. We analyzed the results of high-pass resolution perimetry on normal eyes, eyes with ocular hypertension, and eyes with primary open-angle glaucoma. The subjects were matched for age and lens density. We controlled for refraction, visual acuity, pupil size, miotic medications, and learning effects. Under these conditions the glaucomatous eyes showed a significant reduction in overall resolution threshold compared to both normal (P less than or equal to .01) and hypertensive eyes (P less than .01). The hypertensive eyes were not significantly different from normal. Standard increment threshold fields and high-pass resolution visual fields had an agreement of abnormality of 67%. Resolution perimetry showed a 92% agreement with standard perimetry in the location of the defect, when the defect was apparent on both tests. These results indicated that resolution perimetry may be useful for diagnosis and management of primary open-angle glaucoma. abstract_id: PUBMED:14704538 Visual field of high-pass resolution perimetry in normal subjects. Purpose: To investigate the properties of the visual field of high-pass resolution perimetry in normal subjects. Methods: Four centers collected normative data for high-pass resolution perimetry. In two of the centers the subjects were stratified by age. One eye was tested per subject using high-pass resolution perimetry (Ophthimus). We tested 640 normal subjects and describe their visual field results by test location. We also analyzed the data by concentric zone, age, and by testing center. Results: The individual test location averages confirmed a reduction in resolution with eccentricity. Resolution thresholds increased with age by 0.025 dB per year (P < 0.001). The mean ring size increased by about 1 dB from age 20 to 70. Among centers there were significant differences in the means and the change in threshold with age (P = 0.009). Conclusion: The normal visual field of high-pass resolution perimetry is characterized by an increase in threshold with eccentricity. 
We found significant differences among the centers that were likely due to multiple factors including differences in subject selection criteria. Whether such differences occur with other perimetric techniques is unknown. Answer: High-pass resolution perimetry (HRP) thresholds are optically limited rather than sampling limited. This conclusion is drawn from a study that measured resolution and detection performance for a range of grating-based stimuli under HRP test conditions and compared these with the performance of the ring stimulus. The findings indicated that detection and resolution acuity under HRP test conditions were often equivalent, but the thresholds underestimated the true level of resolution acuity in the periphery because increasing stimulus contrast increased performance. Therefore, HRP thresholds cannot be regarded as a direct measure of the underlying ganglion cell density (PUBMED:12199543).
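Note: PUBMED:2023738 above estimates thresholds and intratest variability from frequency-of-seeing curves, and the other perimetry abstracts report resolution thresholds as ring size in dB. The Python sketch below fits a logistic psychometric function to hypothetical frequency-of-seeing data by grid search, recovering a threshold (the 50%-seen point) and a slope (a rough index of variability); the data and the fitting procedure are illustrative assumptions, not those used in the cited studies.

```python
import numpy as np

def logistic_psychometric(x, threshold, slope):
    """Proportion of presentations 'seen' as a function of ring size (dB)."""
    return 1.0 / (1.0 + np.exp(-(x - threshold) / slope))

def fit_frequency_of_seeing(levels, prop_seen):
    """Crude grid-search fit returning (threshold, slope).

    The threshold is the 50%-seen point; the slope is one rough index of
    intratest variability (a shallower curve implies more response scatter).
    """
    levels = np.asarray(levels, dtype=float)
    prop_seen = np.asarray(prop_seen, dtype=float)
    best_thr, best_slope, best_sse = None, None, np.inf
    for thr in np.arange(levels.min(), levels.max() + 0.05, 0.05):
        for slope in np.arange(0.1, 3.0, 0.05):
            pred = logistic_psychometric(levels, thr, slope)
            sse = float(np.sum((pred - prop_seen) ** 2))
            if sse < best_sse:
                best_thr, best_slope, best_sse = thr, slope, sse
    return best_thr, best_slope

# Toy frequency-of-seeing data: ring size in dB vs proportion seen.
levels = [2, 3, 4, 5, 6, 7, 8]
prop_seen = [0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00]
thr, slope = fit_frequency_of_seeing(levels, prop_seen)
print(f"estimated threshold = {thr:.2f} dB, slope = {slope:.2f} dB")
```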
Instruction: Wrong-side/wrong-site, wrong-procedure, and wrong-patient adverse events: Are they preventable? Abstracts: abstract_id: PUBMED:31934519 An Update on Wrong-Site Spine Surgery. Study Design: Broad narrative review of current literature and adverse event databases. Objective: The aim of this review is to report the current state of wrong-site spine surgery (WSSS), whether the Universal Protocol has affected the rate, and the current trends regarding WSSS. Methods: An updated review of the current literature on WSSS, the Joint Commission sentinel event statistics database, and other state adverse event statistics database were performed. Results: WSSS is an adverse event that remains a potentially devastating problem, and although the incidence is difficult to determine, the rate is low. However, given the potential consequences for the patient as well as the surgeon, WSSS remains an event that continues to be reported alarmingly as often as before the implementation of the Universal Protocol. Conclusions: A systems-based approach like the Universal Protocol should be effective in preventing wrong-patient, wrong-procedure, and wrong-sided surgeries if the established protocol is implemented and followed consistently within a given institution. However, wrong-level surgery can still occur after successful completion of the Universal Protocol. The surgeon is the sole provider who can establish the correct vertebral level during the operation, and therefore, it is imperative that the surgeon design and implement a patient-specific protocol to ensure that the appropriate level is identified during the operation. abstract_id: PUBMED:34088279 Analysis and outcomes of wrong site thyroid surgery. Background: In thyroid surgery, wrong-site surgery (WSS) is considered a rare event and seldom reported in the literature. Case Presentation: This report presents 5 WSS cases following thyroid surgery in a 20-year period. We stratified the subtypes of WSS in wrong target, wrong side, wrong procedure and wrong patient. Only planned and elective thyroid surgeries present WSS cases. The interventions were performed in low-volume hospitals, and subsequently, the patients were referred to our centres. Four cases of wrong-target procedures (thymectomies [n = 3] and lymph node excision [n = 1] performed instead of thyroidectomies) and one case of wrong-side procedure were observed in this study. Two wrong target cases resulting additionally in wrong procedure were noted. Wrong patient cases were not detected in the review. Patients experienced benign, malignant, or suspicious pathology and underwent traditional surgery (no endoscopic or robotic surgery). 40% of WSS led to legal action against the surgeon or a monetary settlement. Conclusion: WSS is also observed in thyroid surgery. Considering that reports regarding the serious complications of WSS are not yet available, these complications should be discussed with the surgical community. Etiologic causes, outcomes, preventive strategies of WSS and expert opinion are presented. abstract_id: PUBMED:25673117 Wrong site surgery : Incidence, risk factors and prevention Background: Wrong site surgery defines a category of rare but totally preventable complications in surgery and other invasive disciplines. Such complications could be associated with severe morbidity or even death. As such complications are entirely preventable, wrong site surgery has been declared by the World Health Organization to be a "never event". 
Material And Methods: A selective search of the PubMed database using the MeSH terms "wrong site surgery", "wrong site procedure", "wrong side surgery" and "wrong side procedure" was performed. Results: The incidence of wrong site surgery has been estimated at 1 out of 112,994 procedures; however, the number of unreported cases is estimated to be higher. Although wrong site surgery occurs in all surgical specialties, the majority of cases have been recorded in orthopedic surgery. Breakdown in communication has been identified as the primary cause of wrong site surgery. Risk factors for wrong site surgery include time pressure, emergency procedures, multiple procedures on the same patient by different surgeons, and obesity. Checklists have the potential to reduce or prevent the occurrence of wrong site surgery. Conclusion: The awareness that to err is human and the individual willingness to recognize and prevent errors are the prerequisites for reducing and preventing wrong site surgery. abstract_id: PUBMED:34221617 A perspective on wrong level, wrong side, and wrong site spine surgery. Background: Four of the most common "errors" in spine surgery are: operating on the wrong patient, doing the wrong procedure, performing wrong-level surgery (WLS), and/or performing wrong-sided surgery (WSS). Although preoperative verification protocols (i.e. the Universal Protocol, routine Time-Outs, and using the 3 R's (i.e. right patient, right procedure, right level/side)) have largely limited the first two "errors," WLS and WSS still occur with an unacceptably high frequency. Methods: In 20 studies, we identified the predominant factors contributing to WLS/WSS: unusual anatomical anomalies/variants (i.e. sacralized lumbar vertebrae, lumbarized sacral vertebrae, Klippel-Feil vertebrae, block vertebrae, butterfly vertebrae, obesity/morbid obesity), inadequate/poor interpretation of X-rays/fluoroscopic intraoperative images, and failure to follow different verification protocols. Results: "Human error" was another major risk factor contributing to the failure to operate at the correct level/side (WLS/WSS). Factors comprising "human error" included: surgeon/staff fatigue, rushing, emergency circumstances, lack of communication, hierarchical behavior in the operating room, and failure to "speak up". Conclusion: Utilizing the Universal Protocol, routine Time-Outs, and the 3 R's largely avoids operating on the wrong spine patient and performing the wrong procedure. However, these guidelines have not yet sufficiently reduced the frequency of WLS and WSS. Greater recognition of the potential pitfalls contributing to WLS/WSS as reviewed in this perspective should better equip spine surgeons to avert/limit such "errors" in the future. abstract_id: PUBMED:23508946 Patient safety in spine surgery: regarding the wrong-site surgery. Patient safety regarding wrong site surgery has been one of the priority issues in surgical fields, including spine care. Since a wrong-side surgery on a diabetic foot patient was reported in the public mass media in 1996, wrong-site surgery has attracted wide public interest as a patient safety issue.
Despite the many wrong-site surgery prevention campaigns in spine care, such as the Operate Through Your Initials program by the Canadian Orthopaedic Association, the Sign Your Site program by the American Academy of Orthopaedic Surgeons, the Sign, Mark and X-ray program by the North American Spine Society, and the Universal Protocol program by the Joint Commission, the incidence of wrong-site surgery has not decreased. To prevent wrong-site surgery in spine surgery, spine surgeons must put patient safety first, complying with hospital policies regarding patient safety. In the operating room, surgeons need to do their best to level the hierarchy, enabling all to speak up if any patient safety concerns are noted. Changing the operating room culture is an essential part of patient safety in spine surgery. abstract_id: PUBMED:28195826 Wrong-Site Surgery in California, 2007-2014. Objective: The implementation of a universal surgical safety protocol in 2004 was intended to minimize the prevalence of wrong-site surgery (WSS). However, complete elimination of WSS in the operating room continues to be a challenge. The purpose of this study is to evaluate the prevalence and etiology of WSS in the state of California. Study Design: A retrospective study of all WSS reports investigated by the California Department of Public Health between 2007 and 2014. Methods: Prevalence of overall and specialty-specific WSS, causative factors, and recommendations on further improvement are discussed. Results: A total of 95 cases resulted in incident reports to the California Department of Public Health and were included in our study. The most common errors were operating on the wrong side of the patient's body (n = 60, 62%), performing the wrong procedure (n = 21, 21%), operating on the wrong body part (n = 12, 12%), and operating on the wrong patient (n = 2, 2%). WSS was most prevalent in orthopedic surgery (n = 33, 35%), followed by general surgery (n = 26, 27%) and neurosurgery (n = 16, 17%). All 3 otolaryngology WSS cases in California were associated with the ear. Conclusion: WSS continues to surface despite national efforts to decrease its prevalence. Future research could establish best practices to avoid these "never events" in otolaryngology and other surgical specialties. abstract_id: PUBMED:33195386 Innovative Technology System to Prevent Wrong Site Surgery and Capture Near Misses: A Multi-Center Review of 487 Cases. Introduction: Wrong site surgery (WSS) is a preventable error. When these events do occur, they are often devastating to the patient, nursing staff, surgeon, and facility where the surgery was performed. Despite the implementation of protocols and checklists to reduce the occurrence of WSS, the rates are estimated to be unchanged. Materials and Methods: An innovative technology was designed to prevent WSS through a systems-based approach. The StartBox Patient Safety System was utilized at six sites by 11 surgeons. The incidence of near misses and WSS was reviewed. Results: The StartBox System was utilized for 487 orthopedic procedures, including spine, sports medicine, hand, and joint replacement. There were no occurrences of WSS events. Over the course of these procedures, medical staff recorded 17 near misses utilizing the StartBox System. Conclusions: StartBox successfully performed all tasks without technical errors and identified 17 near miss events. The use of this system resulted in the occurrence of zero wrong site surgeries.
abstract_id: PUBMED:26025741 Concept analysis: wrong-site surgery. A concept analysis was conducted on the concept of wrong-site surgery (WSS) using the principle-based method of Penrod and Hupcey. It included analysis of WSS within the context of epistemological, pragmatic, linguistic, and logical principles. The analysis found that WSS is an important concept that is universally accepted, but the definition could be improved with the inclusion of comprehensive labeling for the types of WSS that may occur, such as wrong patient, wrong site, wrong level/part, wrong procedure, and wrong side. Wrong-site surgery falls into the domains of both nursing and medicine, and there is limited research on the topic specific to nursing interventions, perceptions, and contributions to prevent WSS. abstract_id: PUBMED:23960510 Experience of wrong-site tooth extraction among Nigerian dentists. Objective: To report the experience of wrong-site tooth extraction among Nigerian dentists. Study Design: A self-administered questionnaire was distributed among a cross-section of Nigerian dentists. Information requested included personal experience of wrong-site tooth/teeth extraction and its after-effects, possible reasons for wrong-site tooth extraction, and documentation of the event in the patient's case record. Respondents were also asked if they were aware of any colleagues who had previously experienced wrong-site tooth extraction and of the possible legal implications of the event, and if they were aware of the universal protocol for preventing wrong site, wrong procedure, and wrong person surgery. Results: Twenty-two (13%) of the respondents reported having extracted a wrong tooth. The event occurred within 5 years after graduation in most cases. Most respondents (53.6%) informed the patient immediately after the event. Only 68% of the respondents documented the event in the patient's case record. The most common reasons for wrong-site tooth extraction were heavy workload, the presence of multiple condemned teeth, and miscommunication between dentists. Fifty-five percent of respondents were aware of a colleague who had extracted a wrong tooth. The most probable legal implication of wrong-site tooth extraction, according to the respondents, was litigation by the patient. Only 25% of dentists were aware of a universal protocol for preventing wrong-site surgery. Conclusions: Wrong tooth/teeth extraction is not an uncommon event in the studied environment. The need to be familiar with the universal protocol on wrong-site surgery and its legal implications is highlighted. abstract_id: PUBMED:24669124 Wrong site surgery! How can we stop it? Introduction: "Primum non nocere" (first do no harm): Hippocrates (c. 460 BC-377 BC). Wrong site surgery is the fourth commonest sentinel event after patient suicide, operative and post-operative complications, and medication errors. Misinterpretation of the clinic letters or radiology reports is the commonest reason for the wrong site being marked before surgery. Materials And Methods: We analyzed 50 cases each of operations carried out on the kidney, ureter, and the testis. The side mentioned on clinic letters, the consent form, and radiology reports was also studied. The results were analyzed in detail to determine where the potential pitfalls were likely to arise. Results: A total of 803 clinic letters from 150 cases were reviewed. The side of disease was not documented in 8.71% and five patients had the wrong side mentioned in one of their clinic letters.
In the radiology reports, the side was not mentioned in three cases and was reported wrongly in two patients. No wrong side was ever consented for and no wrong side surgery was performed. Conclusion: The side of surgery was not always indicated in the clinic letter, theatre list, or consent form despite the procedure being carried out on a bilateral organ. As misinterpretation is a major cause of wrong side surgery, it is prudent that the side is mentioned every time in every clinic letter, on the consent form, and on the theatre list. The WHO surgical safety checklist has already been very effective in minimizing wrong site surgery in the National Health Service. Answer: Wrong side, wrong site, wrong procedure, and wrong patient adverse events, collectively known as wrong site surgery (WSS), are considered preventable "never events" as declared by the World Health Organization (PUBMED:25673117). Despite the implementation of various protocols and safety measures, such as the Universal Protocol, routine Time-Outs, and the use of checklists, these adverse events continue to occur, although they are rare (PUBMED:31934519; PUBMED:34221617; PUBMED:33195386). The Universal Protocol and other preoperative verification protocols have been effective in reducing the incidence of operating on the wrong patient and performing the wrong procedure, but wrong-level surgery (WLS) and wrong-sided surgery (WSS) still happen with an unacceptably high frequency (PUBMED:34221617). Factors contributing to these errors include unusual anatomical anomalies, inadequate interpretation of images, and human error such as fatigue, rushing, emergency circumstances, lack of communication, and hierarchical behavior in the operating room (PUBMED:34221617). The incidence of WSS is difficult to determine, but it is known to occur across various surgical specialties, with a significant number of cases reported in orthopedic surgery, general surgery, and neurosurgery (PUBMED:28195826). The use of innovative technology systems, such as the StartBox Patient Safety System, has shown promise in preventing WSS and capturing near misses (PUBMED:33195386). To prevent WSS, it is crucial for surgeons to comply with hospital policies regarding patient safety, encourage a culture change in the operating room to level hierarchies, and enable all staff to speak up about patient safety concerns (PUBMED:23508946). Awareness, recognition of risk factors, and individual willingness to prevent errors are essential for reducing and preventing WSS (PUBMED:25673117). In conclusion, while WSS events are preventable, their complete elimination remains a challenge. Continuous efforts to improve communication, adherence to safety protocols, and the use of technology may help further reduce the incidence of these adverse events.
Instruction: Functional recovery period after arthroscopic rotator cuff repair: is it predictable before surgery? Abstracts: abstract_id: PUBMED:21107925 Functional recovery period after arthroscopic rotator cuff repair: is it predictable before surgery? Background: Several studies have reported functional recovery of the shoulder after arthroscopic rotator cuff repair (ARCR). Preoperative estimation of the time required for functional recovery is important for determining surgical indications and for planning timing of the surgery and an appropriate postoperative physical therapy. Questions/purposes: We therefore asked: (1) how long it takes to obtain functional recovery after ARCR, and (2) what preoperative factors influence functional recovery time. Patients And Methods: We retrospectively evaluated 201 patients who had undergone ARCR. Using the Japanese Orthopaedic Association (JOA) shoulder scoring system, we defined the functional recovery period as the time required to achieve a score greater than 80% in each component. We evaluated the functional recovery periods and assessed preoperative influencing factors such as age, gender, shoulder stiffness, morphologic features of rotator cuff tears, and rotator cuff tear size. Results: Sixty-three patients (31%) took less than 3 months, 81 patients (40%) took between 3 and 6 months, and 57 patients (28%) took greater than 6 months to achieve a score greater than 80% in each JOA shoulder assessment component. Younger patients without shoulder stiffness and with smaller rotator cuff tears had shorter functional recovery periods. Conclusions: One hundred forty-four patients (72%) obtained functional recovery within 6 months after ARCR. Age, shoulder stiffness, and rotator cuff tear size influenced functional recovery time. abstract_id: PUBMED:28131695 Speed of recovery after arthroscopic rotator cuff repair. Background: The purpose of this study was to delineate the time taken to achieve maximum improvement (plateau of recovery) and the degree of recovery observed at various time points (speed of recovery) for pain and function after arthroscopic rotator cuff repair. Methods: An institutional shoulder surgery registry query identified 627 patients who underwent arthroscopic rotator cuff repair between 2006 and 2015. Measured range of motion, patient satisfaction, and patient-reported outcome measures were analyzed for preoperative, 3-month, 6-month, 1-year, and 2-year intervals. Subgroup analysis was performed on the basis of tear size by retraction grade and number of anchors used. Results: As an entire group, the plateau of maximum recovery for pain, function, and motion occurred at 1 year. Satisfaction with surgery was >96% at all time points. At 3 months, 74% of improvement in pain and 45% to 58% of functional improvement were realized. However, only 22% of elevation improvement was achieved (P < .001). At 6 months, 89% of improvement in pain, 81% to 88% of functional improvement, and 78% of elevation improvement were achieved (P < .001). Larger tears had a slower speed of recovery for Single Assessment Numeric Evaluation scores, forward elevation, and external rotation. Smaller tears had higher motion and functional scores across all time points. Tear size did not influence pain levels. Conclusion: The plateau of maximum recovery after rotator cuff repair occurred at 1 year with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery can be expected. 
Larger tears have a slower speed of recovery. abstract_id: PUBMED:26189806 Prognostic factors for recovery after arthroscopic rotator cuff repair: a prognostic study. Background: Studies concerning prognostic factors of recovery after arthroscopic rotator cuff repair mostly focus on tendon integrity or functional recovery as an outcome. Little is known about how they influence quality of life after surgery. We therefore tried to identify prognostic factors having an impact on quality of life after arthroscopic rotator cuff repair. Methods: This study included 30 patients who underwent arthroscopic rotator cuff repair. We assessed Western Ontario Rotator Cuff Index as primary outcome and RAND-36, Constant-Murley score, and a shoulder hindrance score as secondary outcomes. Patients were repeatedly measured: once preoperatively and 4 times postoperatively. Preoperative range of motion, obesity, fatty infiltration, and cuff retraction were preselected as prognostic factors. Results: Patients were significantly improved at 3 months and 6 months after arthroscopic rotator cuff repair. In multiple regression analysis, none of the preselected factors could be identified as a prognostic factor influencing quality of life after arthroscopic rotator cuff repair (measured with the Western Ontario Rotator Cuff Index). For the outcome variables RAND-36 (6 months, 1 year) and shoulder hindrance score (1 year), fatty infiltration Goutallier stages 1 and 2 and retraction grades II, III, and IV were significant predictors. Conclusion: Although fatty infiltration and retraction grade predict the RAND-36 and shoulder hindrance score, this study could not support preoperative range of motion, obesity, fatty infiltration, or retraction of the cuff as a prognostic factor for quality of life after arthroscopic rotator cuff repair. This study shows that if selection of patients is done properly, these factors do not influence a successful outcome. abstract_id: PUBMED:26589917 Functional status and failed rotator cuff repair predict outcomes after arthroscopic-assisted latissimus dorsi transfer for irreparable massive rotator cuff tears. Background: Arthroscopic-assisted latissimus dorsi tendon transfer (LDTT) has been recently introduced for treatment of irreparable, posterosuperior massive rotator cuff tears. We sought to evaluate the functional outcomes of this technique and to check for possible outcome predictors. Methods: The study reviewed 86 patients (aged 59.8 ± 5.9 years) who underwent an arthroscopic-assisted latissimus dorsi tendon transfer after 36.4 ± 9 months of follow-up. Of these, 14 patients (16.3%) sustained an irreparable massive rotator cuff tear after a failed arthroscopic rotator cuff repair. The Constant and Murley score (CMS) was used to assess patients' functionality preoperatively and at follow-up. Results: As a group, the CMS improved with surgery from 35.5 ± 6.1 to 69.5 ± 12.3 (P < .001). A lower preoperative CMS and a previous failed rotator cuff repair resulted in lower postoperative range of motion (P = .044 and P = .007, respectively) and CMS (P = .042 and P = .018, respectively). A previous rotator cuff repair resulted in lower satisfaction with surgery (P = .009). Gender and age did not affect the clinical outcomes. Conclusions: Our results support the effectiveness of arthroscopic-assisted LDTT in the treatment of patients with an irreparable, posterosuperior massive rotator cuff tears in pain relief, functional recovery, and postoperative satisfaction. 
Patients with lower preoperative CMS and a history of failed rotator cuff repair have a greater likelihood of having a lower clinical result. However, the favorable values of summary postoperative scores do not exclude these patients as candidates for arthroscopic-assisted LDTT. abstract_id: PUBMED:26851272 Recovery of Muscle Strength After Intact Arthroscopic Rotator Cuff Repair According to Preoperative Rotator Cuff Tear Size. Background: The recovery of muscle strength after arthroscopic rotator cuff repair based on the preoperative tear size has not yet been well described. Purpose/hypothesis: The purpose of this study was to evaluate the recovery period of muscle strength by a serial assessment of isometric strength after arthroscopic rotator cuff repair based on the preoperative tear size. The hypothesis was that muscle strength in patients with small and medium tears would recover faster than that in those with large-to-massive tears. Study Design: Cohort study; Level of evidence, 3. Methods: A total of 164 patients who underwent arthroscopic rotator cuff repair were included. Isometric strength in forward flexion (FF), internal rotation (IR), and external rotation (ER) was evaluated preoperatively and at 6, 12, 18, and 24 months after surgery. Preoperative magnetic resonance imaging scans were assessed to evaluate the quality of the rotator cuff muscle, including fatty infiltration, occupation ratio, and tangent sign. Patient satisfaction as well as visual analog scale (VAS) for pain, American Shoulder and Elbow Surgeons (ASES), and Constant scores were assessed at every follow-up. Results: Muscle strength demonstrated the slowest recovery in pain relief and the restoration of shoulder function. To reach the strength of the uninjured contralateral shoulder in all 3 planes of motion, recovery took 6 months in patients with small tears and 18 months in patients with medium tears. Patients with large-to-massive tears showed continuous improvement in strength up to 18 months; however, they did not reach the strength of the contralateral shoulder at final follow-up. At final follow-up, mean strength in FF, IR, and ER was 113.0%, 118.0%, and 112.6% of the contralateral shoulder in patients with small tears, respectively; 105.0%, 112.1%, and 102.6% in patients with medium tears, respectively; and 87.6%, 89.5%, and 85.2% in patients with large-to-massive tears, respectively. Muscle strength in any direction did not significantly correlate with postoperative patient satisfaction (P = .374, .515, and .692 for FF, IR, and ER, respectively), whereas it highly correlated with preoperative quality of the muscle. Conclusion: The recovery of muscle strength after arthroscopic repair was poorly correlated with patient satisfaction. This study recommends that regardless of pain relief and improved shoulder function, patients with larger than medium tears should be encouraged to continue with rehabilitation for the maximal restoration of muscle strength beyond 1 year postoperatively. abstract_id: PUBMED:27066966 Functional outcomes after bilateral arthroscopic rotator cuff repair. Background: Arthroscopic repair of rotator cuff tears is a common procedure performed by orthopedic surgeons. There is a well-known incidence of up to 35% of bilateral rotator cuff tear disease in patients who have a known unilateral tear. The majority of the literature focuses on outcomes after unilateral surgery. 
The purpose of this study was to determine if there are clinical differences in the shoulders of patients who underwent staged bilateral rotator cuff repairs during their lifetime. Methods: A retrospective review of all patients who underwent staged bilateral arthroscopic rotator cuff surgery at our institution was performed. All patients had at least 2 years of follow-up. Clinical outcome scores including the American Shoulder and Elbow Surgeons (ASES), Single Assessment Numeric Evaluation, and Rowe measures were obtained. A subset of patients returned for clinical and ultrasound evaluation performed by an independent fellowship-trained musculoskeletal radiologist. Results: Overall, 110 shoulders in 55 patients, representing 68% of all eligible patients, participated. No clinical or statistical difference was found in any outcome measure. ASES scores averaged 86.5 (36.7-100) in the dominant shoulder compared with 89.6 (23.3-100) in the nondominant shoulder (P = .42). Ultrasound was available on 34 shoulders and showed a complete healing rate of 88%. The shoulders with retearing of the rotator cuff (12%) demonstrated lower ASES scores (72.5) than shoulders with confirmed healed repairs (86.2; P = .2), a clinically relevant difference. Discussion: Patients who undergo staged bilateral rotator cuff repair can expect to have similarly good clinical outcomes regardless of hand dominance or chronologic incidence, with excellent healing rates in both shoulders. abstract_id: PUBMED:38393862 Clinical outcome of arthroscopic acromioplasty vs. arthroscopic rotator cuff repair for irreparable rotator cuff tears: A 3-year follow-up. Background: Rotator cuff tendon tears at the shoulder joint are one of the most common causes of shoulder discomfort and impairment. Objective: This study aims to compare the long-term patient-reported outcomes of arthroscopic acromioplasty to arthroscopic rotator cuff repair. Methods: Patients who underwent arthroscopic repair for irreparable rotator cuff tears between January 1 and December 1, 2022, and had any morphology of the acromial bone were randomly assigned to either the arthroscopic acromioplasty or arthroscopic rotator cuff repair group. Excluded from the study were patients who had undergone revision surgery, had subscapularis involvement, had severe neurologic diseases, or had died. Baseline and long-term follow-up surveys collected American Shoulder and Elbow Surgeons (ASES), Simple Shoulder Test (SST), University of California-Los Angeles (UCLA), Visual Analog Scale (VAS) for pain, and Constant scores. Means were calculated with standard deviations. Results: In the first trial, 30 patients from Group A (arthroscopic acromioplasty) and 30 patients from Group B (arthroscopic rotator cuff repair) underwent long-term follow-up. The average age of patients in Group A was 58.62 ± 5.95 years, while in Group B it was 59.11 ± 7.98 years. At the final follow-up, there were no statistically significant differences between Group A and Group B patients for ASES (P = 0.233), VAS pain (P = 0.154), Constant (P = 0.584), or SST (P = 0.155) scores. However, Group A showed statistically significant improvements compared to Group B at the 2-year postoperative follow-up for UCLA (P = 0.015) and SF-12 (P = 0.021) scores. Additionally, in the short-term results, Group A significantly outperformed Group B in forward flexion (P < 0.05), abduction (P < 0.05), and external rotation in abduction (P < 0.05).
Conclusion: Our study revealed that acromioplasty alone was superior to rotator cuff repair in terms of patient assessments of shoulder mobility and discomfort. abstract_id: PUBMED:37588146 Current concepts in the evolution of arthroscopic rotator cuff repair. Arthroscopic rotator cuff repair has become one of the most common and successful surgeries performed in orthopedics. It represents the culmination of advances in many diverse fields such as optics, fluid dynamics, mechanical engineering, and most recently, orthobiologics. This article reviews the current state of the art of arthroscopic rotator cuff repair, through the lens of its historical context and evolution to our present understanding. We review the limitations in the current approach, and glance toward the future of rotator cuff regeneration with emerging technologies. abstract_id: PUBMED:33599856 Arthroscopic rotator cuff repair in patients over 70 years of age: a systematic review. Background: Failure of conservative treatment in patients over 70 years of age with a rotator cuff tear makes surgery a possible option, considering the increase in life expectancy and the high functional demands of elderly patients. The purpose of this systematic review of the literature was to evaluate the subjective and objective outcomes after arthroscopic rotator cuff repair in patients over 70 years of age. Methods: A systematic review was performed to identify all the studies reporting subjective and objective outcomes in patients aged 70 years or older undergoing arthroscopic rotator cuff repair. Constant Murley Score (CMS), visual analog scale (VAS), American Shoulder and Elbow Surgeons Score (ASES), and Simple Shoulder Test (SST) were used to detect any clinical improvement after surgery. Retear and satisfaction were also analyzed. Results: Out of 941 studies identified, only 6 papers have been included in the review. All studies reported improvements in postoperative functional outcome scores that exceed the minimal clinically relevant difference. The mean retear rate amounts to 21.9%, which is in line with the failure rate of rotator cuff repair in general population. Moreover, postoperative satisfaction is very high (95%). Conclusion: This systematic review suggests that arthroscopic rotator cuff repair in patients over 70 years of age could be a valid treatment option after failure of conservative approach. Level Of Evidence: 4 Trial registration The study was registered on PROSPERO (registration ID: CRD42018088613). abstract_id: PUBMED:31327943 Trends Associated with Open Versus Arthroscopic Rotator Cuff Repair. Background: Rotator cuff (RTC) repair is performed using open/mini-open or arthroscopic procedures, and the use of arthroscopic techniques is increasing. The extent to which surgery has transitioned from open to arthroscopic techniques has yet to be elucidated. Questions/purposes: The purpose of this study was to evaluate trends in open and arthroscopic rotator cuff repair in the USA and describe tendencies in treatment across gender, age, and geographic region. We hypothesized that surgeons would be more likely to perform arthroscopic rotator cuff repair, with similar trends across the USA. Methods: A retrospective review of a comprehensive national insurance database (Humana) was performed using the PearlDiver software for all patients who underwent RTC repair between January 2007 and June 2015. Patients were identified by Current Procedural Terminology (CPT) codes. 
χ2 tests evaluated the proportion of arthroscopic surgeries by gender and geographic region; logistic regression analysis assessed differences from 2007 to 2015. Results: In the study period, 54,740 patients underwent RTC repair (68% arthroscopic, 52% male), with the highest frequency of RTC repair in patients between 65 and 69 years old. The proportion of open RTC repair increased with increasing patient age, with no significant difference between men and women. The proportion of arthroscopic RTC surgeries increased from 56.9% in 2007 to 75.1% in 2015. The overall trend was 188% increase in total RTC repairs. Arthroscopic repair was more frequent than open repair in all US regions, with the highest proportion in the South. Conclusion: Arthroscopic RTC surgery predominates and continues to rise. With increasing patient age, there was an increase in the proportion of open repair. The majority of RTC repairs were performed in patients between 65 and 69 years of age. Answer: The functional recovery period after arthroscopic rotator cuff repair (ARCR) can be somewhat predictable before surgery based on certain preoperative factors. Studies have identified several factors that influence the time required for functional recovery after ARCR. One study found that 72% of patients obtained functional recovery within 6 months after ARCR, and that younger patients without shoulder stiffness and with smaller rotator cuff tears had shorter functional recovery periods. Age, shoulder stiffness, and rotator cuff tear size were identified as factors influencing functional recovery time (PUBMED:21107925). Another study reported that the plateau of maximum recovery for pain, function, and motion occurred at 1 year, with high satisfaction rates at all time points. At 3 months, approximately 75% of pain relief and 50% of functional recovery could be expected. Larger tears were associated with a slower speed of recovery (PUBMED:28131695). A study that focused on quality of life after ARCR could not support preoperative range of motion, obesity, fatty infiltration, or retraction of the cuff as prognostic factors for quality of life. However, fatty infiltration and retraction grade were significant predictors for other outcome variables (PUBMED:26189806). In terms of muscle strength recovery, it was found that patients with small and medium tears recovered faster than those with large-to-massive tears. Muscle strength recovery was poorly correlated with patient satisfaction, suggesting that patients with larger tears should continue rehabilitation beyond 1 year postoperatively (PUBMED:26851272). Overall, while some preoperative factors can give an indication of the functional recovery period after ARCR, individual patient outcomes may vary. It is important for surgeons to consider these factors when planning surgery and postoperative physical therapy to provide patients with realistic expectations regarding their recovery timeline.
Instruction: Is endoscopic totally extraperitoneal hernioplasty justified for the repair of groin hernia in female patients? Abstracts: abstract_id: PUBMED:35477199 Laparoscopic totally extraperitoneal repair of strangulated groin hernia. Objective: To study the possibility of totally extraperitoneal repair combined with diagnostic laparoscopy in the treatment of strangulated groin hernias (Laparoscopy-Assisted Totally Extraperitoneal Plasty, LATEP). Material And Methods: We analyzed the results of laparoscopic totally extraperitoneal hernia repair. The trocar placement technique was modified. There were 38 patients with strangulated groin hernia. The sample included 26 patients with a strangulated inguinal hernia, 9 with a strangulated femoral hernia and 3 patients with a recurrent strangulated inguinal hernia. Results: LATEP was attempted in 38 patients and was successful in 37 (97.3%) cases. In 1 (2.6%) patient, correction of small bowel strangulation failed and conversion to open surgery was required. In 29 patients (76.3%), correction of strangulation was performed after laparoscopy-assisted external manipulations and careful traction from the abdominal cavity. In 8 (21%) cases, strangulation spontaneously regressed before laparoscopy. Laparoscopy confirmed viability of the strangulated organs in 36 patients. One (2.6%) patient required bowel resection due to small intestine wall necrosis. Later, all patients underwent totally extraperitoneal repair. We were able to prevent contact of hernia sac fluid with the implant in all cases. At the stage of preperitoneal repair, local damage to the peritoneum occurred in 9 patients. Nevertheless, sealing was not required since the hernia fluid had previously been removed from the abdominal cavity. There were no signs of implant infection or hernia recurrence within 6-14 months. Conclusion: Totally extraperitoneal repair combined with diagnostic laparoscopy is possible for strangulated groin hernias. abstract_id: PUBMED:16308798 Is endoscopic totally extraperitoneal hernioplasty justified for the repair of groin hernia in female patients? Background: Groin hernia is an uncommon surgical pathology in females. The efficacy of the endoscopic approach for the repair of female groin hernia has yet to be examined. The current study was undertaken to compare the clinical outcomes of female patients who underwent open and endoscopic totally extraperitoneal inguinal or femoral hernioplasty (TEP). Methods: From July 1998 to June 2004, 108 female patients who underwent elective repair of groin hernia were recruited. The patients were divided into TEP (n = 30) and open (n = 78) groups based on the type of operation. Clinical data and outcome parameters were compared between the two groups. Results: The mean ages and hernia types were comparable between the two groups. All TEPs were successfully performed. The mean operative times were 52 min for unilateral TEP and 51 min for open repair. The difference was not statistically significant. Comparisons of the length of hospital stay, postoperative morbidity, pain score, and time taken to resume normal activities showed no significant differences between the two groups. A single patient in the TEP group experienced recurrence of hernia. Conclusions: The findings show equivalent postoperative outcomes after TEP and open repair of groin hernia in female patients.
Because the wound scar after open repair is well concealed beneath the pubic hair and no superior clinical benefits are observed after TEP, open repair appears to be the technique of choice for the management of primary groin hernia in females. The TEP approach should be reserved for female patients with recurrent or multiple groin hernias. abstract_id: PUBMED:36722535 Endoscopic totally extraperitoneal repair for groin hernia: Experience from a tertiary care centre. Background: Totally extra-peritoneal (TEP) and transabdominal preperitoneal (TAPP) repair are two established minimal access techniques of groin hernia surgery. TEP offers the advantage of avoiding violation of the peritoneal cavity. Aim: This study aims to describe the decade-long experience of TEP in groin hernia repair in a tertiary care teaching institute and the feasibility of the same in difficult scenarios. Materials And Methods: A retrospective analysis of the database of patients undergoing TEP repair for inguinal hernia in a single surgical unit at a tertiary teaching hospital between January 2008 and December 2019 was performed. Detailed pre-operative clinical details, operative duration, intraoperative and post-operative complications, including pain, length of post-operative hospital stay and hernia recurrence data were analysed. Results: Over the 12-year duration, 511 patients underwent endoscopic TEP mesh repair and the total number of hernias repaired was 614. The majority (97.45%) of patients were male. The mean age of the patient population was 51.3 years. Primary hernia was seen in 490 patients. The mean operating time for unilateral inguinal hernia repair was 56.8 ± 16 min and for bilateral repair 80.9 ± 25.2 min. TEP in previous lower abdominal/suprapubic surgical scars was attempted in 17 (3.3%) patients, with only one requiring conversion. Intraoperative peritoneal breach was the most common documented complication (34.8%). Seroma was seen in 9.4% of patients. Seventeen patients required conversion (14 TAPP and 3 open). Recurrence was seen in 4 (0.7%) patients. Conclusion: TEP repair is an effective method of groin hernia repair and can be attempted in the majority of patients with groin hernia, including patients with previous lower abdominal incisions. abstract_id: PUBMED:37962710 Totally extraperitoneal repair of groin hernia in liver transplant patients: a case series study (TOHER). Purpose: The present study aims to describe the feasibility and the postoperative results of groin hernia repair in liver transplant patients using a totally extra-peritoneal (TEP) repair approach. Methods: From May 2022 to March 2023, liver transplant patients with groin hernia underwent TEP groin hernia repair by a single experienced senior surgeon. Background information, intraoperative findings, postoperative complications, postoperative pain, health, and well-being were registered. Results: Thirteen TEP groin hernioplasties were performed in 10 patients, with the procedure completed in all cases without the need for conversion either to open or transabdominal preperitoneal approaches. Seventy percent of surgical explorations revealed multiple hernia defects: lateral hernias in all patients, medial defects in 62%, and femoral defects in 30.8%. Median hospital stay was 1 day (range 0-3), with 30% treated as outpatients. Post-surgical complications occurred in 30% of cases: 1 hematoma and 2 seromas. Postoperative pain and physical functioning scored 100 (IQR 44) and 90 (IQR 15), respectively.
Conclusion: TEP groin hernioplasty is safe and feasible for liver transplant patients, with low complication rates, short hospital stays, and a significant proportion treated as outpatients. The posterior approach allows comprehensive repair of myopectineal defects, which is crucial given the associated hernial defects. abstract_id: PUBMED:32394168 Modified port positions for totally extraperitoneal (TEP) repair for groin hernias: our experience. Background: Totally extraperitoneal repair (TEP) is the preferred method of inguinal hernia repair for many surgeons worldwide. However, because of the limited working space, there may be difficulties when operating on large and irreducible groin hernias and on patients of short stature. There are many ways in which the port positions can be modified during TEP for different kinds of hernias and according to the comfort of the surgeon. Methods: This is a retrospective analysis of prospectively collected data. We describe two different variations in port placement for TEP in patients with large irreducible hernias and in patients of short stature. Results: A total of 19 procedures were performed with these port positions. Most of these were large irreducible hernias or patients with short stature. With these modified port positions, difficult hernias could be completed with the extraperitoneal approach. We had no recurrences and 5 seromas that eventually settled in 1-3 months. Conclusion: Modified port positions in TEP, shifting the ports farther away, can be useful for hernias where the surgery is difficult due to limited working space and are a good alternative to TAPP. abstract_id: PUBMED:34248342 Groin Hernia in Females Routinely Treated by Totally Extraperitoneal Laparoscopic Approach. Background And Objectives: There is a dearth of studies on laparoscopic treatment of female groin hernia. Our study assessed the outcome of groin hernia repair in females employing the totally extraperitoneal laparoscopic (TEP) access. Methods: Data on all females who were subjected to laparoscopic groin herniorrhaphy from August 1998 to February 2020 were retrospectively obtained. Groin hernia repair was routinely started with TEP access. Results: Of a total of 2,399 patients who underwent laparoscopic groin herniorrhaphy, 254 (10.6%) were females. Most females (n = 191; 75.2%) had a single hernia and the remaining (n = 63; 24.8%) had bilateral hernias, making a total of 317 hernias operated. Indirect inguinal hernia was the most common hernia type (72.5%), followed by femoral hernia (17.4%) and direct hernia (10.1%). Prior lower abdominal operations were recorded in 97 (38.2%) patients. Conversion to a laparoscopic transabdominal preperitoneal procedure was performed due to technical difficulties in dissecting the preperitoneal space in 17 patients (6.7%), and conversion to an open procedure was required in only one patient (0.4%) with an incarcerated femoral hernia in whom an incidental perforation of the small bowel occurred. Intra- and postoperative complications occurred in 12 (4.7%) and 15 (5.9%) patients, respectively. There was no mortality. Most patients (n = 221; 87%) were discharged on the same day of the operation. Hernia recurrence was diagnosed in 6 patients (2.4%). Conclusion: Females with groin hernia may be successfully treated with totally extraperitoneal laparoscopic access, with low conversion and complication rates.
abstract_id: PUBMED:38443886 TPP (totally preperitoneal) making single incision laparoscopic inguinal hernia repair more feasible: a comparison with single incision laparoscopic totally extraperitoneal hernioplasty (SIL-TEP). Background: Totally preperitoneal hernioplasty (TPP) is a concept introduced to distinguish the approach from totally extraperitoneal (TEP) repair. There is little evidence describing the characteristics of single incision laparoscopic totally preperitoneal (SIL-TPP) repair. The aim of this study was to demonstrate the feasibility of SIL-TPP hernioplasty and to compare its outcomes with those of the single incision laparoscopic totally extraperitoneal hernioplasty (SIL-TEP) technique. Methods: Between August 2018 and July 2022, 200 inguinal hernia patients underwent SIL-TPP and 56 patients underwent SIL-TEP at the First Hospital of Ningbo University. Demographics, clinical characteristics, and intraoperative and postoperative parameters were retrospectively analysed. Results: SIL-TPP and SIL-TEP hernia repair were successfully completed in all patients, with no conversions in either group. Patient demographics were comparable between the two groups, including in the analysis of the initial 52 cases (P > 0.05). The mean operative time for unilateral hernias was significantly shorter in the SIL-TPP group than in the SIL-TEP group (unilateral: 81.38 ± 25.32 vs. 95.96 ± 28.54, P: 0.001), and the same held for indirect hernias (indirect: 81.38 ± 25.33 vs. 95.87 ± 28.54, P: 0.001). The operative time trend across the initial 52 unilateral cases in each group showed that the operative time for SIL-TPP decreased faster than that for SIL-TEP as the number of treated cases increased (Figs. 2 and 3). A comparison of mean operative time for equal numbers of initial unilateral hernia patients showed that the SIL-TPP group was significantly shorter than the SIL-TEP group (85.77 ± 22.76 vs. 95.87 ± 28.54, P: 0.049). The rate of peritoneal tearing was significantly higher in the SIL-TPP group than in the SIL-TEP group (P = 0.005). Conclusion: SIL-TPP hernia repair is a superior procedure with its own distinct advantages. We recommend it rather than SIL-TEP for treating inguinal hernia, especially indirect hernia. However, large-scale randomized controlled trials comparing SIL-TPP and SIL-TEP are needed to confirm these results. abstract_id: PUBMED:30416144 Dulucq's technique for laparoscopic totally extraperitoneal hernioplasty. Background: Inguinal hernia repair has been a controversial area in surgical practice. Its complexity is reflected by the fact that numerous different procedures, including both open and laparoscopic techniques, are in use today. Laparoscopic totally extraperitoneal (TEP) repair is preferred over transabdominal pre-peritoneal repair as the peritoneum is not breached and also due to fewer intra-abdominal complications. It is the most elegant technique but rather difficult to perform. Aim: The purpose of this study was to describe Dulucq's technique for inguinal hernia repair and the use of three-dimensional mesh without fixation in laparoscopic TEP inguinal hernioplasty. Methods: The surgical technique of laparoscopic TEP inguinal hernia repair is detailed in the text. Results: A total of 945 hernia repairs were included in the study. The hernias were repaired by Dulucq's technique.
Mean operative time was 45 min in unilateral hernia and 65 min in bilateral hernia. There were no serious complications. Conclusion: The laparoscopic TEP hernioplasty by Dulucq's technique is feasible with fewer intra-abdominal complications. The dissection must always be done with the same stages with minimal monopolar diathermy and patient in a slight Trendelenburg position. abstract_id: PUBMED:29794361 Extended totally extraperitoneal repair (eTEP) for ventral hernias: Short-term results from a single centre. Introduction: There has been a surge of innovative procedures in the field of abdominal wall hernias. Works of pioneers such as Dr. Yuri Novitsky, Dr. Jorge Daes and Dr. Igor Belyansky have started a new era in the field of hernia surgery. Conventional and popular surgeries for ventral hernias are open onlay mesh hernioplasty, open retromuscular mesh hernioplasty (Rives-Stoppa procedure) and laparoscopic intraperitoneal mesh hernioplasty. Evidence seems to suggest that retromuscular mesh hernioplasty has advantages over other procedures regarding recurrence and surgical site occurrences. An alternative strategy has been developed for this setting where a mesh is placed in retromuscular space by minimal access technique of the extended Totally Extraperitoneal approach (eTEP). Methods: We have retrospectively analysed the data of 21 patients who underwent an eTEP procedure with a minimum follow-up of 2 months. Their data were analysed for operative details, intra-operative and post-operative complications. Results: For a total of 21 patients, we have recorded a total of two surgical site occurrences (1 seroma and 1 linea alba dehiscence) and one recurrence. One patient had chronic pain. There was no surgical site infection. Conclusion: Judging from our short-term results, we suggest that the eTEP technique can be adapted in centres with advanced laparoscopic skills with the careful patient selection. abstract_id: PUBMED:26395580 Totally extraperitoneal (TEP) endoscopic hernia repair in elderly patients. Background: Inguinal hernias are common in elderly males. We addressed outcome following totally extraperitoneal (TEP) hernia repair in patients older than 70 years. Methods: We prospectively collected data of patients >70 years with a unilateral or bilateral inguinal hernia operated in our hospital between January 2005 and January 2010 using the TEP technique. Results: A total of 429 patients underwent TEP hernia repair under general anaesthesia, mostly men (n = 405; 94.4%). Median age was 74 years (range 70-89). The mean pre-operative pain score was 3.7 (SD ± 2.5). Ninety-four percent of patients had an ASA score of 1 or 2. Three hundred thirty-six patients underwent a unilateral repair (78%). The conversion rate to an anterior procedure was 0.7 % (n = 3). In 8 patients (1.9%), intra-operative complications occurred, and the postoperative course was complicated in 3 patients (0.7 %). Severe complications attributable to the endoscopic approach occurred in 6 patients (1.4%): a bladder injury (n = 5) and a trocar-induced bowel perforation (n = 1). The mean postoperative pain score after 6 weeks was 1.6 (SD ± 1.2). Patients were able to resume their daily activities after a median of 7 days (range 1-42). Conclusion: Totally, extraperitoneal endoscopic inguinal hernia repair in elderly patients is associated with low overall complication rates and a fast recovery. In a small proportion of patients, severe complications occur attributable to the endoscopic approach. 
Answer: Endoscopic totally extraperitoneal hernioplasty (TEP) is a technique used for the repair of groin hernias and has been evaluated in female patients. According to the study by PUBMED:16308798, the clinical outcomes of female patients who underwent TEP were compared with those who underwent open repair for groin hernia. The study found that the postoperative outcomes after TEP and open repair of groin hernia in female patients were equivalent. The operative times were similar, and there were no significant differences in the length of hospital stay, postoperative morbidity, pain score, and time taken to resume normal activities. However, the study concluded that open repair might be the technique of choice for the management of primary groin hernia in females, as the wound scar after open repair is well concealed and no superior clinical benefits were observed after TEP. TEP was suggested to be reserved for female patients with recurrent or multiple groin hernias. On the other hand, the study by PUBMED:34248342 reported on the outcome of groin hernia repair in females using the TEP approach and found that it was successful with low conversion and complication rates. The study concluded that females with groin hernia could be successfully treated with TEP, suggesting its justification for use in female patients. In summary, while one study (PUBMED:16308798) suggests that open repair may be preferred for primary groin hernias in females due to the lack of superior benefits from TEP and the cosmetic advantage of open repair, another study (PUBMED:34248342) supports the use of TEP in females, highlighting its success and low complication rates. Therefore, TEP can be justified for the repair of groin hernia in female patients, particularly in cases of recurrent or multiple hernias, or based on patient and surgeon preference considering the low complication rates and successful outcomes reported.
Instruction: Assessment of cardiovascular disease risk factors and diabetes mellitus in Australian prisons: is the prisoner population unhealthier than the rest of the Australian population? Abstracts: abstract_id: PUBMED:16222927 Assessment of cardiovascular disease risk factors and diabetes mellitus in Australian prisons: is the prisoner population unhealthier than the rest of the Australian population? Introduction: This paper compares the prevalence estimates of diabetes and cardiovascular disease (CVD) risk factors in the Indigenous and non-Indigenous New South Wales (NSW) prisoner population in 1996 and 2001, and also compares the 2001 prevalence estimates with Australian population data. Methods: In 1996 and 2001, 789 and 916 prisoners, respectively, in NSW underwent a face-to-face interview covering behavioural risks and physical and mental health. Weight, height and blood pressure were recorded and blood was taken for measurement of cholesterol and random blood sugar. Results: The prevalence of hypertension, hypercholesterolaemia and smoking were lower in the 2001 prison survey as compared with the 1996 survey but the prevalence of smoking was extremely high in both the prison surveys (88% in 1996 and 79% in 2001). There were no differences in the age and sex-adjusted prevalence estimates for any condition except that the prison sample had a higher standardised morbidity ratio for angina than the AusDiab population. Conclusion: This study highlights the high prevalence of CVD risk factors, particularly in younger prisoners, when compared with the Australian non-prison population. Implications: Programs should be put in place to routinely screen for chronic disease conditions and to educate Australian prisoners regarding CVD and diabetes risk factors and their long-term management. abstract_id: PUBMED:19741542 Recalibration and validation of the SCORE risk chart in the Australian population: the AusSCORE chart. Background: Development of a validated risk prediction model for future cardiovascular disease (CVD) in Australians is a high priority for cardiovascular health strategies. Design: Recalibration of the SCORE (Systematic COronary Risk Evaluation) risk chart based on Australian national mortality data and average major CVD risk factor levels. Methods: Australian national mortality data (2003-2005) were used to estimate 10-year cumulative CVD mortality rates for people aged 40-74 years. Average age-specific and sex-specific levels of systolic blood pressure, total cholesterol and prevalence of current smoking were generated from data obtained in eight Australian large-scale population-based surveys undertaken from the late 1980s. The SCORE risk chart was then recalibrated by applying hazard ratios for 10-year CVD mortality obtained in the SCORE project. Discrimination and calibration of the recalibrated model was evaluated and compared with that of the original SCORE and Framingham equations in the Blue Mountains Eye Study in Australia using Harrell's c and Hosmer-Lemeshow chi statistics, respectively. Results: An Australian risk prediction chart for CVD mortality was derived. Among 1998 Blue Mountains Eye Study participants aged 49-74 years with neither CVD nor diabetes at baseline, the Harrell's c statistics for the Australian risk prediction chart for CVD mortality were 0.76 (95% confidence interval: 0.69-0.84) and 0.71 (confidence interval: 0.62-0.80) in men and women, respectively. 
The corresponding Hosmer-Lemeshow chi statistics, the measure of calibration, were 2.32 (P = 0.68) and 7.43 (P = 0.11), which were superior to both the SCORE and Framingham equations. Conclusion: This new tool provides a valid and reliable method to predict risk of CVD mortality in the general Australian population. abstract_id: PUBMED:9436515 Risk assessment of hypertension: from population to individual. Epidemiological studies have contributed to our understanding of several aspects of hypertension which could not have been revealed by clinical medicine alone: 1) the continuous nature of the relationship between the level of blood pressure and cardiovascular risk has shown that a clinical definition based on numerical criteria is arbitrary and should be adjusted according to the context and therapeutic management; 2) the risk of high blood pressure is strongly dependent on the presence or absence of other risk factors (smoking, diabetes, serum lipids, left ventricular hypertrophy), a factor which is not sufficiently taken into account in clinical practice; 3) assessment of the absolute risk of an individual on the basis of the respective roles of the principal risk factors is a useful guide to treatment. Moreover, population studies have shown that mild increases in blood pressure, associated with a low individual risk, are responsible for a large number of cardiovascular events because of the many people affected. Preventive action on the whole population is necessary in addition to individual therapeutic intervention in the clinical setting. abstract_id: PUBMED:33808743 Predicting Australian Adults at High Risk of Cardiovascular Disease Mortality Using Standard Risk Factors and Machine Learning. Effective cardiovascular disease (CVD) prevention relies on timely identification and intervention for individuals at risk. Conventional formula-based techniques have been demonstrated to over- or under-predict the risk of CVD in the Australian population. This study assessed the ability of machine learning models to predict CVD mortality risk in the Australian population and compared their performance with the well-established Framingham model. Data were drawn from three Australian cohort studies: the North West Adelaide Health Study (NWAHS), the Australian Diabetes, Obesity, and Lifestyle study, and the Melbourne Collaborative Cohort Study (MCCS). Four machine learning models for predicting 15-year CVD mortality risk were developed and compared to the 2008 Framingham model. Machine learning models performed significantly better than the Framingham model when applied to the three Australian cohorts, improving prediction by 2.7% to 5.2%. In an aggregated cohort, machine learning models improved prediction by up to 5.1% (area under the curve (AUC) 0.852, 95% CI 0.837-0.867). Net reclassification improvement (NRI) was up to 26% with machine learning models. Machine learning-based models also showed improved performance when stratified by sex and diabetes status. Results suggest a potential for improving CVD risk prediction in the Australian population using machine learning models. abstract_id: PUBMED:19345528 Prevalence of conventional cardiovascular risk factors in the Great Tunis population. Background: This study was designed to determine the prevalence of the main cardiovascular risk factors in the population of Great Tunis.
Subjects And Methods: This cross-sectional study included 2483 individuals aged 35 to 70 years dwelling in the Great Tunis region, recruited between March 2004 and June 2005. The sample was weighted using the inverse of response rate according to governorate, district and sex. Results: Obesity and abdominal obesity were observed respectively in 34 and 48% of subjects. The prevalence of these two factors was particularly elevated in females (46 and 69% respectively). Hypertension was common (31%), especially in women (36%). Diabetes mellitus and dyslipemia were found in 15 and 21% of subjects, respectively, without difference according to sex. More than half of men and 8% of women were current smokers. Conclusion: The prevalence of conventional cardiovascular risk factors is dramatically high in the population of Great Tunis. These findings predict a future expansion of cardiovascular diseases in this population. Profound changes of lifestyle and dietary habits of Tunisians are needed to reduce the risk of cardiovascular morbidity and mortality. abstract_id: PUBMED:30126378 Body mass index in an Australian population with chronic kidney disease. Background: Obesity emerged as the leading global health concern in 2017. Although higher body mass index (BMI) is a health risk in the general population, its implications for chronic kidney disease (CKD) are not entirely clear. Our aim was to compare BMI in an Australian CKD population with BMI in a sample of the general Australian population, and, in the same group of CKD patients, to describe associations of higher BMI categories with demographic and clinical features. Methods: A cross-sectional study of BMI in CKD patients was conducted from three major sites who were enrolled in the CKD.QLD registry between May 2011 and July 2015. BMI was categorized according to the World Health Organisation (WHO) guidelines. The prevalence of obesity was compared with a sample of the general Australian population from the most recent National Health Survey (NHS). Associations of BMI with demographic and clinical characteristics of the CKD patients were also analysed. Results: There were 3382 CKD patients in this study (median age 68, IQR 56-76 years); 50.5% had BMI ≥30, the WHO threshold for obesity, in contrast with 28.4% having BMI ≥30 in the NHS cohort. Higher BMI categories were correlated with age < 70 years, male gender, and lower socioeconomic status. After adjustment for age and gender, characteristics which significantly correlated with higher BMI category included hypertension, dyslipidemia, diabetes, diabetic nephropathy, coronary heart disease, other cardiovascular diseases, gout, obstructive sleep apnoea, depression and chronic lung disease. Conclusions: Patients with CKD in public renal specialty practices in Queensland have strikingly higher rates of obesity than the general Australian population. Within the CKD population, low socio-economic position strongly predisposes to higher BMI categories. Higher BMI categories also strongly correlated with important co-morbidities that contribute to burden of illness. These data flag major opportunities for primary prevention of CKD and for reductions in morbidity in people who already have CKD, which should be considered in public health policy in relation to obesity. abstract_id: PUBMED:20473180 Validation of two Framingham cardiovascular risk prediction algorithms in an Australian population: the 'old' versus the 'new' Framingham equation. 
Background: Multivariable risk prediction equations attempt to quantify an individual's cardiovascular risk. Those borne from the Framingham Heart Study remain the most well-established and widely used. In February 2008, a new Framingham risk equation was published. We sought to determine the differences between the most commonly used Framingham equation from 1991 and the 2008 version through their application to a contemporary Australian population. Methods And Results: The two risk equations were applied to 7329 individuals from the Australian Diabetes, Obesity and Lifestyle study. All individuals were aged 30-74 years and free of cardiovascular disease. Differences in median risk scores were analyzed with the Wilcoxon signed rank test. Compared with the 1991 equation, median cardiovascular risk scores derived from the 2008 equation increased by 7 and 24% over 5 years, among males and females, respectively. The differences were statistically significant across all age-groups for both males and females, with a P value of less than 0.001. The performance of the equations in predicting cardiovascular outcomes was compared using event rates. The discriminative ability was increased using the 2008 equation; however, the difference was non-significant [area under the receiver operating characteristic curve: 1991 equation 0.74 (0.69-0.80); 2008 equation 0.76 (0.71-0.81)]. Conclusion: Earlier Framingham equations have been suggested to over-predict cardiovascular risk in low-risk populations and under-predict risk in high-risk groups. This is the first comparative validation of the previous 1991 and most recent 2008 equations. This study highlights the need to validate and calibrate cardiovascular risk prediction equations using population-specific outcome data. abstract_id: PUBMED:25862452 Cardiovascular disease (CVD) risk factors in Kraków and in the whole Poland adult population. Results from the WOBASZ study and Polish arm of the HAPIEE project. In Kraków, the second largest town in Poland, the cardiovascular disease (CVD) mortality rate is lower than in most of the largest towns in Poland and lower than the rate for the total Polish population. Aim: The aim of the present analysis was to compare socioeconomic status (SES), prevalence of CVD risk factors and SCORE assessment of risk in Krakow with residents of other big towns in Poland and with the general Polish population. Material And Methods: We used data from two large population studies that used comparable methods for risk factor assessment: 1) the Polish part of the HAPIEE Project, in which 10 615 residents of Krakow aged 45-69 years were examined, and (2) the WOBASZ Study, which contributed a sub-sample of 6 888 residents of Poland in the corresponding age group. The WOBASZ sample included 992 residents of big towns other than Krakow. Age-standardized proportions of persons with CVD risk factors were compared between Krakow and the other big towns in Poland and between Krakow and the whole of Poland using the χ2 test. Results: The striking observation was that in Krakow the proportions of participants with university education were substantially higher than the average for the other big towns and the whole of Poland. Also, the proportion of occupationally active men and women was the highest in Krakow. In both sexes, the prevalence of smoking, hypercholesterolemia and hypertension in Krakow was similar to the other big towns, but the prevalence of hypercholesterolemia and hypertension (in men only) was lower than the average for Poland.
The distribution by SCORE risk categories was similar in all three samples studied. In general, the distribution by BMI categories was less favourable but the prevalence of central obesity was lower among residents of Kraków than among residents of the other big towns and citizens of the whole of Poland. Prevalence of diabetes was higher in Krakow than in the other samples studied. The differences between the population of Krakow and the population of other parts of Poland in exposure to the main risk factors were diverse and not large enough to translate into differences in the distribution across the categories of SCORE risk assessment. The study suggested the importance of obesity and diabetes, which are not used for the SCORE risk assessment, and especially the importance of psychosocial and economic factors, which may influence CVD risk and contribute more to the explanation of the regional differences in CVD mortality. abstract_id: PUBMED:21029470 Fibrinogen and associated risk factors in a high-risk population: urban Indigenous Australians, the DRUID Study. Background: Epidemiological evidence suggests that fibrinogen and CRP are associated with coronary heart disease risk. High CRP in Indigenous Australians has been reported in previous studies including our 'Diabetes and Related diseases in Urban Indigenous population in Darwin region' (DRUID) Study. We studied levels of fibrinogen and its cross-sectional relationship with traditional and non-traditional cardiovascular risk factors in an urban Indigenous Australian cohort. Methods: Fibrinogen data were available from 287 males and 628 females (aged ≥ 15 years) from the DRUID study. Analysis was performed for associations with the following risk factors: diabetes, HbA1c, age, BMI, waist circumference, waist-hip ratio, total cholesterol, triglyceride, HDL cholesterol, C-reactive protein, homocysteine, blood pressure, heart rate, urine ACR, smoking status, alcohol abstinence. Results: Fibrinogen generally increased with age in both genders; levels by age group were higher than those previously reported in other populations, including Native Americans. Fibrinogen was higher in those with than without diabetes (4.24 vs 3.56 g/L, p < 0.001). After adjusting for age and sex, the following were significantly associated with fibrinogen: BMI, waist, waist-hip ratio, systolic blood pressure, heart rate, fasting triglycerides, HDL cholesterol, HbA1c, CRP, ACR and alcohol abstinence. On multivariate regression (age and sex-adjusted) CRP and HbA1c were significant independent predictors of fibrinogen, explaining 27% of its variance; CRP alone explained 25% of fibrinogen variance. On factor analysis, both CRP and fibrinogen clustered with obesity in women (this factor explained 20% of variance); but in men, CRP clustered with obesity (factor explained 18% of variance) whilst fibrinogen clustered with HbA1c and urine ACR (factor explained 13% of variance). Conclusions: Fibrinogen is associated with traditional and non-traditional cardiovascular risk factors in this urban Indigenous cohort and may be a useful biomarker of CVD in this high-risk population. The apparent different associations of fibrinogen with cardiovascular disease risk markers in men and women should be explored further. abstract_id: PUBMED:28952005 Comparison of Health and Risk Factors of Older, Working-age Australians, Italians and Italian-born Migrants to Australia, with Data from an Italian (PASSI), and an Australian (SAMSS) Risk Factor Surveillance System.
Italian-born migrants (post-WWII) are the largest non-English-speaking background migrant group in South Australia. A cross-sectional, inter-country comparison was performed using independent samples (40-69 years of age) from two similar risk factor and chronic disease surveillance systems (one in Australia, one in Italy). None of the three groups (Italians, Australian-born and Italian-born Australians) had definitively worse health, although the Italians had higher rates for four of the seven risk factors reported (current high blood pressure, current high cholesterol, current smoking, eating less than five fruit and/or vegetables per day) than Australian-born and Italian-born Australians. Italian-born Australians had higher rates of insufficient physical activity, overweight/obesity, poor self-reported health and diabetes. Australian respondents were more likely to report having two or more drinks of alcohol per day. Issues facing an ageing population require an assessment of appropriate health care needs and of structural or cultural barriers to health services. Answer: The assessment of cardiovascular disease (CVD) risk factors and diabetes mellitus in Australian prisons suggests that the prisoner population may indeed be unhealthier than the rest of the Australian population. A study comparing the prevalence estimates of diabetes and CVD risk factors in the Indigenous and non-Indigenous New South Wales (NSW) prisoner population in 1996 and 2001 found that while the prevalence of hypertension, hypercholesterolemia, and smoking was lower in the 2001 prison survey compared to the 1996 survey, the prevalence of smoking was still extremely high in both prison surveys (88% in 1996 and 79% in 2001). Additionally, the prison sample had a higher standardized morbidity ratio for angina than the AusDiab population, which is a representative sample of the Australian non-prison population. This study highlights the high prevalence of CVD risk factors, particularly in younger prisoners, when compared with the Australian non-prison population (PUBMED:16222927). The implications of these findings are significant, suggesting that programs should be implemented to routinely screen for chronic disease conditions and to educate Australian prisoners regarding CVD and diabetes risk factors and their long-term management. This is particularly important given that the prisoner population appears to have a higher burden of CVD risk factors, which could lead to increased morbidity and mortality if not properly managed.
Instruction: Does attitude toward epilepsy mediate the relationship between perceived stigma and mental health outcomes in children with epilepsy? Abstracts: abstract_id: PUBMED:17531540 Does attitude toward epilepsy mediate the relationship between perceived stigma and mental health outcomes in children with epilepsy? Objective: The purpose of this study was to examine the extent to which children's attitude toward epilepsy mediates the relationship between perceived stigma and the mental health outcomes of self-concept, behavioral problems, and social competence. Method: Subjects were 173 youth aged 9-14 who had been diagnosed and treated for epilepsy for at least 6 months. A secondary data analysis from a larger study was completed to test if the children's attitude mediated the relationship between stigma and mental health outcomes using multiple regression. Results: Children's attitude was found to mediate the relationships between stigma and self-concept and behavior problems, respectively. In contrast, attitude did not mediate the relationship between stigma and social competence. Conclusions: Results suggest that enhancing a more positive attitude toward having epilepsy might help improve problems with poor self-concept or behavior problems, but might not influence social competence. abstract_id: PUBMED:28431368 Correlates of perceived stigma for people living with epilepsy: A meta-analysis. Objective: Epilepsy, one of the most common, serious chronic neurological diseases, is accompanied by different levels of perceived stigma that affects people in almost all age groups. This stigma can negatively impact the physical and mental health of people living with epilepsy (PLWE). Good knowledge of perceived stigma for PLWE is important. In this study, we conducted a meta-analysis to identify the correlates of perceived stigma for PLWE. Methods: Studies on factors associated with perceived stigma for PLWE, including sociodemographic, psychosocial, and disease-related variables, were searched in PubMed, PsychINFO, EMBASE, and Web of Science. Results: Nineteen variables (k>1) were included in the meta-analysis. For sociodemographic characteristics, findings revealed that the significant weighted mean correlation (R) for "residence" and "poor financial status" were 0.177 and 0.286, respectively. For disease-related characteristics, all variables of significance, including "seizure severity," "seizure frequency," "number of medicines," and "adverse event" (R ranging from 0.190 to 0.362), were positively correlated with perceived stigma. For psychosocial characteristics, "depression" and "anxiety" with R values of 0.414 and 0.369 were significantly associated with perceived stigma. In addition, "social support," "quality of life (QOLIE-31,89)," "knowledge," and "attitude," with R values ranging from -0.444 to -0.200 indicating negative correlation with perceived stigma. Conclusion: The current meta-analysis evaluated the correlates of perceived stigma for PLWE. Results can serve as a basis for policymakers and healthcare professionals for formulating health promotion and prevention strategies. abstract_id: PUBMED:36152384 Perceived stigma in adults with epilepsy in Sweden and associations with country of birth, socioeconomic status, and mental health. Background: Stigma contributes to negative health outcomes and amplifies health disparities in epilepsy. This study aimed to investigate associations of perceived stigma with the country of birth and socioeconomic status (SES). 
Methods: This is a cross-sectional questionnaire study. Participants were recruited consecutively from three demographically different neurology outpatient clinics in the southwest of Sweden. Participants responded to a questionnaire concerning their epilepsy, country of birth, mother tongue, and different SES-variables. The Neuro-QOL stigma scale and the Jacoby stigma scale were applied to assess stigma, and the Hospital Anxiety and Depression Scale (HADS) and PROMIS Mental Health scale were applied to assess mental health. Results: In total 161 adults with epilepsy were included in the cohort. The median Neuro-QOL stigma score was 48.3, and was higher among foreign-born than among native-born participants (foreign-born in a non-European country 52.3, in another European country 49.8, and native-born 47.0, p = 0.003). Other factors associated with Neuro-QOL were seizure frequency in the last year (≥2 seizures 52.4 compared to 1 seizure 50.9 and no seizures 44.3, p < 0.001), having had seizures in public (yes 50.9 compared to no 44.7, p = 0.035), HADS depression score ≥11 (57.4 compared to 47.8 for score <11 points, p < 0.001), HADS anxiety score ≥11 (53.5 compared to 46.8 for score <11 points, p < 0.001), and lower PROMIS Mental Health score (42.9 for PROMIS < 40 compared to 54.4 for PROMIS > 60, p < 0.01). A stepwise multiple regression analysis indicated that having had seizures in the last year increased the average Neuro-QOL stigma score by 5.89 and appeared to be the strongest determinant of the Neuro-QOL stigma score among the variables investigated. Conclusions: It is important that the concerns of foreign-born patients are acknowledged and that the focus on seizure control and the detection and treatment of comorbidities is prioritized in the management of epilepsy and perceived stigma. abstract_id: PUBMED:29567525 Perceived stigma and associated factors among children and adolescents with epilepsy in south western Uganda: A cross sectional study. Purpose: To determine the prevalence of perceived stigma and its associated factors among children and adolescents with epilepsy in southwestern Uganda. Methods: We conducted a cross sectional study at a large referral hospital and a small rural health facility in Mbarara district, southwestern Uganda. Participants were aged 6-18 years, had been managed for epilepsy for at least 3 months, and had no medical emergencies. Perceived stigma was measured using the Kilifi Stigma Scale of Epilepsy. Data on associated factors were collected by a pre-piloted investigator-designed questionnaire. Logistic regression was used to determine associated factors, considering 5% statistical significance. Results: The prevalence of high perceived stigma was 34%, with higher levels among older children and adolescents. Children who had never attended school were more likely to report perceived stigma (62%). Factors associated with this stigma included having epilepsy-related injuries or deformities (p = 0.022), other chronic illnesses (p = 0.009) and a longer duration of antiepileptic drug use (p = 0.004). Conclusions: Perceived stigma of epilepsy remains a major public health problem among children and adolescents, and it is highly associated with preventable or modifiable factors. Therefore, there is a need to design interventions that can address these factors in order to reduce the stigma and its potential future complications, such as educational inequalities.
abstract_id: PUBMED:33402925 Perceived stigma and school attendance among children and adolescents with epilepsy in South Western Uganda. Background: Epilepsy is a neurological disorder that has a high worldwide prevalence with eighty percent of the global burden being in low and middle-income countries. There is a high level of perceived stigma among children and adolescents with epilepsy, which has severe debilitating effects and affects school attendance. Objective: To assess the effect of perceived stigma on school attendance patterns among children and adolescents with epilepsy. Methods: We conducted a cross sectional study among 191 children and adolescents aged from 6-18 years with epilepsy at one large semi-urban hospital and a small rural health center in SouthWestern Uganda. Epilepsy-related perceived stigma was measured using the adapted Kilifi Stigma Scale of Epilepsy and school attendance patterns were assessed using a piloted investigator-designed questionnaire. Results: Children with high-perceived stigma were more likely to have never attended school (13.8%) or started school late (average age 5.7 years) compared to those with low-perceived stigma (average age 4.9 years). Additionally, those with high epilepsy-related perceived stigma repeated classes 2.5 times more compared to those with low-perceived stigma. Conclusion: These preliminary findings suggest correlation between high-perceived stigma and disrupted school attendance patterns among children and adolescents with epilepsy, hence the need to address this social challenge. abstract_id: PUBMED:30333078 Health of parents of individuals with developmental disorders or mental health problems: Impacts of stigma. Objective: Parents of individuals with developmental disorders or mental health problems often provide life-long care and support to their children, which negatively affects their health in part due to chronic stress. This study aimed to examine the experience of stigma as a source of chronic stress among parents of individuals with developmental disorders or mental health problems and the effect of stigma on parental health outcomes. Method: Using data from the Survey of Midlife in the United States (MIDUS 2 and 3), we constructed a sample for a longitudinal analysis including 128 parents of individuals with developmental disorders (e.g., autism, cerebral palsy, epilepsy, Down syndrome, intellectual disabilities, brain injury, ADD/ADHD) or mental health problems (e.g., bipolar disorder, schizophrenia, major depression) and 2256 parents whose children were nondisabled. Results: Parents who had children with developmental disorders or mental health problems prior to the beginning of the study (i.e., at MIDUS 1) reported higher levels of stigma related to embarrassment/shame and daily discrimination than parents of nondisabled individuals ten years later at MIDUS 2, which in turn were associated with poorer parental health outcomes (poorer self-rated health and a greater number of chronic conditions) nearly a decade after that at MIDUS 3. Conclusions: The findings suggest that the stigma associated with parenting a child with disabilities may be one mechanism that places such parents at risk for poor health. Efforts to alleviate the stigma associated with developmental disorders or mental health problems may have beneficial effects on health of parents of individuals with such conditions. abstract_id: PUBMED:32450533 The association between seizure self-efficacy of children with epilepsy and the perceived stigma. 
This study used a descriptive, correlational, and cross-sectional research design to evaluate the effect of seizure self-efficacy of children with epilepsy on their perceived stigma of seizure. The study was conducted with 303 children with epilepsy. The data of the study were collected using "the Seizure Self-Efficacy Scale for Children" and "the Scale for Perceived Stigma in Children with Epilepsy". The mean age of the children included in the study was 12.65 ± 2.37. The correlation between seizure self-efficacy of the children and their perceived seizure stigma was examined; a strong, significant, and negative correlation was found. It is recommended that the awareness of all health professionals should be increased in approaching children with epilepsy and that self-efficacy and stigma should be addressed. abstract_id: PUBMED:34077877 Effects of perceived stigma, unemployment and depression on suicidal risk in people with epilepsy. Purpose: Stigma toward people with epilepsy (PWE) is common around the globe. Perceived stigma produced by mental or physical disorders may represent a significant risk factor for suicide. This study examines whether and how perceived stigma, unemployment and depression interact to influence suicidal risk in PWE. Methods: A consecutive cohort of people with epilepsy (PWE) was recruited from the First Affiliated Hospital of Chongqing Medical University. Each patient completed the Stigma Scale for Epilepsy (SSE), the Neurological Disorders Depression Inventory for Epilepsy scale (NDDI-E) and the suicidality module of the Mini-International Neuropsychiatric Interview (MINI) v.5.0.0. Spearman's correlation and moderated mediation analysis were used to examine the associations among perceived stigma, depression, unemployment and suicidal risk. Results: Perceived stigma was positively associated with depression severity and suicidal risk. Depression severity mediated the association between perceived stigma and suicidal risk. The indirect effect of perceived stigma on suicidal risk through depression severity was positively moderated by unemployment. Conclusions: The effect of perceived stigma on suicidal risk can be explained by the mediation of depression severity. At the same time, getting out from under the shadow of perceived stigma may help reduce suicidal risk in PWE. In addition, improving the employment status of PWE may attenuate the indirect effect of perceived stigma on suicidal risk through depression severity. abstract_id: PUBMED:30032810 Relationships between knowledge, attitudes, stigma, anxiety and depression, and quality of life in epilepsy: A structural equation modeling. Purpose: Epilepsy is not only a medical disorder characterized by seizures, but is also associated with stigma and is a neurological disorder that affects quality of life. Insufficient knowledge and misconceptions about epilepsy, which cause the development of negative attitudes towards patients with epilepsy, increase stigmatization and psychosocial problems and impact quality of life. The aim of this study was to examine the relationship between epilepsy knowledge, attitude, stigma, anxiety and depression, and quality of life within the framework of structural equation modeling. Method: This research was carried out between May 2015 and May 2016 at the Epilepsy Outpatient Clinic of Istanbul University Istanbul Faculty of Medicine.
The Epilepsy Knowledge Scale, Epilepsy Attitude Scale, Stigma Scale, Hospital Anxiety and Depression Scale, and Quality of Life in Epilepsy Scale-10 (QOLIE-10) were used. To determine the relationship between the concepts, research hypotheses were created, and structural equation modeling was performed. Results: Two hundred five patients were included in the study, 53.7% were women, and the mean age was 32.5. It was found that 72.2% had generalized seizures, and 46.3% had more than one seizure per month. According to our proposed model, knowledge had a moderate relationship with attitude (including stigma) (r = 0.50) that, in turn, had a strong relationship with mental health (r = -0.62) and a moderate relationship with quality of life (r = 0.45). Our findings revealed that the proposed model accounted for 20% of the variance in quality of life and 39% of the variance in mental health (anxiety and depression). Conclusion: Mental health and quality of life of patients with epilepsy are closely related to the knowledge, attitude, and perceived stigma of the patients. Therefore, educating/supporting patients can help improve mental health and quality of life. abstract_id: PUBMED:36113252 Perceived stigma and self-esteem for children with epilepsy. Objective: The study was conducted to determine the sense of stigma and self-esteem in children with epilepsy. Methods: The study has a descriptive-correlational design. The research group included 150 children aged 9 to 15 who presented to a hospital in eastern Turkey, the Pediatric Neurology Clinic and Polyclinic, between January and June 2021, and were admitted to a pediatric clinic associated with the epilepsy polyclinic. A personal information form, a child perceptions scale (CPS), and the Coopersmith Self-Esteem Inventory (CSEI) were used to collect the data. Results: Children scored an average of 3.31 ± 1.10 on the CPS and an average of 37.33 ± 27.78 on the CSEI. A statistically significant, high-level, and negative relationship was found between CSEI and CPS scores (p < 0.05). As the CSEI score increases, the CPS score drops. A statistically significant model was obtained from the regression analysis (F = 246.816; p = 0.000; R2 = 0.791; SH = 17.07). The CSEI score significantly predicts the CPS score. Conclusion: The research study revealed that children with epilepsy have a high perception of stigma and low self-esteem levels, and that children's perceptions of being stigmatized are influenced by their educational status, their parents' educational status, their income level, their family type variables, and their self-esteem. The study determined that children's self-esteem decreased as their perception of stigma increased. Answer: Yes, attitude toward epilepsy does mediate the relationship between perceived stigma and mental health outcomes in children with epilepsy. A study found that children's attitude was a mediator in the relationships between stigma and self-concept and behavior problems. However, attitude did not mediate the relationship between stigma and social competence (PUBMED:17531540).
Instruction: Are PSA density and PSA density of the transition zone more accurate than PSA in predicting the pathological stage of clinically localized prostate cancer? Abstracts: abstract_id: PUBMED:18367100 Are PSA density and PSA density of the transition zone more accurate than PSA in predicting the pathological stage of clinically localized prostate cancer? Purpose: To assess whether PSA density (PSAD) and PSA density of the transition zone (PSADTZ) are more accurate than PSA alone in predicting the pathological stage of prostate cancer. Materials And Methods: One hundred and nine consecutive patients with clinically localized prostate cancer and preoperative PSA values over the whole range, treated with radical retropubic prostatectomy and limited pelvic lymph node dissection, were included in this prospective study. Total prostate and transition zone volumes were measured by transrectal ultrasound using the prolate ellipsoid method. PSA, PSAD, and PSADTZ were compared to percentage of positive biopsy cores (% PC), biopsy and surgical Gleason score, and pathological stage, using univariate and multivariate analysis. Results: Pathological stage was pT2a, pT2b, pT3a, and pT3b in 25.6%, 37.7%, 25.6%, and 11.1% of patients, respectively. Lymph node metastases were found in 4.6% of patients. PSA, PSAD, and PSADTZ were significantly related to % PC, biopsy, and surgical Gleason score and pathological stage (P < 0.001), and were equally able to predict higher pathological stage, i.e., seminal vesicle invasion and lymph node metastases. Only by adding % PC in multivariate analysis was it possible to discriminate intra- from extracapsular tumors. Conclusions: The results of the present study demonstrate that PSAD and PSADTZ failed to outperform PSA in preoperative stage prediction of prostate cancer, possibly because the formula used to calculate them does not eliminate the contribution to total PSA of the nonmalignant portion of the gland. abstract_id: PUBMED:33028274 Zonal adjusted PSA density improves prostate cancer detection rates compared with PSA in Taiwanese males with PSA < 20 ng/ml. Background: The current study aimed to compare the efficacy of transition zone PSA density (TZPSAD) with traditional PSA and PSA density (PSAD) for the diagnosis of prostate cancer (PCa) in Taiwanese males. Methods: Men with PSA between 4.0 and 20.0 ng/ml who underwent a transrectal ultrasound (TRUS) guided prostate biopsy during the study period were retrospectively identified. The demographic data, PSAD and TZPSAD were calculated in all patients. Receiver operating characteristic (ROC) curves were used to analyze the accuracy of a positive PCa diagnosis. Results: The area under the ROC (AUC) was 0.615, 0.748 and 0.746 for PSA, PSAD and TZPSAD, respectively. The best cut-off value for TZPSAD in predicting PCa in men with a PSA of 4.0-10.0 ng/ml was 0.367 ng/ml/ml with a sensitivity of 50% and a specificity of 77.5%. In men with a PSA of 10.1-20.0 ng/ml, the best cut-off value was 0.454 ng/ml, with a sensitivity of 74.8% and specificity of 70.9%. Conclusion: The use of TZPSAD can improve the efficiency and specificity of PSA for the diagnosis of PCa in Taiwanese men with PSA 4.0-20.0 ng/ml. TZPSAD efficiency was similar to PSAD but TZPSAD had better cancer specificity. abstract_id: PUBMED:34094990 PSA Density Help to Identify Patients With Elevated PSA Due to Prostate Cancer Rather Than Intraprostatic Inflammation: A Prospective Single Center Study.
The association between PSA density, prostate cancer (PCa) and BPH is well established. The aim of the present study was to establish whether PSA density can be used as a reliable parameter to predict csPCa and to determine its optimal cutoff to exclude increased PSA levels due to intraprostatic inflammation. This is a large prospective single-center, observational study evaluating the role of PSA density in the discrimination between intraprostatic inflammation and clinically significant PCa (csPCa). Patients with PSA ≥ 4 ng/ml and/or positive digito-rectal examination (DRE) and scheduled for prostate biopsy were enrolled. Prostatic inflammation (PI) was assessed and graded using the Irani Scores. Multivariable binary logistic regression analysis was used to assess if PSA density was associated with clinically significant PCa (csPCa) rather than prostatic inflammation. A total of 1988 patients met the inclusion criteria. Any PCa and csPCa rates were 47% and 24% respectively. In the group without csPCa, patients with prostatic inflammation had a higher PSA (6.0 vs 5.0 ng/ml; p=0.0003), higher prostate volume (58 vs 52 cc; p<0.0001), were more likely to have a previous negative biopsy (29% vs 21%; p=0.0005) and a negative DRE (70% vs 65%; p=0.023) but no difference in PSA density (0.1 vs 0.11; p=0.2). Conversely in the group with csPCa, patients with prostatic inflammation had a higher prostate volume (43 vs 40 cc; p=0.007) but no difference in the other clinical parameters. At multivariable analysis adjusting for age, biopsy history, DRE and prostate volume, PSA density emerged as a strong predictor of csPCA but was not associated with prostatic inflammation. The optimal cutoffs of PSA density to diagnose csPCa and rule out the presence of prostatic inflammation in patients with an elevated PSA (>4 ng/ml) were 0.10 ng/ml2 in biopsy naïve patients and 0.15 ng/ml2 in patients with a previous negative biopsy. PSA density rather than PSA, should be used to evaluate patients at risk of prostate cancer who may need additional testing or prostate biopsy. This readily available parameter can potentially identify men who do not have PCa but have an elevated PSA secondary to benign conditions. abstract_id: PUBMED:36757372 Immuno-analytical characteristics of PSA and derived biomarkers (total PSA, free PSA, p2PSA) Prostate-specific antigen (PSA) is the recommended tumor marker for individual screening and follow-up of prostate cancer. This paper reviews main structural and physiological data about prostate specific antigen isoforms: total PSA, free PSA, [-2]proPSA (also named p2PSA). It describes the pre-, per- and post-analytical conditions for these different parameters. It presents the interpretation of results and derived calculated indices (free/total PSA ratio, Prostate Health Index or PHI) for the management of prostate cancer (initial diagnosis and follow-up). abstract_id: PUBMED:33670632 Optimized Identification of High-Grade Prostate Cancer by Combining Different PSA Molecular Forms and PSA Density in a Deep Learning Model. After skin cancer, prostate cancer (PC) is the most common cancer among men. The gold standard for PC diagnosis is based on the PSA (prostate-specific antigen) test. Based on this preliminary screening, the physician decides whether to proceed with further tests, typically prostate biopsy, to confirm cancer and evaluate its aggressiveness. 
Nevertheless, the specificity of the PSA test is suboptimal and, as a result, about 75% of men who undergo a prostate biopsy do not have cancer even if they have elevated PSA levels. Overdiagnosis leads to unnecessary overtreatment of prostate cancer with undesirable side effects, such as incontinence, erectile dysfunction, infections, and pain. Here, we used artificial neuronal networks to develop models that can diagnose PC efficiently. The model receives as an input a panel of 4 clinical variables (total PSA, free PSA, p2PSA, and PSA density) plus age. The output of the model is an estimate of the Gleason score of the patient. After training on a dataset of 190 samples and optimization of the variables, the model achieved values of sensitivity as high as 86% and 89% specificity. The efficiency of the method can be improved even further by training the model on larger datasets. abstract_id: PUBMED:35463344 Modified Prostate Health Index Density Significantly Improves Clinically Significant Prostate Cancer (csPCa) Detection. Background: Early screening of clinically significant prostate cancer (csPCa) may offer opportunities in revolutionizing the survival benefits of this lethal disease. We sought to introduce a modified prostate health index density (mPHI) model using imaging indicators and to compare its diagnostic performance for early detection of occult onset csPCa within the prostate-specific antigen (PSA) gray zone with that of PHI and PHID. Methods And Participation: Between August 2020 and January 2022, a training cohort of 278 patients (total PSA 4.0-10.0 ng/ml) who were scheduled for a prostate biopsy were prospectively recruited. PHI and PHID were compared with mPHI (LDTRD×APD×TPV×PHI) for the diagnosis performance in identifying csPCa. Pathology outcomes from systematic prostate biopsies were considered the gold standard. Results: This model was tested in a training cohort consisting of 73 csPCa, 14 non-clinically significant prostate cancer(non-csPCa), and 191 benign prostatic hyperplasia (BPH) samples. In the univariate analysis for the PSA gray zone cohort, for overall PCa, the AUC of mPHI (0.856) was higher than PHI (0.774) and PHID (0.835). For csPCa, the AUC of mPHI (0.859) also surpassed PHI (0.787) and PHID (0.825). For detection of csPCa, compared with lower specificities from PHI and PHID, mPHI performed the highest specificity (76.5%), by sparing 60.0% of unnecessary biopsies at the cost of missing 11 cases of csPCa. The mPHI outperformed PHI and PHID for overall PCa detection. In terms of csPCa, mPHI exceeds diagnostic performance with a better net benefit in decision curve analysis (DCA) compared with PHI or PHID. Conclusions: We have developed a modified PHI density (mPHI) model that can sensitively distinguish early-stage csPCa patients within the PSA gray zone. Clinical Trial Registration: ClinicalTrials.gov, NCT04251546. abstract_id: PUBMED:30543191 Relevance of total PSA and free PSA prescriptions. The total PSA testing has been frequently criticized for its effectiveness in the early detection of prostate cancer. These widely resumed in and out of the medical community have led to a decrease in prescriptions. Recommendations against the use of PSA for screening have been issued in France and in most countries. In the US, the significant decline in total PSA use since 2004 has led to an increase in advanced cancer diagnosis, which has led the authorities to back down. 
The US Preventive Services Task Force (USPSTF) now recommends that men between the ages of 55 and 69 make an individual decision about prostate cancer screening with their clinician. The purpose of this work is to summarize good practice for the use of total and free PSA in light of these recent data. abstract_id: PUBMED:10957773 Additional aids in detection of prostate carcinomas? PSA-prostatic volume quotient, PSA-doubling time, age-dependent PSA reference values and PSA in urine Different concepts to improve the clinical utility of prostatic-specific antigen (PSA) for prostate cancer detection have been developed. PSA density and transition zone PSA density are not useful screening tools due to a high variability of prostate volume measurement. PSA velocity monitors the change in PSA over a 2-year period; therefore, it is not suitable for treatment decisions in men with serum PSA levels between 4 and 10 ng/ml. Measurement of urinary PSA is not helpful for prostate cancer detection. Age-specific PSA reference ranges provide greater sensitivity for cancer detection in younger men at the expense of a greater negative biopsy rate. In older men unnecessary biopsies could be spared. However, this might lead to a lower sensitivity in these patients, possibly missing clinically insignificant tumors. abstract_id: PUBMED:11221034 Validity of PSA density of the transition zone in the diagnosis of prostate cancer One hundred four patients (mean age 70.6 years) with prostatic specific antigen (PSA) values between 4 and 10 ng/ml (average 7.9 ng/ml), and with no suspicion of neoplasia on digital rectal examination (DRE) and transrectal ultrasound (TRUS), were studied. In all patients PSA density for the entire prostate (PSAD) and PSA density for the transition zone (PSAT) were calculated. TRUS was performed using a 5 MHz probe. Prostate and transition zone volumes were obtained by the ellipsoid formula. The aim of the study was to evaluate the predictivity of PSAT for prostate cancer compared to PSAD. Sixteen out of 104 patients (15.4%) had histologically confirmed prostate cancer, and 88 (84.6%) had benign prostatic hyperplasia. When the cut-off for PSAD was 0.15 ng/ml/cc, specificity and sensitivity were respectively 75% and 68%, with positive and negative predictive values of 54% and 17%; when the cut-off for PSAT was 0.34 ng/ml/cc, sensitivity and specificity were respectively 100% and 68%, with positive and negative predictive values of 60% and 18%. Our results, in agreement with the literature data, suggest that PSAT seems to have a higher predictivity for prostate cancer than PSAD, helping to optimize the use of prostatic biopsy, especially for those patients with PSA values between 4 and 10 ng/ml. abstract_id: PUBMED:27896239 Prostate-Specific Antigen Density and Gleason Score Predict Adverse Pathologic Features in Patients with Clinically Localized Prostate Cancer. Background: According to recent studies, prostate cancer is the second most common cancer among Iranian men. Radical prostatectomy has been considered the gold standard treatment in patients with clinically localized prostate cancer. Gleason score, PSA density, and PSA velocity are some of the parameters used to predict adverse pathologic features. Objectives: The aim of this study was to evaluate the prognostic value of PSA density and Gleason score in predicting adverse pathologic features in patients with localized prostate cancer who undergo radical prostatectomy.
Methods: We conducted a cross-sectional study of 105 patients with localized prostate cancer who underwent radical prostatectomy between 2006 and 2013. We recorded Gleason scores and PSA levels, in addition to the results of pathological evaluations after radical prostatectomy, including prostate volume, stage, LNI (lymph node involvement), SVI (seminal vesicle invasion), and extraprostatic extension (EPE). Data were analyzed using SPSS version 21. Results: Mean PSA density was 0.27 (0.17 SD). The frequencies of EPE, SVI, and LNI were 21.9, 16.2, and 2.9, respectively. The Mann-Whitney U-test demonstrated a significant correlation between PSA density and adverse pathologic features (EPE, SVI, and LNI). Conclusions: PSA, PSA density, and Gleason score should be considered together in order to more accurately predict the adverse pathologic features of prostate cancer. Answer: The available abstracts present mixed findings regarding the accuracy of PSA density (PSAD) and PSA density of the transition zone (PSADTZ) compared to PSA alone in predicting the pathological stage of clinically localized prostate cancer. One study found that PSAD and PSADTZ failed to outperform PSA in preoperative stage prediction of prostate cancer, suggesting that the formula used to calculate them does not eliminate the contribution to total PSA of the nonmalignant portion of the gland (PUBMED:18367100). This indicates that PSAD and PSADTZ may not be more accurate than PSA alone in predicting the pathological stage. However, another study reported that the use of TZPSAD can improve the efficiency and specificity of PSA for the diagnosis of prostate cancer in Taiwanese men with PSA levels between 4.0-20.0 ng/ml, with TZPSAD showing better cancer specificity compared to PSAD (PUBMED:33028274). This suggests that in certain populations, PSADTZ may offer some advantages over PSA alone. A further study indicated that PSA density is a strong predictor of clinically significant prostate cancer (csPCa) but was not associated with prostatic inflammation, suggesting that PSA density could be a useful parameter to distinguish between elevated PSA levels due to cancer versus benign conditions like intraprostatic inflammation (PUBMED:34094990). Other studies have explored the use of different PSA-derived biomarkers and their combinations, such as the Prostate Health Index (PHI) and its density (PHID), and have developed models that incorporate these markers to improve the diagnosis of prostate cancer (PUBMED:35463344, PUBMED:33670632). One study introduced a modified PHI density (mPHI) model that showed improved diagnostic performance for early detection of csPCa within the PSA gray zone compared to PHI and PHID (PUBMED:35463344). In summary, while some studies suggest that PSAD and PSADTZ may not be more accurate than PSA alone in predicting the pathological stage of prostate cancer (PUBMED:18367100), others indicate that PSADTZ and other derived biomarkers could improve cancer detection rates and specificity in certain contexts (PUBMED:33028274, PUBMED:34094990, PUBMED:35463344). Therefore, the accuracy of PSAD and PSADTZ compared to PSA alone may depend on the specific clinical context and the population being studied.
Instruction: Delayed attenuation of myocardial ischemia with repeated exercise in subjects with stable angina: a possible model for the second window of protection? Abstracts: abstract_id: PUBMED:11099170 Delayed attenuation of myocardial ischemia with repeated exercise in subjects with stable angina: a possible model for the second window of protection? Aims: A delayed myocardial protection extends between 24 and 96 h after ischemic preconditioning in animals. To test for this phenomenon in humans, subjects with stable angina were subjected to exercise test-induced myocardial ischemia and the effect of this "preconditioning" ischemic insult on the exercise-induced myocardial ischemia with the re-exercise after 24-96 hours was studied. Methods And Results: Forty-eight males with a history of infarction and positive exercise test were recruited to the study. After baseline symptom-limited exercise test, the subjects were randomized to four experimental groups (n = 12/group). The groups were allowed to recover for 24 h, 48 h, 72 h or 96 h before performing the second exercise test. Variables analyzed were heart rate-systolic blood pressure product at 1 mm ST segment depression, time to 1 mm ST segment depression, maximum ST segment depression, exercise duration, and the total ischemic time. There were no intergroup differences in baseline values for these variables. All variables were significantly improved at 24 h, the improvement peaked usually at 48 h (maximum increase in the variables by 31-46%), and the variables returned to baseline by 96 h after the first test. Conclusions: The exercise-induced ischemia caused transient attenuation of myocardial ischemia with re-exercise. Although the time-window and the time-course of this effect shows striking resemblance to those of the delayed preconditioning in animals, its mechanism remains speculative. The most probable mechanisms that may be involved include increased myocardial perfusion and/or some adaptive changes in the myocardium, the delayed preconditioning being one possibility. abstract_id: PUBMED:12679219 Exercise-induced ischemia initiates the second window of protection in humans independent of collateral recruitment. Objectives: This study was designed to examine if exercise-induced ischemia initiated late preconditioning in humans that becomes manifest during subsequent exercise and serial balloon occlusion of the left anterior descending coronary artery (LAD). Background: The existence of late preconditioning in humans is controversial. We therefore compared myocardial responses to exercise-induced and intracoronary balloon inflation-induced ischemia in two groups of patients subjected to different temporal patterns of ischemia. Methods: Thirty patients with stable angina secondary to single-vessel LAD disease underwent percutaneous coronary intervention (PCI) after two separate exercise tolerance test (ETT) protocols designed to investigate isolated early preconditioning (IEP) alone or the second window of protection (SWOP). The IEP subjects underwent three sequential ETTs at least two weeks before PCI. The SWOP subjects underwent five sequential ETTs commencing 24 h before PCI. Results: During PCI there was no significant difference in intracoronary pressure-derived collateral flow index (CFI) between groups (IEP = 0.15 +/- 0.13, SWOP = 0.19 +/- 0.15). 
In SWOP patients, compared with the initial ETT, the ETT performed 24 h later had a 40% (p < 0.001) increase in time to 0.1-mV ST depression and a 60% (p < 0.05) decrease in ventricular ectopic frequency. During the first balloon inflation, peak ST elevation was reduced by 49% (p < 0.05) in the SWOP versus the IEP group, and the dependence on CFI observed in the IEP group was abolished (analysis of covariance, p < 0.05). The significant attenuation of ST elevation (47%, p < 0.005) seen at the time of the second inflation in the IEP patients was not seen in the SWOP patients. Conclusions: Exercise-induced ischemia triggers late preconditioning in humans, which becomes manifest during exercise and PCI. This is the first evidence that ischemia induced by coronary occlusion is attenuated in humans by a late preconditioning effect induced by exercise. abstract_id: PUBMED:9822094 Attenuation of myocardial ischemia with repeated exercise in subjects with chronic stable angina: relation to myocardial contractility, intensity of exercise and the adenosine triphosphate-sensitive potassium channel. Objectives: This study characterized the attenuation of myocardial ischemia observed with re-exercise to determine whether: 1) a differing exercise intensity modifies this attenuation; 2) it could be explained by contractile down-regulation or stunning; 3) it is mediated by activation of ATP-sensitive potassium channels (K+-ATP). Background: Subjects with ischemic heart disease (IHD) frequently note less angina with re-exercise after a brief rest. Potential mechanisms of this 'warm-up' phenomenon have been little explored. Methods: IHD subjects with a positive exercise test were studied. Groups I and II (12 subjects each) underwent 2 successive Naughton protocol exercise echocardiography tests (with 1 min instead of 2 min stages for Group II). Group D (10 subjects) had type II diabetes, were on > or =10 mg daily of the K+-ATP blocker, glibenclamide, and underwent the group I exercise protocol. The ischemic threshold or rate-pressure product at 1 mm ST segment depression, ST depression corresponding to the peak rate-pressure product of the first exercise (maximum ST depression equivalent), and left ventricular wall motion indexes before and immediately after each exercise were analyzed. Results: Exercise-induced myocardial ischemia with re-exercise was similarly attenuated in groups I, II, and D. The ischemic threshold was raised by nearly 20% with re-exercise (p=0.001, p=0.02, and p=0.02, respectively) and the maximum ST depression equivalent was nearly halved on re-exercise (p=0.005, p=0.006, and p=0.001, respectively). Exercise-induced wall motion dysfunction was attenuated with re-exercise. In group I, wall motion returned to the initial baseline score prior to exercise 2, whereas in the more intense protocol of group II, wall motion dysfunction persisted prior to exercise 2. Conclusions: Thus, the attenuation of myocardial ischemia observed with re-exercise appears to be independent of the intensity of the exercise protocol and is not explained by down-regulation of myocardial contractility induced by the initial ischemic stimulus. Since results were similar in diabetic subjects on robust doses of glibenclamide, this phenomenon does not appear to be mediated by K+-ATP activation. abstract_id: PUBMED:8721640 Adaptation to myocardial ischemia during repeated dynamic exercise in relation to findings at cardiac catheterization. 
It has been suggested that the myocardium is able to recruit endogenous protective mechanisms in response to repeated ischemia and reperfusion. We set out to study whether this is manifested in patients with coronary artery disease in the form of fewer signs of myocardial ischemia during the second of two successive exercise tests and whether any relations exist between ischemia adaptation and findings at cardiac catheterization. Twenty-one patients with typical angina pectoris symptoms underwent two repeated bicycle exercise tests with identical protocols, followed by cardiac catheterization and coronary angiography the next day. The first exercise test was discontinued whenever a 2 mm ST depression in the electrocardiogram (ECG) was achieved or further exercise was limited by symptoms. The second exercise test was performed after disappearance of the symptoms or ST depression or both. Kaplan-Meier survival analysis for the appearance of a 1 mm ST depression demonstrated improved ischemia tolerance during the second test, when the required time for its appearance was significantly longer (6.5 +/- 0.8 min vs 4.5 +/- 0.5 min; p = 0.005). The maximal intensity of anginal pain was lower during the second exercise (2.2 +/- 1.0 min vs 0.7 +/- 0.3 min in Borg's scale; p < 0.001), and the time required for disappearance of the ST depression was shorter after this exercise (3.0 +/- 0.8 min vs 6.2 +/- 0.9 min; p = 0.003), with a similar tendency in the disappearance of angina. The rate-pressure product on the appearance of a 1 mm ST depression was significantly higher during the second test (17,990 +/- 1210 mm Hg x min-1 vs 15,960 +/- 869 mm Hg x min-1; p = 0.009). Eighteen of the patients had three-vessel disease, as evidenced by coronary angiography, and the change in the time required for the appearance of a 1 mm ST depression in the repeated exercise tests was inversely correlated with the severity of the left anterior descending (LAD) coronary artery obstruction (r = -0.61; p = 0.006) and left ventricular end-diastolic pressure (r = -0.50; p = 0.03). No significant correlation with the degree of collateral vessels was found. We conclude that most patients with extensive coronary artery disease are able to increase their tolerance of ischemia during repeated dynamic exercise and that increased vasodilation and oxygen delivery are the major mechanisms for this warm-up phenomenon. On the other hand, collaterals visible in routine resting angiography do not predict the degree of adaptation to ischemia during repeated dynamic exercise. abstract_id: PUBMED:14962490 Delayed preconditioning of the human myocardium: signal transduction and clinical implications. Objective: Ischemic preconditioning confers cardioprotection in early and delayed phases. We investigated the delayed window of pharmacological and ischemic preconditioning in human myocardium, and the involvement of mitoKATP, PKC and p38MAPK. Methods: These studies were carried out using human right atrial tissue in a cell necrosis model. The tissue was obtained from patients undergoing coronary artery surgery. Results: The second window triggered by ischemia, phenylephrine or adenosine resulted in similar cardioprotection between 24 and 72 h following the intervention. Atrial tissue taken from patients with a single episode of angina between 24 and 72 h prior to surgery was already protected, and preconditioning with ischemia, phenylephrine or adenosine did not add to the protection.
The trigger of preconditioning with mitoKATP channel opener diazoxide, PKC activator PMA and p38MAPK activator anisomycin produced similar delayed protection to that of ischemia or phenylephrine. Cardioprotection was lost when mitoKATP channels were blocked by 5HD, PKC by chelerythrine and p38MAPK by SB203580 24 h after the trigger of preconditioning. Conclusions: Ischemic and pharmacological preconditioning induce similar delayed cardioprotection of the human heart. This second window of protection that is seen between 24 and 72 h occurs in vitro and in vivo and requires opening of mitoKATP channels and activation of PKC and p38MAPK. abstract_id: PUBMED:15174324 Exercise and nicotinic acid delayed action drug Enduracin: application in outpatient rehabilitation of patients with ischemic heart disease Aim: To evaluate the efficacy of combined use of moderate exercise and the nicotinic acid drug enduracin in patients with coronary heart disease (CHD) with moderate dyslipidemia (DE). Material And Methods: The effects of exercise therapy alone, enduracin alone and their combination on physical performance (PP), hemodynamics, blood lipid spectrum and clinical course of CHD were studied in 93 CHD patients with moderate DE. The results were evaluated clinically after 1 year of treatment. Results: Combined use of exercise and enduracin in CHD patients was effective, improving PP, hemodynamics at rest and during exercise testing, left ventricular systolic function, and the clinical course, and reducing DE. Enduracin + exercise appeared more effective than either used as monotherapy. Enduracin monotherapy had a positive action on PP, arterial pressure and the frequency of anginal attacks. Conclusion: Enduracin is recommended as monotherapy and in combination with moderate exercise in outpatient rehabilitation and secondary prophylaxis of CHD patients with moderate dyslipidemia and angina pectoris to relieve myocardial ischemia under exercise, to raise PP, improve lipid composition of blood and prevent maladaptive left ventricular remodeling. abstract_id: PUBMED:17703339 Predicting breast attenuation in patients undergoing myocardial perfusion scintigraphy: a digital x-ray study. Attenuation artifacts are the most common sources of error in myocardial single-photon emission computed tomography (SPECT) imaging. Breast artifacts are the most frequent causes of false positive planar images in female subjects. The purpose of this study was to predict breast adverse attenuation by measuring breast tissue thickness with digital x-ray. Sixty-five consecutive female patients with angina pectoris who were referred to myocardial perfusion scintigraphy were enrolled in this study. Eighteen patients with normal perfusion imaging and normal coronary angiography composed the first group, whereas the second group consisted of 28 patients with a positive exercise electrocardiogram with anterior ischemia on myocardial perfusion imaging and greater than 50% left anterior descending artery stenosis on angiography. Nineteen patients in the third group had normal exercise electrocardiograms and normal coronary angiographies, but anterior ischemia on perfusion imaging. Digital x-ray records were obtained for measuring breast tissue thickness and Hounsfield density. The rate of breast adverse attenuation was 40% (19/47) in patients with anterior ischemia. The sensitivity and specificity of the prediction of breast adverse attenuation (lateral density less than 550 Hounsfield) were 79% and 11%, respectively.
When breast attenuation for a breast thickness greater than 6 cm measured in the left anterior oblique view was predicted, the sensitivity and specificity were 79% and 93%, respectively. In conclusion, breast thickness greater than 6 cm measured from the left anterior oblique view with digital x-ray can predict breast adverse attenuation in female patients, and thereby may decrease the number of unnecessary invasive diagnostic procedures to be performed. abstract_id: PUBMED:1894759 Comparison of the effect of isosorbide-5-mononitrate and isosorbide dinitrate in a slow-release form on exercise-induced myocardial ischemia. A randomized, double blind, placebo-controlled crossover study on 20 patients with exercise-induced angina pectoris and reproducible ST-segment depression during exercise-stress test was performed to compare the effect of a single dose of 120 mg of isosorbide dinitrate in a slow-release form with that of a twice-daily application of 20 mg of isosorbide-5-mononitrate. Symptom-limited exercise tests were done, and nitrate plasma levels were measured in the subjects 6, 10, and 24 hours after the first administration of the drug. Both drugs produced a highly significant reduction in the size of exercise-induced ST-depressions (P less than .001) 6 and 10 hours after the first administration of isosorbide dinitrate as well as 6 hours after the first and 4 hours after the second dose of isosorbide-5-mononitrate. The effect was still significant (P less than .05) 24 hours after the administration of isosorbide dinitrate in a slow-release form and 18 hours after the second dose of isosorbide-5-mononitrate. In the case of the drug isosorbide dinitrate, nitrate plasma levels for its metabolite, isosorbide-5-mononitrate, were highest 10 hours after first application. In the case of the drug isosorbide-5-mononitrate, nitrate plasma levels were highest 4 hours after the second dose. Two 20 mg doses of isosorbide-5-mononitrate and a single dose of 120 mg isosorbide dinitrate in a slow release form have a comparable effect on the reduction of exercise-induced ST-segment depressions. abstract_id: PUBMED:7596123 Acute and chronic effect of molsidomine extended release on exercise capacity in patients with stable angina, a double-blind cross-over clinical trial versus placebo. A double-blind, placebo-controlled, cross-over study was performed in 50 patients with ischemic heart disease and stable angina to determine the duration of efficacy of 8 mg molsidomine in extended-release form. Exercise testing was performed at baseline and 2, 4, 6, 8, and 10 h after intake of either the medication or the placebo. Total duration of exercise (in minutes) and total work performance (workload x min) was significantly improved in the molsidomine retard group, not only compared with baseline but also with placebo for all time-points. ST segment depression at 60 W and at maximal exercise improved similarly until 10 h after molsidomine retard treatment. The rate-pressure product (heart rate x systolic blood pressure) showed significant improvement only at 60 W. No attenuation of the obtained effects was observed after 14 days of treatment. The number of anginal attacks and the consumption of sublingual nitroderivates were significantly reduced with molsidomine retard 8 mg as compared with placebo. Molsidomine retard 8 mg is effective until at least 10 h after oral (p.o.) intake. A dose schedule of molsidomine retard 8 mg twice daily definitely reduces anginal symptoms. 
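The digital x-ray abstract above (PUBMED:17703339) summarizes its prediction rule for breast adverse attenuation as sensitivity and specificity figures. As a minimal illustration of how such figures are derived from a 2x2 classification table, the following Python sketch uses hypothetical counts (not the study's data), with "breast thickness greater than 6 cm" standing in for the test and "adverse attenuation on perfusion imaging" for the condition; the counts are chosen only so the outputs land near the reported magnitudes.

```python
# Minimal sketch: sensitivity, specificity and predictive values from a 2x2 table.
# The counts below are hypothetical placeholders, not data from PUBMED:17703339.

def diagnostic_metrics(tp, fn, fp, tn):
    """Return (sensitivity, specificity, PPV, NPV) for a binary test."""
    sensitivity = tp / (tp + fn)  # true positives among all patients with the condition
    specificity = tn / (tn + fp)  # true negatives among all patients without the condition
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sensitivity, specificity, ppv, npv

# Hypothetical example: thickness > 6 cm as the test, adverse attenuation as the condition.
sens, spec, ppv, npv = diagnostic_metrics(tp=15, fn=4, fp=3, tn=43)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```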
abstract_id: PUBMED:11796545 Does a selective adenosine A(1) receptor agonist protect against exercise induced ischaemia in patients with coronary artery disease? Background: The "warm up" effect in angina may represent ischaemic preconditioning, which is mediated by adenosine A(1) receptors in most models. Objective: To investigate the effect of a selective A(1) agonist, GR79236 (GlaxoSmithKline), on exercise induced angina and ischaemic left ventricular dysfunction in patients with coronary artery disease. Design: A double blind crossover study. Patients: 25 patients with multivessel coronary artery disease. Interventions: On mornings one week apart, patients received intravenous GR79236 10 microgram/kg or placebo, and then carried out two supine bicycle exercise tests separated by 30 minutes. Equilibrium radionuclide angiography was done before and during exercise. Results: The onset of chest pain or 1 mm ST depression was delayed and occurred at a higher rate-pressure product during the second exercise test following either placebo or GR79236. Compared with placebo, GR79236 did not affect these indices during equivalent tests. GR79236 reduced resting global ejection fraction from (mean (SD)) 63 (7)% to 61 (5)% (p < 0.05) by a selective reduction in the regional ejection fraction of "ischaemic" left ventricular sectors (those where the ejection fraction fell during the first exercise test following placebo). Ischaemic sectors showed increased function during the second test following placebo (72 (21)% v 66 (20)%; p = 0.0001), or during the first test following GR79236 (69 (21)% v 66 (20)%; p = 0.0001). Sequential exercise further increased the function of ischaemic sectors even after drug administration. Conclusions: GR79236 failed to mimic the warm up effect, and warm up occurred even in the presence of this agent. This suggests that ischaemic preconditioning is not an important component of this type of protection. The complex actions of the drug on regional left ventricular function at rest and during exercise suggest several competing A(1) mediated actions. Answer: The concept of delayed attenuation of myocardial ischemia with repeated exercise in subjects with stable angina has been explored as a possible model for the second window of protection. This phenomenon, observed in humans, is characterized by a transient reduction in myocardial ischemia upon re-exercise after a period of 24 to 96 hours following an initial ischemic event, such as an exercise test (PUBMED:11099170). The improvement in exercise-induced ischemic parameters, such as heart rate-systolic blood pressure product at 1 mm ST segment depression, time to 1 mm ST segment depression, and exercise duration, typically peaks at around 48 hours and returns to baseline by 96 hours after the first exercise test (PUBMED:11099170). Further studies have supported the existence of this late preconditioning effect in humans, showing that exercise-induced ischemia can trigger a protective response that becomes evident during subsequent exercise and interventions like percutaneous coronary intervention (PCI). This late preconditioning effect is independent of collateral recruitment, as demonstrated by similar intracoronary pressure-derived collateral flow index (CFI) between groups subjected to different ischemic patterns (PUBMED:12679219). 
The attenuation of myocardial ischemia with re-exercise has been shown to be independent of exercise intensity and is not explained by down-regulation of myocardial contractility induced by the initial ischemic stimulus. Additionally, this phenomenon does not appear to be mediated by activation of ATP-sensitive potassium channels (K+-ATP), as evidenced by similar results in diabetic subjects on robust doses of glibenclamide, a K+-ATP blocker (PUBMED:9822094). The adaptation to myocardial ischemia during repeated dynamic exercise has also been linked to increased vasodilation and oxygen delivery, rather than the presence of collateral vessels visible in routine resting angiography (PUBMED:8721640). This suggests that endogenous protective mechanisms may be recruited in response to repeated ischemia and reperfusion. In summary, the delayed attenuation of myocardial ischemia with repeated exercise in subjects with stable angina does present a possible model for the second window of protection, resembling the delayed preconditioning observed in animal studies. The mechanisms behind this protective effect in humans are still speculative but may involve increased myocardial perfusion and adaptive changes in the myocardium (PUBMED:11099170).
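Several of the abstracts in this record express the ischemic threshold as the rate-pressure product (heart rate multiplied by systolic blood pressure) at 1-mm ST depression and compare it across the two exercise tests. The small sketch below shows that arithmetic; the heart rates and pressures are hypothetical, chosen only to resemble the order of magnitude reported in PUBMED:8721640.

```python
# Sketch of the ischemic-threshold arithmetic used in the warm-up studies above.
# Heart rates and blood pressures below are illustrative, not patient data.

def rate_pressure_product(heart_rate_bpm, systolic_bp_mmhg):
    """Rate-pressure product, a standard index of myocardial oxygen demand."""
    return heart_rate_bpm * systolic_bp_mmhg

def percent_change(first, second):
    return 100.0 * (second - first) / first

# Hypothetical thresholds at the appearance of 1-mm ST depression on two successive tests.
first_test = rate_pressure_product(heart_rate_bpm=120, systolic_bp_mmhg=134)
second_test = rate_pressure_product(heart_rate_bpm=126, systolic_bp_mmhg=142)

print(f"first test RPP:  {first_test} mm Hg x min-1")
print(f"second test RPP: {second_test} mm Hg x min-1")
print(f"change with re-exercise: {percent_change(first_test, second_test):+.1f}%")
```

A higher rate-pressure product at the same 1-mm ST criterion on the second test is what the abstracts interpret as an increased ischemic threshold, i.e., the warm-up effect.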
Instruction: Race: predictor versus proxy variable? Abstracts: abstract_id: PUBMED:28678941 Analysis of prevalence of self-reported hearing loss and associated factors: primary versus proxy informant The objective was to evaluate differences between prevalence rates for self-reported hearing loss and associated factors, obtained from responses by primary and proxy informants in a Population-Based Study on Human Communication Disorders (DCH-POP in Portuguese). This was a study on epidemiological methods using data from a cross-sectional household survey with a sample of 1,253 individuals from Southern Brazil. To verify differences between prevalence rates comparing primary informants and proxy informants, we used the chi-square or Fisher's exact test for categorical variables and Mann-Whitney for continuous variables. The log-binomial model was adjusted for hearing loss as the dependent variable, considering three datasets: the entire sample, only primary informants, and only proxy informants, estimating association by prevalence ratios. In the final models, only the independent variables age and dizziness were associated with hearing loss, independently of the dataset that was used. Proxy informants generally underestimated the prevalence rates for the target outcomes, when compared to primary informants. abstract_id: PUBMED:7885511 Munchausen syndrome by proxy The term "Munchausen by proxy" denotes a special form of factitious disorder and a rare kind of child abuse. We present the case of a 12-year-old boy, whose mother's manipulations culminated in his immobilization in a wheelchair and the implantation of a pacemaker. Problems of diagnosis and clinical management of Munchausen by proxy are discussed. abstract_id: PUBMED:11204335 Munchausen's syndrome by proxy--a malignant form of child abuse Munchhausen syndrome by proxy is a malignant form of child abuse in which illness in a child is fabricated and/or induced by a parent. It can result in serious illness and even death of the child and it is difficult to detect. The authors present numerous examples from the literature. They examine differences in the ways perpetrators harm their victims. Various personality types and emotional disturbances seen in Munchausen by proxy syndrome are described. Child maltreatment and Munchhausen by proxy syndrome need to be part of the differential diagnosis when the clinical picture is atypical or does not appear medically plausible. abstract_id: PUBMED:32027175 The Perilous Use of Proxy Variables. It is often not stated or quantified how well measured proxy variables account for the variance in latent constructs they are intended to represent. A sensitivity analysis was run using data from the Survey of Health, Ageing and Retirement in Europe to estimate models varying in the degree to which proxy variables represent intended constructs. Results showed that parameter estimates differ substantially across different levels of variable representation. When variables are used with poor construct validity, an insufficient amount of variance is removed from the observed spurious relationship between design variable and outcome. The findings from this methodological demonstration underscore the importance of selecting proxy variables that accurately represent the underlying construct for which control is intended.
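PUBMED:32027175 argues that adjusting for a proxy with poor construct validity removes too little of the spurious association between a design variable and an outcome. The toy simulation below illustrates that point in a deliberately simplified linear-Gaussian setting; it is not the SHARE analysis from the abstract, and every parameter (sample size, effect sizes, reliability levels) is a hypothetical choice.

```python
# Toy simulation of residual confounding when adjusting for a noisy proxy.
# Purely illustrative; parameters are hypothetical and unrelated to PUBMED:32027175's data.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

latent = rng.normal(size=n)                 # latent construct (the true confounder)
x = 0.8 * latent + rng.normal(size=n)       # design variable, partly driven by the construct
y = 1.0 * latent + rng.normal(size=n)       # outcome; the true effect of x on y is zero

def apparent_effect_of_x(proxy):
    """OLS coefficient on x when regressing y on an intercept, x and the proxy."""
    design = np.column_stack([np.ones(n), x, proxy])
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coefs[1]

for reliability in (0.9, 0.7, 0.5, 0.3):
    noise_sd = np.sqrt((1.0 - reliability) / reliability)  # so Var(latent)/Var(proxy) = reliability
    proxy = latent + rng.normal(scale=noise_sd, size=n)
    print(f"proxy reliability {reliability:.1f}: apparent effect of x = "
          f"{apparent_effect_of_x(proxy):+.3f} (true effect is 0)")
```

The weaker the proxy, the larger the spurious effect that survives adjustment, which is the pattern the abstract warns about.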
abstract_id: PUBMED:24411558 Validity and reliability of the Spanish EQ-5D-Y proxy version Introduction: A proxy version of the EQ-5D-Y, a questionnaire to evaluate the Health Related Quality of Life (HRQoL) in children and adolescents, has recently been developed. There are currently no data on the validity and reliability of this tool. The objective of this study was to analyze the validity and reliability of the EQ-5D-Y proxy version. Methodology: A core set of self-report tools, including the Spanish version of the EQ-5D-Y, were administered to a group of Spanish children and adolescents drawn from the general population. A similar core set of internationally standardized proxy tools, including the EQ-5D-Y proxy version, were administered to their parents. Test-retest reliability was determined, and correlations with other generic measurements of HRQoL were calculated. Additionally, known group validity was examined by comparing groups with a priori expected differences in HRQoL. The agreement between the self-report and proxy version responses was also calculated. Results: A total of 477 children and adolescents and their parents participated in the study. One week later, 158 participants completed the EQ-5D-Y/EQ-5D-Y proxy to facilitate reliability analysis. Agreement between the test-retest scores was higher than 88% for the EQ-5D-Y self-report and proxy versions. Correlations with other health measurements showed similar convergent validity to that observed in the international EQ-5D-Y. Agreement between the self-report and proxy versions ranged from 72.9% to 97.1%. Conclusions: The results provide preliminary evidence of the reliability and validity of the EQ-5D-Y proxy version. abstract_id: PUBMED:26509970 Attachment Quality of Young Children with Mentally Ill Parents on the Example of the Munchausen by Proxy Syndrome One of the most discussed questions in clinical literature concerns the impact of child abuse by mentally ill parents (cf. Mattejat, 1998). It is obvious that most children cannot understand such parental behaviour and that this lack of understanding, along with the lack of knowledge about their parents' emotional disorder, results in children's fear, disorientation and uncertainty. The consequences are massive interferences in the relationship between parents and children, who could develop an anxious-resistant insecure or even a disorganized/disoriented attachment. But how does a child react if its parents' behaviour is itself ambivalent and alternates between abuse and care? Such parental behaviour is described as the "Munchausen by Proxy Syndrome". This article considers the effects of the "Munchausen by Proxy Syndrome" on children's attachment development. After discussing the basic assumptions about the "Munchausen by Proxy Syndrome" and attachment theory, we draw conclusions about the syndrome's effect on children's attachment behaviour. abstract_id: PUBMED:15506259 Factitious disorder and factitious disorder by proxy Similar to the adult patient, a child or adolescent may actively feign or produce artificial symptoms (synonymous: Munchausen syndrome). The more frequent case is that the child suffers from being an object of symptom fabrication induced by a close person caring for the child, regularly the mother (Munchausen syndrome by proxy). This review focuses on psychopathological aspects of the clinically more relevant factitious disorder by proxy.
Typical behaviour and personality characteristics are presented that can be taken as clinical warning signs. The doctor-mother interaction is affectively challenging due to conflicting tasks imposed on the physician. Complementary to pediatric exclusion of genuine disease, psychopathological assessment is required to exclude other sources of deviant illness behaviour. Factitious disorder shares particular features (active violation of the child, false report of history, aggravated symptom presentation and increased doctor-hopping, difficulties in confirming the maternal report with biomedical data) with other psychopathological entities (child abuse, simulation, dissociative disorders, somatoform disorders including hypochondria, variants of maternal overprotection and infantilization, psychosis or delusion in the mother). Criteria for differentiation are presented. Three concepts on the psychopathological etiology of factitious disorder by proxy are relevant: In some cases, it may be conceived as a secondary manifestation of a primary psychopathological entity or personality disorder. Learning theory emphasises operant rewards received from the vicarious sick role. Attachment theory provides possible explanations concerning the traumatic impact on the child, early sources of psychopathology in the fabricating mother and risks for intergenerational transmission of factitious disorders. abstract_id: PUBMED:33343006 Identifying Causal Effects With Proxy Variables of an Unmeasured Confounder. We consider a causal effect that is confounded by an unobserved variable, but with observed proxy variables of the confounder. We show that, with at least two independent proxy variables satisfying a certain rank condition, the causal effect is nonparametrically identified, even if the measurement error mechanism, i.e., the conditional distribution of the proxies given the confounder, may not be identified. Our result generalizes the identification strategy of Kuroki & Pearl (2014) that rests on identification of the measurement error mechanism. When only one proxy for the confounder is available, or the required rank condition is not met, we develop a strategy to test the null hypothesis of no causal effect. abstract_id: PUBMED:24416819 Münchausen syndrome by proxy Münchausen syndrome by proxy (also known as factitious disorder by proxy) is a psychiatric disorder which consists of fabricating or inducing illness in a child, usually by the child's mother. The motivation for the perpetrator's behavior is receiving satisfaction from the investigations and treatments that the child receives and from the medical environment, as part of a unique mental disturbance. External incentives for the behavior (such as economic gain) are absent. During the last few years about 20 cases of Münchausen syndrome by proxy were diagnosed in our ward. We describe a few less typical cases of patients with Münchausen syndrome by proxy. In all cases the offender, the mother, had only secondary education and no medical background. The socioeconomic background was variable. Recognition of the syndrome and a high index of clinical suspicion are needed in order to diagnose the disorder. Suspicious signs and symptoms include a prolonged and illogical course of disease, odd complications, exacerbations that appear just before discharge from the hospital and symptoms that occur only in the presence of a specific caregiver. Münchausen syndrome by proxy is not very rare. It appears across all ethnic groups, socioeconomic levels and educational backgrounds.
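PUBMED:33343006 above concerns nonparametric identification of a causal effect when two independent proxies of an unmeasured confounder are available. The sketch below does not reproduce that paper's estimator; it only illustrates the underlying intuition in a linear-Gaussian special case, using the standard errors-in-variables device of treating one proxy as the adjustment variable and the other as its instrument. All structural coefficients and the sample size are hypothetical.

```python
# Linear-Gaussian illustration of the two-proxy idea (not the estimator of PUBMED:33343006).
# One proxy of the unmeasured confounder enters the regression; the second serves as its
# instrument, a classical errors-in-variables correction. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 300_000
true_effect = 0.5

u = rng.normal(size=n)                              # unmeasured confounder
x = 0.8 * u + rng.normal(size=n)                    # exposure
y = true_effect * x + 1.0 * u + rng.normal(size=n)  # outcome
w1 = u + rng.normal(size=n)                         # proxy 1 (adjustment variable)
w2 = u + rng.normal(size=n)                         # proxy 2 (instrument for proxy 1)

ones = np.ones(n)
regressors = np.column_stack([ones, x, w1])

# Naive adjustment: OLS of y on x and the noisy proxy leaves residual confounding.
beta_ols, *_ = np.linalg.lstsq(regressors, y, rcond=None)

# Proxy-as-instrument: instruments [1, x, w2] for regressors [1, x, w1].
instruments = np.column_stack([ones, x, w2])
beta_iv = np.linalg.solve(instruments.T @ regressors, instruments.T @ y)

print(f"true effect of x:            {true_effect:+.3f}")
print(f"OLS adjusting for proxy w1:  {beta_ols[1]:+.3f}  (biased)")
print(f"IV with w2 as instrument:    {beta_iv[1]:+.3f}  (approximately unbiased)")
```

In this simplified setting the second proxy supplies the extra information that a single noisy proxy cannot, which is the intuition behind the rank condition the abstract describes.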
abstract_id: PUBMED:34254262 The use of proxies and proxy-reported measures: a report of the international society for quality of life research (ISOQOL) proxy task force. Aims: Proxy reports are often used when patients are unable to self-report. It is unclear how proxy measures are currently in use in adult health care and research settings. We aimed to describe how proxy reports are used in these settings, including the use of measures developed specifically for proxy reporting in adult health populations. Methods: We systematically searched Medline, PsycINFO, PsycTESTS, CINAHL and EMBASE from database inception to February 2018. Search terms included a combination of terms for quality of life and health outcomes, proxy-reporters, and health condition terms. The data extracted included clinical context, the name of the proxy measure(s) used and other descriptive data. We determined whether the measures were developed specifically for proxy use or were existing measures adapted for proxy use. Results: The database search identified 17,677 possible articles, from which 14,098 abstracts were reviewed. Of these, 11,763 were excluded and 2335 articles were reviewed in full, with 880 included for data extraction. The most common clinical settings were dementia (30%), geriatrics (15%) and cancer (13%). A majority of articles (51%) were paired studies with proxy and patient responses for the same person on the same measure. Most paired studies (77%) were concordance studies comparing patient and proxy responses on these measures. Discussion: Most published research using proxies has focused on proxy-patient concordance. Relatively few measures used in research with proxies were specifically developed for proxy use. Future work is needed to examine the performance of measures specifically developed for proxies. Systematic Review Registration: PROSPERO No. CRD42018103179. Answer: The provided abstracts do not directly address the concept of race as a predictor versus a proxy variable. However, the abstracts do discuss the use of proxy variables in general and the potential issues that can arise from their use. For instance, the abstract from PUBMED:32027175 highlights the importance of selecting proxy variables that accurately represent the underlying construct for which control is intended, as using variables with poor construct validity can lead to an insufficient amount of variance being removed from the observed spurious relationship between the design variable and outcome. This suggests that if race were to be used as a proxy variable, it would be crucial to ensure that it accurately represents the latent construct it is intended to measure. The abstract from PUBMED:33343006 discusses the identification of causal effects with proxy variables of an unmeasured confounder, indicating that with at least two independent proxy variables satisfying a certain rank condition, the causal effect can be nonparametrically identified. This could imply that if race is used as a proxy variable for an unmeasured confounder in a study, careful consideration must be given to the selection and use of these proxies to ensure valid causal inference. The abstract from PUBMED:34254262 reports on the use of proxy measures and highlights that most research using proxies has focused on proxy-patient concordance, with relatively few measures developed specifically for proxy use. 
This suggests that when race is used as a proxy variable, it may often be in the context of comparing it to self-reported measures, and there may be a lack of tools specifically designed to measure race as a proxy. In summary, while the abstracts do not directly discuss race as a predictor versus a proxy variable, they do provide insights into the careful consideration required when using proxy variables in research, which would be applicable to the use of race in this context.
Instruction: Preventive care for women. Does the sex of the physician matter? Abstracts: abstract_id: PUBMED:8332153 Preventive care for women. Does the sex of the physician matter? Background: Emphasis on ensuring women's access to preventive health services has increased over the past decade. Relatively little attention has been paid to whether the sex of the physician affects the rates of cancer screening among women. We examined differences between male and female physicians in the frequency of screening mammograms and Pap smears among women patients enrolled in a large Midwestern health plan. Methods: We identified claims for mammography and Pap tests submitted by primary care physicians for 97,962 women, 18 to 75 years of age, who were enrolled in the health plan in 1990. The sex of the physician was manually coded, and the physician's age was obtained from the state licensing board. After identifying a principal physician for each woman, we calculated the frequency of mammography and Pap smears for each physician, using the number of women in his or her practice during 1990 as the denominator. Using unconditional logistic regression, we also calculated the odds ratio of having a Pap smear or mammogram for women patients with female physicians as compared with those with male physicians, controlling for the physician's and the patient's age. Results: Crude rates for Pap smears and mammography were higher for the patients of female than male physicians in most age groups of physicians. The largest differences between female and male physicians were in the rates of Pap smears among the youngest physicians. For the subgroup of women enrolled in the health plan for a year who saw only one physician, after adjustment for the patient's age and the physician's age and specialty, the odds ratio for having a Pap smear was 1.99 (95 percent confidence interval, 1.72 to 2.30) for the patients of female physicians as compared with those of male physicians. For women 40 years old and older, the odds ratio for having a mammogram was 1.41 (95 percent confidence interval, 1.22 to 1.63). For both Pap smears and mammography, the differences between female and male physicians in screening rates were much more pronounced in internal medicine and family practice than in obstetrics and gynecology. Conclusions: Women are more likely to undergo screening with Pap smears and mammograms if they see female rather than male physicians, particularly if the physician is an internist or family practitioner. abstract_id: PUBMED:25117521 Patient and physician gender concordance in preventive care in university primary care settings. Objective: To assess the quality of preventive care according to physician and patient gender in a country with universal health care coverage. Methods: We assessed a retrospective cohort study of 1001 randomly selected patients aged 50-80 years followed over 2 years (2005-2006) in 4 Swiss university primary care settings (Basel, Geneva, Lausanne, Zürich). We used indicators derived from RAND's Quality Assessment Tools and examined percentages of recommended preventive care. Results were adjusted using hierarchical multivariate logistic regression models. Results: 1001 patients (44% women) were followed by 189 physicians (52% women). Female patients received less preventive care than male patients (65.2% vs. 72.1%, p<0.001). Female physicians provided significantly more preventive care than male physicians (p=0.01) to both female (66.7% vs. 63.6%) and male patients (73.4% vs. 70.7%). 
After multivariate adjustment, differences according to physician (p=0.02) and patient gender (p<0.001) remained statistically significant. Female physicians provided more recommended cancer screening than male physicians (78.4 vs. 71.9%, p=0.01). Conclusions: In Swiss university primary care settings, female patients receive less preventive care than male patients, with female physicians providing more preventive care than male physicians. Greater attention should be paid to female patients in preventive care and to why female physicians tend to provide better preventive care. abstract_id: PUBMED:9140854 Physician gender and women's preventive services. Objective: To determine whether the gender of women's regular physicians, controlling for physician specialty, is associated with women's receiving key preventive services within recommended intervals. Design: Cross-sectional, nationally representative women's health telephone survey conducted by Louis Harris and Associates in February and March 1993 for The Commonwealth Fund. Participants: A total of 2,525 women in the continental United States, > or = 18 years old, including oversamples of African-American and Hispanic women. Main Outcome Measures: Receipt of each of five preventive services (blood pressure reading, Pap smear, cholesterol test, clinical breast examination, and mammogram) within specific periods. Results: Physician gender makes a significant difference for two specialty areas and for three preventive services. Patients of women family or general practitioners are more likely than the patients of men to have received a Pap smear or a blood cholesterol test within the last 3 years, and the patients of women internists are more likely to have received a Pap test. Physician gender is associated with a higher likelihood of mammography, but this finding was limited to patients ages 40-49 of women family or general practitioners. Physician gender does not affect receipt of blood pressure screening or breast examination. Conclusions: Analyses reveal limited evidence that physician gender affects women's receipt of preventive services. Physician specialty appears to be a more powerful predictor of preventive services received. The limited evidence for a physician gender effect, however, is relevant for those women who rely on a family or general practitioner or an internist for regular care. abstract_id: PUBMED:28437218 Patient, Physician, and Practice Characteristics Associated with Cardiovascular Disease Preventive Care for Women. Background: Cardiovascular disease (CVD) is the leading cause of death for American women. Although CVD preventive care has the potential to reduce a significant number of these deaths, the degree to which healthcare providers deliver such care is unknown. The purpose of this study was to identify patient, physician, and practice characteristics that significantly influence the provision of CVD preventive care during ambulatory care visits for female patients. Methods: The National Ambulatory Medical Care Survey datasets from 2005 to 2010 were utilized. The study sample included female patients ≥20 years of age whose healthcare provider performed CVD preventive care and who had visits for a new health problem, a routine chronic problem, management of a chronic condition, and preventive care. Binary logistic regression models estimated the association of patient, physician, and practice characteristics and CVD preventive care; cholesterol testing, body mass index (BMI) screening, and tobacco education. 
Results: Of the 32,009 visits, 15.9% involved cholesterol testing, 50.3% involved BMI screening, and 3.20% involved tobacco education. Obstetricians/gynecologists were less likely to perform cholesterol testing (aOR: 0.39; 95% CI: 0.25-0.61) and tobacco education (aOR: 0.56; 95% CI: 0.32-0.98) than general/family physicians. Conclusion: The delivery of CVD preventive care varied by healthcare provider type, with obstetricians/gynecologists having lower odds of providing two of the three services. The amount of time a physician spent with a patient was a significant predictor for the provision of all three services. These findings demonstrate the need to implement multifaceted approaches to address predicting characteristics of CVD preventive care. abstract_id: PUBMED:36369964 Preventive care visits with OB/GYNs and generalist physicians among reproductive-age women with chronic conditions. Objective: To examine services delivered during preventive care visits among reproductive-age women with and without chronic conditions by physician specialty. Data Sources: National Ambulatory Medical Care Surveys (2011-2018). Study Design: We examined provision of specific services during preventive care visits by physician specialty among reproductive-age female patients, overall and among women with five common chronic conditions (diabetes, hypertension, depression, hyperlipidemia, and asthma). Data Collection/extraction Methods: The sample included preventive visits to OB/GYNs or generalist physicians where the patient was female, age 18-44, and not pregnant. Principal Findings: In OB/GYN preventive visits, reproductive health services were more likely to be provided, while non-reproductive health services were less likely to be provided, both among reproductive-age female patients overall and among those with chronic conditions. For example, Pap tests were provided in 44.5% of OB/GYN preventive visits (95% CI: 40.6-48.4) and in 21.4% of generalist preventive visits (95% CI: 17.2-26.6). Lipid testing was provided in 2.8% of OB/GYN preventive visits (95% CI: 1.7-3.9) and in 30.3% of generalist preventive visits (95% CI: 26.1-34.6). Conclusions: Understanding the full range of care received in preventive visits across settings could guide recommendations to optimize where reproductive-age women with chronic conditions seek care. abstract_id: PUBMED:11119167 Effect of physician and patient gender concordance on patient satisfaction and preventive care practices. Objective: To explore the role of the gender of the patient and the gender of the physician in explaining differences in patient satisfaction and patient-reported primary care practice. Design: Cross-sectional mailed survey [response rate of 71%]. Setting: A large group-model Health Maintenance Organization (HMO) in northern California. Patients/participants: Random sample of HMO members aged 35 to 85 years with a primary care physician. The respondents (N = 10,205) were divided into four dyads: female patients of female doctors; male patients of female doctors; female patients of male doctors; and male patients of male doctors. Patients were also stratified on the basis of whether they had chosen their physician or had been assigned. Measurements And Main Results: Among patients who chose their physician, females who chose female doctors were the least satisfied of the four groups of patients for four of five measures of satisfaction. Male patients of female physicians were the most satisfied.
Preventive care and health promotion practices were comparable for male and female physicians. Female patients were more likely to have chosen their physician than males, and were much more likely to have chosen female physicians. These differences were not seen among patients who had been assigned to their physicians and were not due to differences in any of the measured aspects of health values or beliefs. Conclusions: Our study revealed differences in patient satisfaction related to the gender of the patient and of the physician. While our study cannot determine the reasons for these differences, the results suggest that patients who choose their physician may have different expectations, and the difficulty of fulfilling these expectations may present particular challenges for female physicians. abstract_id: PUBMED:27212087 Preventive Care in Women's Health. Specialists in general obstetrics and gynecology are key providers of primary care in women. They diagnose and provide the initial management of many medical conditions unrelated to reproductive health. Most importantly, they can impact the overall health of patients through incorporating preventive approaches in the annual well-woman visit. This article defines preventive care and identifies leading causes of mortality in women. A framework for identifying key elements of the well-woman examination is summarized. Examples of prevention are provided, which focus on major health care issues that affect adult women. abstract_id: PUBMED:11717570 Physician gender effects on preventive screening and counseling: an analysis of male and female patients' health care experiences. Background: Studies have documented that patients of female physicians receive higher levels of preventive services. However, most studies include patients of only one gender, examine mainly gender-specific screening services, and do not examine patient education and counseling. Objectives: This study tests both physician- and patient-gender effects on screening and counseling services received in the past year and considers effects of gender-matched patient-physician pairs. Research Design: Multivariate analyses are conducted to assess direct and interactive (physician x patient) gender effects and to control for important covariates. Subjects: Data are from the 1998 Commonwealth Fund Survey of Women's Health, a nationally representative sample of U.S. adults. The analytic sample includes 1,661 men and 1,288 women ages 18 and over. Measures: Dependent variables are measures of patient-reported screening and counseling services received, including gender-specific and gender-nonspecific services and counseling on general health habits and sensitive topics. Results: Female physician gender is associated with a greater likelihood of receiving preventive counseling for both male and female patients. For female patients, there is an increased likelihood of receiving more gender-specific screening (OR = 1.36, P <0.05) and counseling (OR = 1.40, P <0.05). These analyses provide no evidence that gender-matched physician-patient pairs provide an additional preventive care benefit beyond the main effect of female physician gender. Conclusions: Female physician gender influences the provision of both screening and counseling services. These influences may reflect physicians' practice and communication styles as well as patients' preferences and expectations. abstract_id: PUBMED:8764518 Physician performance of preventive care for women.
Background: The purpose of this study was to examine whether academic primary care physicians recommended appropriate preventive care to their female patients, whether recommendations differed for different types of physicians, and how often these recommendations led to completion of the preventive service. Methods: A retrospective chart review was conducted in the family medicine and internal medicine outpatient practices at the Medical Center of Delaware (MCD). A total of 324 women ages 40 and over were selected from the patient panels of nine faculty and 14 residents. Physician recommendation of PAP smears, mammograms, breast exams, and influenza immunizations was measured against the guidelines of the U.S. Preventive Services Task Force. Specifically examined was whether rates of recommendation differed by specialty, gender and faculty/resident status of the physician. Also measured was completion of each preventive service and whether rates of completion differed from rates of physician recommendation. Results: Physicians recommended appropriate PAPs in 72 percent of patients, mammograms in 83 percent of patients, breast exams in 46 percent of patients and influenza immunizations in 30 percent of patients. Family physicians were more likely to recommend PAP smears (odds ratio 5.53, 95 percent confidence interval 2.26-12.98), but not more likely to recommend other services; female physicians were more likely to recommend breast exams (odds ratio 3.48, 95 percent confidence interval 2.05-5.89), but not other services. There were no significant differences between faculty and residents. Preventive services were completed in 48 percent of patients for PAPs, 68 percent for mammograms, 42 percent for breast exams, and 27 percent for flu shots. Conclusions: In academic primary care practices at MCD, physicians perform well in recommending PAPs and mammograms, but not in recommending breast exams or influenza immunizations. There were few differences in performance among different types of physicians. Because many women did not follow through with their physicians' recommendations for preventive care, physician performance is more accurately measured according to their recommendations rather than completion of services. Efforts to improve preventive care in women should differ depending on whether the problem lies in physician performance or patient behavior. abstract_id: PUBMED:28290645 Preventive Health Care for Women Who Have Sex with Women. Despite recent shifts in societal attitudes toward same-sex relationships, women who have sex with women face a variety of barriers to optimal health, including a history of negatively perceived interactions in clinical settings that lead them to delay or avoid health care. Women who have sex with women may be at disproportionate risk of obesity, tobacco use, substance use, mental health issues, intimate partner violence, sexually transmitted infections, and some cancers. Disparities can exist throughout the lifetime. Lesbian and bisexual adolescents are vulnerable to bullying, family rejection, and risky sexual behavior that may lead to sexually transmitted infections or unintended pregnancy. Sexual minority stress, which is a response to stigmatization, prejudice, and internalized homophobia, contributes to many of these conditions. Family physicians should foster trust and communication to provide a nonjudgmental, welcoming environment supportive of culturally competent health care and optimal outcomes.
When indicated, clinicians should refer women who have sex with women to culturally sensitive community resources and legal advisors for assistance with medical decision making, hospital visitation, conception, and legal recognition of nonbiologic parents. Answer: The sex of the physician does appear to matter in the provision of preventive care for women. Studies have shown that female physicians tend to provide more preventive care than their male counterparts. For instance, women are more likely to undergo screening with Pap smears and mammograms if they see female rather than male physicians, especially if the physician is an internist or family practitioner (PUBMED:8332153). Additionally, female physicians provided more recommended cancer screening than male physicians in Swiss university primary care settings (PUBMED:25117521). Moreover, the gender of women's regular physicians, controlling for physician specialty, is associated with women's receiving key preventive services within recommended intervals. Patients of female family or general practitioners are more likely than those of male practitioners to have received a Pap smear or a blood cholesterol test within the last 3 years, and patients of female internists are more likely to have received a Pap test (PUBMED:9140854). In the context of cardiovascular disease (CVD) preventive care, the delivery varied by healthcare provider type, with obstetricians/gynecologists having lower odds of providing cholesterol testing and tobacco education compared to general/family physicians (PUBMED:28437218). During preventive care visits, OB/GYNs are more likely to provide reproductive health services, while generalist physicians are more likely to provide non-reproductive health services (PUBMED:36369964). Patient satisfaction and preventive care practices also show differences based on the gender concordance between the patient and physician. Female patients who chose female doctors were the least satisfied among the groups, while male patients of female physicians were the most satisfied. However, preventive care and health promotion practices were comparable for male and female physicians (PUBMED:11119167). Female physicians are associated with a greater likelihood of providing preventive counseling for both male and female patients, and female patients have an increased likelihood of receiving more gender-specific screening and counseling (PUBMED:11717570). In academic primary care practices, female physicians were more likely to recommend breast exams, but there were no significant differences in the recommendation of other services based on the gender of the physician (PUBMED:8764518). Lastly, women who have sex with women may face barriers to optimal health and may be at disproportionate risk of certain health issues.
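Several of the screening abstracts in this record (e.g., PUBMED:8332153) report their central results as odds ratios with 95% confidence intervals. For orientation, the sketch below shows the unadjusted calculation from a 2x2 table with a Wald interval on the log odds ratio; the counts are hypothetical, and the published figures came from logistic regression with covariate adjustment, which this does not reproduce.

```python
# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Counts are hypothetical; the abstracts' ORs were covariate-adjusted via logistic regression.
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """a/b: events/non-events in the exposed group; c/d: the same in the comparison group."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts: Pap smear received (yes/no) for patients of female vs. male physicians.
or_, lo, hi = odds_ratio_with_ci(a=620, b=380, c=450, d=550)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```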
Instruction: Is the Addiction Severity Index a reliable and valid assessment instrument among clients with severe and persistent mental illness and substance abuse disorders? Abstracts: abstract_id: PUBMED:9211041 Is the Addiction Severity Index a reliable and valid assessment instrument among clients with severe and persistent mental illness and substance abuse disorders? Objective: This study examined aspects of reliability, validity and utility of Addiction Severity Index (ASI) data as administered to clients with severe and persistent mental illness (SMI) and concurrent substance abuse disorders enrolled in a publicly-funded community mental health center. Methods: A total of 62 clients with SMI volunteered to participate in an interobserver and test-retest reliability study of the ASI. Spearman-Brown and Pearson correlation coefficients were calculated to examine the extent of agreement among client responses. Results: Overall 16% of the composite scores could not be calculated due to missing data and 31% of the clients misunderstood or confused items in at least one of the seven ASI domains. As a whole, the interobserver reliability of the ASI composite scores for those subjects where sufficient data were available was satisfactory. However, there was more variance in the stability of client responses, with four composite scores producing test-retest reliability coefficients below .65. Conclusion: Evidence from this study suggests that the ASI has a number of limitations in assessing the problems of clients with severe and persistent mental illness, and it is likely that other similar instruments based on the self-reports of persons with severe and persistent mental illness would also encounter these limitations. abstract_id: PUBMED:1588339 More data on the Addiction Severity Index. Reliability and validity with the mentally ill substance abuser. The Addiction Severity Index (ASI) is a semistructured interview that collects data from substance abusers in seven problem areas: medical, employment, legal, alcohol, other drug use, family-social functioning, and psychological status. In each area, the clients provide an estimate of the seriousness of the problem and their need for treatment. It has been demonstrated in a number of studies to be reliable, valid, and useful in monitoring treatment changes and in matching substance-abusing clients to treatments. This study investigates the usefulness of the ASI with male and female substance abusers who suffer a concurrent major psychiatric disorder (N = 152). Data on the independence of the problem scales, their internal consistency, interrater reliability, and concurrent validity are presented. It is concluded that: a) the problem areas are independent from each other, underscoring the need for multidimensional assessment; b) interviewer severity ratings provide information in addition to that provided by clients; c) the reliability of the composite scores is generally adequate, with the exception of the legal and family-social scales; and d) further examination of the employment scores is required. abstract_id: PUBMED:8113777 Severity of substance use disorders among psychiatric inpatients. 
This study compares the severity of DSM-III-R psychoactive substance use disorders (PSUDs) among dually diagnosed psychiatric inpatients with independent axis I mental disorders (IMDs) with the severity of PSUDs among: a) patients from the same hospitals with PSUD-related organic mental syndromes (PSUD-OMD) and b) patients from a residential drug-treatment program with PSUDs but no axis I mental disorders. The drug-use disorders among the group 1 (IMD+PSUD) patients were less severe on multiple indicators from the Structured Clinical Interview for DSM-III-R (SCID) and the Addiction Severity Index compared with the drug-use disorders among the other two groups. This did not hold for alcohol disorders. These findings carry potential implications for treatment strategies for the mentally ill with drug use. abstract_id: PUBMED:15222596 Impacts of motivation for change on the severity of alcohol use by patients with severe and persistent mental illness. Objective: This study examined the effect of motivation for change, measured by the Stages of Change Readiness and Treatment Eagerness Scale (SOCRATES), on alcohol use severity among alcoholic patients with severe and persistent mental illness. Method: At a Veterans Affairs hospital, 390 dually diagnosed patients were recruited and assessed for motivation for change, alcohol use severity, psychotic symptoms and global functioning at baseline and 9-month follow-up. Results: Regression analyses showed that patients who were highly ambivalent about their alcohol use at baseline consumed significantly more alcohol 9 months later, on the basis of Addiction Severity Index ratings (p < or = .01), than patients who felt less ambivalent. Conclusions: The findings suggest that increased awareness of alcohol-related problems is essential to reducing alcohol use severity for alcoholic patients with severe and persistent mental illness. The implications and limitations of the findings are discussed. abstract_id: PUBMED:21459985 Course of smoking and quit attempts among clients with co-occurring severe mental illness and substance use disorders. Objective: Approximately three-quarters of people with severe mental illnesses smoke cigarettes, and the rate is even higher among those with co-occurring substance use disorders. This longitudinal study explored patterns of cigarette use and cessation attempts among mental health clients with co-occurring disorders. Methods: Clients participating in a longitudinal study of co-occurring disorders (N=174) were assessed yearly over 11 years for smoking and smoking cessation behaviors, psychiatric symptoms, and other substance use, as well as social, occupational, and other characteristics. Generalized linear mixed-effects models were used to examine smoking cessation outcomes. Results: Eighty-nine percent of participants were current smokers at baseline. Seventy-five percent of participants tried to quit at least once over the 11 years of the study, although none received nicotine replacement therapy or bupropion. Only 17% were not smoking at the 11-year follow-up. The presence of a chronic general medical condition predicted a longer duration of not smoking in the past year. Being male and having a high school education or higher were associated with more attempts to quit smoking, as were higher scores on the activation subscale of the Brief Psychiatric Rating Scale, more social contact with non-substance-using friends, and more daily activities. 
Conclusions: Although many clients with co-occurring serious mental illness and substance use disorders tried to quit each year, few achieved sustained abstinence and none used evidence-based interventions (nicotine replacement therapy or bupropion). Treatment providers need to administer evidence-based interventions to help people quit successfully. abstract_id: PUBMED:16215196 Substance abuse relapse in a ten-year prospective follow-up of clients with mental and substance use disorders. Objectives: This study addressed the rate and predictors of substance abuse relapse among clients with severe mental illness who had attained full remission from substance abuse. Methods: In a ten-year prospective follow-up study of clients with co-occurring severe mental and substance use disorders, 169 clients who had attained full remission, defined according to DSM-III-R as at least six months without evidence of abuse or dependence, were identified. The Kaplan-Meier survival curve was developed to show the pattern of relapse, and a discrete-time survival analysis was used to identify predictors of relapse. Results: Approximately one-third of clients who were in full remission relapsed in the first year, and two-thirds relapsed over the full follow-up period. Predictors of relapse included male sex, less than a high school education, living independently, and lack of continued substance abuse treatment. Conclusions: After attaining full remission, clients with severe mental disorders continue to be at risk of substance abuse relapse for many years. Relapse prevention efforts should concentrate on helping clients to continue with substance abuse treatment as well as on developing housing programs that promote recovery. abstract_id: PUBMED:18186180 Are the responses of clients with psychiatric and addiction disorders using services for the homeless valid? Objective: To verify the validity of self-reported data on service use from clients with mental or substance abuse disorders in Montreal and Quebec services for homeless individuals. Method: To compare the self-reported data from the Enquête chez les personnes itinérantes (Fournier, 2001) on health service use with official data from Quebec health services (MEDECHO and RAMQ). Results: The analysis shows a moderate-to-high level of concordance between the self-reported and the official data. Almost every item analyzed presents moderate but significant intraclass correlation coefficients for general and psychiatric hospitalization and use of psychiatric medication, but lower and nonsignificant coefficients for medical hospitalization. Participant characteristics such as mental disorders, homeless status, and substance abuse problems do not seem to have an impact on data validity. Conclusions: The answers on health service use from individuals with mental health problems, homeless status, or substance abuse problems are generally valid in the results presented. Thus the self-reported data from these individuals seems to be a generally valid source of data and an affordable one for research on service use or other domains. abstract_id: PUBMED:16215199 Future directions in preventing relapse to substance abuse among clients with severe mental illnesses. The authors review the literature on substance use disorders among persons with severe mental illnesses, including the other papers in this special section on relapse prevention, and suggest future directions.
Although prevention of relapse to substance abuse has a well-developed theoretical and empirical base, this perspective has rarely been applied to persons with co-occurring severe mental illness. Research indicates that clients with co-occurring disorders are highly prone to relapse to substance abuse, even after they have attained full remission. Their risk factors include exacerbations of mental illness, social pressures within drug-using networks, lack of meaningful activities and social supports for recovery, independent housing in high-risk neighborhoods, and lack of substance abuse or dual diagnosis treatments. The evidence in hand suggests several steps: developing healthy and protective environments that are experienced as nurturing of recovery; helping people make fundamental changes in their lives, such as finding satisfying jobs, abstinent friends, networks of people who are in the process of recovery, and a sense of meaning; providing specific and individualized treatments for mental illnesses, substance use disorders, and other co-occurring problems; and developing longitudinal research on understanding and preventing relapse that addresses social context as well as biological vulnerabilities and cognitive strategies. abstract_id: PUBMED:10887764 Use of the AUDIT and the DAST-10 to identify alcohol and drug use disorders among adults with a severe and persistent mental illness. State Psychiatric Hospital outpatients (93 men, 69 women) diagnosed with a serious persistent mental illness (SPMI) completed the Alcohol Use Disorders Identification Test (AUDIT) and Drug Abuse Screening Test (DAST-10) by interview as part of a general health/behavior screening instrument. Responses to the AUDIT and DAST-10 were compared with criteria of current diagnosis and occurrence of symptoms in the last year for both alcohol and drug use disorders, respectively. Results showed that for both diagnosis and symptoms, AUDIT cutpoints of 7 and 8 had good sensitivity and specificity, and DAST-10 cutpoint of 2 was identified for both criteria. These and other findings suggest that both instruments have promising clinical utility when used with individuals diagnosed with an SPMI. abstract_id: PUBMED:30861142 Severity of Substance Use Disorder: Utility as an Outcome in Clinical Settings. Background: Some authors have pointed out the usefulness of the levels of substance use disorder (SUD) as a treatment outcome. However, in order to use this variable as an outcome measure, its impact needs to be addressed within a clinical context. The aim of this study was to analyze the sensitivity of SUD levels as a measure for detecting reliable changes and to make a comparison between the changes in SUD levels detected when using the number of criteria fulfilled and when using the reliable change index (RCI). Methods: The sample consisted of 206 (106 in follow-up) patients diagnosed with abuse/dependence on alcohol and cocaine, according to DSM-IV criteria. The Substance Dependence Severity Scale for DSM-5 was used to determine current alcohol use disorder (AUD) and cocaine use disorder (CUD). Number of DSM-5 criteria fulfilled and RCI were used to determine the change in SUD levels. Results: No association was found between adherence to/abandonment of treatment and AUD severity levels (χ2 = 7.029, p = 0.071) or CUD severity levels (χ2 = 2.044, p = 0.413). 
Statistically significant differences for levels of AUD (z = -3.870, p = 0.000) and CUD (z = -5.382, p = 0.000) were found between baseline assessment and follow-up. According to the number of DSM-5 criteria and RCI, the Kappa coefficient for the change in patient status (improved, worsened, or no change) was k = 0.61 for alcohol patients and k = 0.64 for cocaine patients. The "mild" category showed the greatest inconsistency between both procedures. Conclusions: Levels of SUD are sensitive to the impact of treatment as measured by the difference between the baseline assessment and 3-month follow-up. However, conclusions differ according to whether the DSM-5 criteria or the RCI is applied. Answer: The Addiction Severity Index (ASI) has been examined for its reliability and validity among clients with severe and persistent mental illness (SMI) and concurrent substance abuse disorders, with mixed results. Some studies have found that the ASI can be a reliable and valid tool for assessing the severity of substance abuse in this population. For instance, one study found that the ASI's problem areas are independent of each other, underscoring the need for multidimensional assessment, and that the reliability of the composite scores is generally adequate, with the exception of the legal and family-social scales (PUBMED:1588339). Another study indicated that the ASI composite scores had satisfactory interobserver reliability for subjects where sufficient data were available, although there was variance in the stability of client responses, with four composite scores producing test-retest reliability coefficients below .65 (PUBMED:9211041). However, the same study also reported that 16% of the composite scores could not be calculated due to missing data and 31% of the clients misunderstood or confused items in at least one of the seven ASI domains, suggesting limitations in the ASI's utility for this population (PUBMED:9211041). Additionally, the study comparing the severity of DSM-III-R psychoactive substance use disorders (PSUDs) among different patient groups found that the drug-use disorders among psychiatric inpatients with independent axis I mental disorders were less severe on multiple indicators from the ASI compared with other groups, which may imply that the ASI could be differentially sensitive to substance use severity depending on the population (PUBMED:8113777). In summary, while the ASI has been shown to have some reliability and validity among clients with SMI and substance abuse disorders, there are also significant limitations and variability in its application to this population. It appears that the ASI may not always capture the full complexity of substance abuse issues in clients with severe mental illness, and other factors such as the client's understanding of the items and the presence of missing data can impact the instrument's effectiveness (PUBMED:9211041).
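To make the comparison in PUBMED:30861142 concrete, the sketch below shows one way the two change classifications (shift in DSM-5 criterion count versus the reliable change index) could be derived and their agreement summarized with Cohen's kappa. It is a minimal Python illustration with invented scores, an assumed scale reliability of 0.85 and an arbitrary two-criterion threshold, not the study's actual analysis.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical baseline and 3-month severity scores (e.g., number of DSM-5
# criteria endorsed, 0-11) for 100 patients; not the study's actual data.
baseline = rng.integers(2, 12, size=100)
followup = np.clip(baseline + rng.integers(-5, 3, size=100), 0, 11)

# --- Classification 1: raw change in the number of criteria met -------------
def classify_by_criteria(b, f, threshold=2):
    """Improved / worsened / no change based on a simple criterion-count shift."""
    diff = f - b
    return np.where(diff <= -threshold, "improved",
           np.where(diff >= threshold, "worsened", "no change"))

# --- Classification 2: reliable change index (Jacobson & Truax) -------------
def classify_by_rci(b, f, reliability=0.85):
    """RCI = (post - pre) / SE_diff; |RCI| >= 1.96 counts as reliable change."""
    se_measure = b.std(ddof=1) * np.sqrt(1 - reliability)
    se_diff = np.sqrt(2) * se_measure
    rci = (f - b) / se_diff
    return np.where(rci <= -1.96, "improved",
           np.where(rci >= 1.96, "worsened", "no change"))

status_criteria = classify_by_criteria(baseline, followup)
status_rci = classify_by_rci(baseline, followup)

# Agreement between the two procedures, analogous to the reported kappas.
print("kappa:", round(cohen_kappa_score(status_criteria, status_rci), 2))
```

With real data, a kappa around 0.6 (as reported) would indicate moderate but imperfect agreement, consistent with the abstract's note that the "mild" category is where the two procedures diverge most.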
Instruction: Is there an association between disease ignorance and self-rated health? Abstracts: abstract_id: PUBMED:32999993 Association of lifestyle elements with self-rated wellness and health status in patients with Behcet's disease. Background: Assessment of the association of various lifestyle factors and wellness and health status in patients with Behcet's disease was the main goal of this study. Methods: Demographic information, body mass index, smoking habit, mood status, sleep quality, physical activity levels, nutritional data, symptoms, signs, laboratory findings and patient reported outcome (self-rated wellness and health) in 52 patients with Behcet's disease were collected in this cross-sectional study. A multivariable linear regression model was used to assess the association of self-rated wellness and health status and lifestyle factors, adjusted for age, sex, BMI, major symptoms and signs, as well as laboratory findings. Results: Female to male ratio was 21/31, and the mean age of participants was 44 years. Mean self-rated wellness and health score was 14.6 out of 20. Oral and genital aphthous, ocular involvement, pathergy, and skin involvement were observed in 100, 52, 92, 36.5, and 9.5% of patients, respectively. The mean values of sleep, mood and nutrition quality scores were 17.7 (out of 70), 13.8 (out of 35), and 9 (out of 21), respectively. Univariable regression analysis showed a significant association between sleep quality, mood status, and disease duration, with patients' status in terms of self-rated wellness and health. In multivariable linear regression, sleep quality was the only significant predictive variable associated with self-rated wellness and health. Conclusion: Sleep quality was the most important factor associated with low self-rated wellness and health status in patients with Behcet's disease. abstract_id: PUBMED:34693255 Association of Self-Rated Health With Functional Limitations in Patients With CKD. Rationale & Objective: In patients with chronic kidney disease (CKD), self-rated health ("In general, how do you rate your health?") is associated with mortality. The association of self-rated health with functional status is unknown. We evaluated the association of limitations in activities of daily living (ADLs) with self-rated health and clinical correlates in a cohort of patients with CKD stages 1-5. Study Design: Prospective cohort study. Setting & Participants: Patients with CKD at a nephrology outpatient clinic in western Pennsylvania. Outcome: Patients participated in a survey assessing their self-rated health (5-point Likert scale) and physical (ambulation, dressing, shopping) and cognitive (executive and memory) ADLs. Adjusted analysis was performed using logistic regression models. Analytical Approach: Logistic regression was conducted to examine the adjusted association of 3 dependent variables (sum of total, physical, and cognitive ADL limitations) with self-rated health (independent variable of interest). Results: The survey was completed by 1,268 participants (mean age, 60 years; 49% females, and 74% CKD stages 3-5), of which 41% reported poor-to-fair health. Overall, 35.9% had at least 1 physical ADL limitation, 22.1% had at least 1 cognitive ADL limitation, and 12.5% had at least 3 ADL limitations. Ambulation was the most frequently reported limitation and was more common in patients reporting poor-to-fair self-rated health compared with those with good-to-excellent self-rated health (58.1% vs 17.4%, P < 0.001). 
In our fully adjusted model, poor-to-fair self-rated health was strongly associated with limitations in at least 3 ADLs (total ADL) [OR 8.29 (95% CI, 5.23-13.12)]. There was no significant association of eGFR with ADL limitations. Limitations: Selection bias due to optional survey completion, residual confounding, and use of abbreviated (as opposed to full) ADL questionnaires. Conclusions: Poor-to-fair self-rated health is strongly associated with physical ADL limitations in patients with CKD. Future studies should evaluate whether self-rated health questions may be useful for identifying patients who can benefit from additional evaluation and treatment of functional limitations to improve patient-centered outcomes. abstract_id: PUBMED:24354845 Self-rated health mediates the association between functional status and health-related quality of life in Parkinson's disease. Aims And Objectives: To explore whether self-rated health acts as a potential mediator in the association between functional status and health-related quality of life in Parkinson's disease. Background: Older persons (as most patients with Parkinson's disease are) who reported poor self-rated health compared with those with excellent self-rated health were two-and-a-half times more likely to have experienced a decline in functional ability. Design: Cross-sectional. Methods: Socio-demographic and clinical data of the patients (n = 176) were obtained during a structured interview and from medical records. Functional status was measured with the Unified Parkinson's Disease Rating Scale (total score), self-rated health with the first item of the Short-Form 36-item Health Survey Questionnaire and health-related quality of life with the disease-specific questionnaire called the Parkinson's Disease Quality of Life Questionnaire-39. Multiple linear regression analyses and the Sobel test were employed to assess mediation. Results: Self-rated health seems to have a mediating effect on the association between functional status and health-related quality of life. The Sobel test confirmed an indirect effect of functional status via self-rated health on health-related quality of life and showed a statistically significant indirect effect of functional status on health-related quality of life via self-rated health against the direct route without the mediator. Conclusions: Self-rated health partially mediates the deteriorating effect of functional status on health-related quality of life. Relevance To Clinical Practice: Supportive and adaptation psychosocial intervention programmes leading to restored self-rated health may enhance the quality of life regardless of disability in Parkinson's disease. abstract_id: PUBMED:37754055 Is There an Association between the Use of Social Media and Self-Rated Health? This study sought to determine the association between social media and self-rated health. This study used the 2022 Health Information National Trends Survey for American adults. A statistical analysis was conducted using Chi-square and multivariable logistic regression. Of the 6018 study participants, the majority reported that they were in excellent/very good and good health (82.9%, n = 4930). More than half (58.2%, n = 3268) of the respondents reported that they visited a social media site almost every day in the past 12 months, while 76.8% (n = 4843) reported that they never interacted with people who had similar health or medical issues on social media. 
There was no association between everyday visits to a social media site, interaction with people with similar health/medical issues on social media, or watching a health-related video on social media and self-rated health. Those who had full-time employment were more likely to rate their health as excellent/very good/good (AOR: 2.394, 95% Conf. Int: 1.820-3.149) compared to those with no full-time employment. Marital status, confidence in taking care of oneself, education, and current smoking were associated with self-rated health. This study showed that the use of social media which included the watching of health-related videos was not associated with self-rated health. abstract_id: PUBMED:26483912 The impact of sleep duration on self-rated health. Purpose: To review the association between sleep duration and self-rated health. Methods: A search for original and review articles focusing on sleep duration and self-rated health was performed in PubMed. The general search strategy was [("sleep duration" OR "total sleep time" OR "time in bed") AND "self-rated health"]. Results: We found 22 articles in the English language; 8 articles with no direct association between sleep duration and self-rated health were excluded. Of these articles, 14 were considered potentially relevant and examined in detail, and 9 were excluded for not having self-rated health as the primary outcome. This work was compounded by 5 papers. The extremes of sleep duration (short or long) exhibited an interaction with poor or worse self-rated health. Conclusion: The sleep duration issue should be considered when inquiring about health conditions, as this factor can lead to adverse results in global health status. abstract_id: PUBMED:35694553 Intergenerational reproduction and adult self-rated physical health in China. We used cross-sectional data from the 2017 Chinese General Social Survey to investigate whether and how Chinese parents utilize their socioeconomic resources to facilitate the acquisition of socioeconomic resources by their children that in turn affect the self-rated health of the adult children. We found that father's type of work unit (danwei) and father's membership in the Chinese Communist Party were not independently associated with the self-rated health of survey respondents. Father's education was associated with the self-rated health of women, especially younger women, and self-reported childhood social class was associated with the self-rated health of men and older women, most strongly so for younger men. Two thirds of the association between father's education and self-rated health among younger women was statistically explained by personal socioeconomic resources and almost a quarter of the association between self-reported childhood social class and self-rated health among younger men was statistically explained by personal socioeconomic resources. Our study illuminates the importance of intergenerational reproduction in fostering the good health of Chinese adults, especially for those who grew up after the Chinese economic reform of the 1970s. abstract_id: PUBMED:38205166 Associations of types of physical activity with self-rated physical and mental health in Denmark. The purpose of this article is to analyse the association between types of physical activity and self-rated physical and mental health. The analyses are based on data from a Danish 2020 survey of adults' exercise habits. The questionnaire was answered by 163,131 adults of whom 55 pct. 
were women and relatively evenly distributed by age. The questionnaire included questions about participation in different types of physical activity, self-rate of physical and mental health and sociodemographic and socioeconomic background. The analysis distinguishes between 16 main types of sports and exercise activities, cycling as a mode of transport, and gardening at home. The statistical analysis consists of ordinal regression, where the association between practicing a certain form of physical activity and self-rated health takes into account the adults other physical activity habits and their socio-economic and socio-demographic background. The analyses show, firstly, that the association, for most types of physical activity, is stronger for self-rated physical health than for self-rated mental health. Secondly, there are large differences in the strength of the association between the different types of physical activity. Thirdly, the association is significantly stronger for those who practise an activity type most days of the week than for those who do it less often. Fourthly, the analyses show small gender differences in the association between types of physical activity and self-rated health. Finally, the more different types of activity a person practises during a week, the better the self-rated health, but this primarily applies to self-rated physical health. abstract_id: PUBMED:34933686 Association of lifestyle and disease characteristics with self-rated wellness/health score in patients with rheumatoid arthritis. Objective: To study the relationship of self-rated wellness/health and lifestyle in patients with rheumatoid arthritis. Methods: Self-rated wellness/health, demographics, smoking, mood, sleep, physical activity, diet, symptoms/signs, body mass index and laboratory findings in 142 patients with rheumatoid arthritis were collected in the current cross-sectional study. Multivariable generalized additive model (GAM) was employed to study the association of self-rated wellness/health score and lifestyle factors. Results: Female/male ratio was 116/26 and the mean (SD) age of sample was 52 (13) years. Mean (SD) self-rated wellness/health score out of 10 was 7.2 (1.63). Mean (SD) number of tender joints and swollen joints were 4.42 (4.55) and 4.00 (4.26), respectively. The mean sleep score was 29.5 out of 70. Patients went to bed more than one hour earlier during the weekdays compared to weekends (22:45 vs. 23:52 PM, respectively, p < 0.0001). They also woke up more than one hour earlier during the weekdays compared to the weekends (6:08 vs. 7:20 AM, respectively, p < 0.0001). Their nap duration during weekdays was about half an hour shorter than the nap duration on weekends (19.75 vs. 48.02 minutes, respectively, p < 0.0001). The mean mood and diet scores were 18.5/35 and 22.5/42, respectively. By backward elimination in multivariable regression model (GAM), disease duration, mood, sleep quality, weekdays sleep characteristics (sleep duration, time to go to bed, wake-up time, time to fall asleep and nap duration), and sleep duration on weekends remained in the final model (R2 = 0.225, p = 0.01). Sleep quality, nap duration on weekdays, night sleep duration on weekends and mood status were the significant variables associated with self-rated wellness/health score. Conclusion: In patients with rheumatoid arthritis, the low self-rated wellness/health score was associated with the low sleep quality, long sleep duration on weekends, and long nap duration on weekdays. 
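The multivariable modelling described in PUBMED:34933686 (backward elimination of lifestyle predictors of a self-rated wellness score) can be sketched as follows. This is a simplified stand-in that uses an ordinary linear model rather than the generalized additive model the authors employed, and every variable name and simulated value is an assumption for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 142  # same size as the RA cohort, but all values below are simulated

# Hypothetical lifestyle predictors and a 0-10 self-rated wellness score.
df = pd.DataFrame({
    "sleep_quality":    rng.normal(30, 8, n),
    "mood":             rng.normal(18, 5, n),
    "nutrition":        rng.normal(22, 6, n),
    "disease_duration": rng.normal(8, 5, n),
    "nap_weekday_min":  rng.normal(20, 15, n),
})
df["wellness"] = (4 + 0.06 * df["sleep_quality"] + 0.05 * df["mood"]
                  - 0.01 * df["nap_weekday_min"] + rng.normal(0, 1.2, n))

def backward_eliminate(data, outcome, predictors, alpha=0.05):
    """Drop the least significant predictor until all remaining p-values < alpha."""
    current = list(predictors)
    while current:
        model = smf.ols(f"{outcome} ~ " + " + ".join(current), data=data).fit()
        pvals = model.pvalues.drop("Intercept")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model
        current.remove(worst)
    return None

final = backward_eliminate(df, "wellness", df.columns.drop("wellness"))
print(final.summary().tables[1])
```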
abstract_id: PUBMED:30498369 Association of mitochondrial DNA copy number with self-rated health status. Purpose: In aging adults, mitochondrial dysfunction may be an important contributor. We evaluated the association between mitochondrial DNA (mtDNA) copy number, which is a biomarker for mitochondrial function, and self-rated health status. Patients And Methods: We conducted a cross-sectional study of patients enrolled within the Mayo Clinic Biobank. We utilized the questionnaire and sequence data from 944 patients. We examined the association between mtDNA copy number and self-rated health status with 3 collapsed categories for the latter variable (excellent/very good, good, and fair/poor). For analysis, we used proportional odds models after log-transforming mtDNA copy number, and we adjusted for age and sex. Results: We found the median age at enrollment was 61 years (25th-75th percentile: 51-71), and 64% reported excellent or very good health, 31% reported good health, and 6% reported fair/poor health. Overall, the median mtDNA copy number was 88.9 (25th-75th percentile: 77.6-101.1). Higher mtDNA copy number was found for subjects reporting better self-rated health status after adjusting for age, sex, and comorbidity burden (OR =2.3 [95% CI: 1.2-4.5] for having better self-rated health for a one-unit increase in log-transformed mtDNA copy number). Conclusion: We found that a higher mtDNA copy number is associated with better self-rated health status after adjustment for age, sex, and comorbidity burden. The current study implies that mtDNA copy number may serve as a biomarker for self-reported health. Further studies, potentially including cohort studies, may be required. abstract_id: PUBMED:25617400 Self-Rated Health, Interviewer-Rated Health, and Their Predictive Powers on Mortality in Old Age. Objectives: This study examines the potential use of interviewer-rated health (IRH) as a complementary measure of self-rated health (SRH) through a systematic comparison of their components and mortality-predictive powers in the Chinese elderly population. Methods: This study used a nationwide dataset with more than 12,000 adults aged 65 or older drawn from the 2005 and 2008 waves of the Chinese Longitudinal Health Longevity Survey (CLHLS). Results: Disability, cognitive function, chronic disease conditions, psychological well-being, and health behaviors influenced both the SRH and IRH of Chinese older adults; these factors, especially disabilities, explained a large portion of the association between SRH and mortality. However, the impact of these factors on the association between IRH and mortality was limited. Furthermore, when both SRH and IRH were included in the analytical models, the association between SRH and mortality disappeared, while the association between IRH and mortality still persisted. Discussion: Although there is some difference between IRH and SRH, IRH captures similar health information as SRH and is strongly predictive of mortality independent of SRH; thus, IRH could be a good supplementary measurement for well-adopted SRH. Answer: The provided abstracts do not directly address the association between disease ignorance and self-rated health. Disease ignorance typically refers to a lack of knowledge or awareness about a particular health condition, which is not explicitly mentioned in the abstracts. 
The abstracts focus on various factors that are associated with self-rated health, such as lifestyle elements, functional limitations, functional status, social media use, sleep duration, intergenerational socioeconomic factors, types of physical activity, disease characteristics, mitochondrial DNA copy number, and interviewer-rated health. However, none of the abstracts discuss the concept of disease ignorance in relation to self-rated health. To answer the question about the association between disease ignorance and self-rated health, one would need to look at studies that specifically investigate the impact of health literacy, awareness, and knowledge about diseases on individuals' perceptions of their own health. The abstracts provided do not offer information on this topic.
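Several of the abstracts above report adjusted odds ratios with 95% confidence intervals (for example, the OR of 8.29 for ADL limitations in PUBMED:34693255). The sketch below shows how such figures are typically obtained by exponentiating logistic regression coefficients; the data are simulated and the variable names are placeholders, so the output will not reproduce the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000

# Simulated stand-in data: a binary exposure (poor/fair self-rated health)
# and a binary outcome (>= 3 ADL limitations), with age and sex as covariates.
df = pd.DataFrame({
    "poor_srh": rng.integers(0, 2, n),
    "age":      rng.normal(60, 12, n),
    "female":   rng.integers(0, 2, n),
})
logit_p = -3 + 2.1 * df["poor_srh"] + 0.02 * (df["age"] - 60)
df["adl_limited"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("adl_limited ~ poor_srh + age + female", data=df).fit(disp=False)

# Adjusted odds ratios and 95% CIs come from exponentiating the coefficients
# and the bounds of their confidence intervals.
or_table = pd.DataFrame({
    "OR":    np.exp(model.params),
    "2.5%":  np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
}).drop("Intercept")
print(or_table.round(2))
```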
Instruction: Minimal head trauma in children revisited: is routine hospitalization required? Abstracts: abstract_id: PUBMED:9521936 Minimal head trauma in children revisited: is routine hospitalization required? Objective: Children with a question of occult head injury are routinely hospitalized despite having both normal central nervous system (CNS) and computed tomographic (CT) scan examinations. We determined the incidence of significant CNS morbidity after occult head injury to determine whether or not hospital admission was necessary in children after minimal head trauma. Methods: We reviewed the records of children admitted to a level I trauma center with a question of closed head injury, an initial Glasgow Coma Scale equal to 15, a normal neurologic exam, and a normal head CT scan. Children with associated injuries requiring admission were excluded. The endpoints were deterioration in CNS exam, new CT findings, and the need for a prolonged hospital stay. Results: Sixty-two patients were studied with a mean age of 7 years (range, 1 month to 15 years), and 65% were male. The primary mechanisms of injury were fall (45%) and vehicular crash (23%). The mean injury severity score was 4 +/- 2. The mean length of stay was 1.2 days (range, 1 to 3 days). Prolonged hospitalization occurred in 9 patients (15%). No child developed significant CNS sequelae warranting hospital admission. Total charges for these hospitalizations were $177 874. Conclusions: Children undergoing emergency department work-up of occult head injury, who have a normal CNS exam and a normal head CT scan, do not seem to be at risk for significant CNS sequelae. These patients can be discharged home with parental supervision and avoid unnecessary and costly hospitalization. abstract_id: PUBMED:2710191 Variations in rates of hospitalization of children in three urban communities. Hospitalization accounts for a large portion of the expenditures for child health care, and differences in the rate of hospitalization may produce important variations in the cost of that care. We studied the rates of hospitalization in Boston, Rochester (N.Y.), and New Haven (Conn.) in 1982. We assigned the risk of hospitalization in Rochester a score of 1.00. Boston children were hospitalized at more than twice the rate of Rochester children for most medical diagnostic categories (relative risk, 2.65; 95 percent confidence interval, 2.53 to 2.78), and the rate for the New Haven group was intermediate (relative risk, 1.80; 95 percent confidence interval, 1.68 to 1.93). Rates of inpatient surgery differed less (Boston relative risk, 1.12; New Haven relative risk, 0.93). The relative risks of hospitalization (as compared with Rochester children) for Boston and New Haven children, respectively, were 3.8 and 2.3 for asthma, 6.1 and 2.9 for toxic ingestions, and 2.6 and 2.7 for head injuries. Fractures of the femur, appendicitis, and bacterial meningitis (conditions uniformly treated in the hospital) had similar rates of hospitalization across the three cities, but the relative risk of hospitalization for aseptic meningitis was 3.7 in Boston. The rates of hospitalization of children in all three communities were below the national averages in 1982. Although this study does not define the reasons for the variation in rates of hospitalization, it is possible that they were related in part to differences in socioeconomic status or access to primary care. 
The implications of these data for the cost and quality of pediatric care therefore remain to be determined. abstract_id: PUBMED:12671807 Serious winter sport injuries in children and adolescents requiring hospitalization. To describe the epidemiology of serious winter sports-related injuries resulting in hospitalization in children and adolescents, we prospectively collected and analyzed records of all winter sports-related injuries requiring hospitalization at our respective institutions from 1996 to 2000. We identified 101 patients with a mean age of 10.7 years, of whom 68% were male and 32% female. Skiers accounted for 71% and snowboarders 26% of injuries; 3% of injuries were snowmobile- or luge-related. Leading mechanisms of injury were ground level falls (50%), crashes into trees (18%), and falls from ski lift (13%). Approximately 26% of the patients met criteria for trauma-team activation. Leading diagnoses were head injury (20%), femur fracture (18%), and concussion (11%). Five patients required admission to the intensive care unit. The median injury severity score (ISS) was 7; 8% of patients had an ISS score > 15. Abbreviated Injury Scale-1990 scores of > or = 3 were noted in 34% of the patients. We conclude that injuries sustained through participation in winter sports may be severe, especially in the case of injuries involving the head. Injury-prevention strategies should focus on head injuries. abstract_id: PUBMED:27444293 Evaluation of the necessity of hospitalization in children with an isolated linear skull fracture (ISF). Objective: The prevalence of skull fractures after mild head trauma is 2 % in children of all ages and 11 % in children younger than 2 years. The current standard management for a child diagnosed with an isolated skull fracture (ISF), in our institute, is hospitalization for a 24-h observation period. Based on data from the literature, less than 1 % of all minor head injuries require neurosurgical intervention. The main objective of this study was to evaluate the risk of neurological deterioration of ISF cases, in order to assess the need for hospitalization. Methods: We reviewed the medical charts of 222 children who were hospitalized from 2006 to 2012 with ISF and Glasgow Coma Scale-15 at the time of arrival. We collected data regarding demographic characteristics, mechanism of injury, fracture location, clinical symptoms and signs, need for hospitalization, and need for repeated imaging. Data was collected at three time points: at presentation to the emergency room, during hospitalization, and 1 month after admission, when the patients' parents were asked about the course of the month following discharge. Results: None of the 222 children included in the study needed neurosurgical intervention. All were asymptomatic 1 month after the injury. Two children underwent repeated head CT due to persistence or worsening of symptoms; these CT scans did not reveal any new findings and did not lead to any intervention whatsoever. Conclusion: Children arriving at the emergency room with a minor head injury and isolated skull fracture on imaging studies may be considered for discharge after a short period of observation. Discharge should be considered in these cases provided the child has a reliable social environment and responsible caregivers who are able to return to the hospital if necessary.
Hospital admission should be reserved for children with neurologic deficits, persistent symptoms, suspected child abuse, or when the parent is unreliable or is unable to return to the hospital if necessary. Reducing unnecessary hospitalizations can prevent emotional stress, in addition to saving costs for the child's family and the health care system. abstract_id: PUBMED:32237704 Rural-urban differences in children’s musculoskeletal injuries in north-eastern Poland Objective: To investigate rural-urban differences in severity of injuries, in terms of age, sex and duration of hospitalization of injured children in northeastern Poland. Methods: The study included children admitted to the Department of Pediatric Orthopaedics and Traumatology between 2002-2005 (1322 patients) and 2012-2013 (1725 patients) due to injury. We applied the rural-urban division on the basis of the number of inhabitants (rural – below 25,000, urban – over 25,000). We divided injuries into four categories based on severity. Statistical analysis was performed to compare the risk of hospitalization due to different types of injuries among children in different age and sex groups, living in rural and urban areas. Results: Children from urban areas have a higher risk of hospitalization compared to their peers living in rural areas due to the two least severe categories of injuries. Children from rural areas had a much higher risk of hospitalization due to the most severe injuries. Results were consistent for both time intervals. The average duration of hospitalization was significantly longer among rural populations. However, in both rural and urban areas the mean length of stay has been shortened in comparison with the years 2002-2005. The average age of urban patients has been reduced over the years and remained stable among the rural population. In both groups boys comprised the majority of patients with all kinds of injuries. Conclusions: Analysis of the risk of hospitalization among children from rural and urban areas brings constructive conclusions only after applying the division based on severity of injuries. Children living in rural areas and small towns are at a higher risk of suffering severe injuries that pose a risk of permanent disability, and of longer hospitalization because of them, compared to children living in larger cities. abstract_id: PUBMED:28406787 Factors Influencing Helmet Use, Head Injury, and Hospitalization Among Children Involved in Skateboarding and Snowboarding Accidents. Context: Up to 75% of skateboarders and snowboarders admitted to the hospital sustain head injuries. It is unclear why not all children and teenagers wear helmets while snowboarding and skateboarding given the protection they afford. Objectives: To report on the prevalence of, and factors associated with, skateboarding and snowboarding in injured children and to explore factors that influence helmet use, head injury, and hospitalization in this sample. Design: A cross-sectional study of skateboard- and snowboard-associated injuries from 2003 to 2012 among individuals younger than age 18 years using National Electronic Injury Surveillance System (NEISS) data from approximately 100 hospitals. Main Outcome Measures: Helmet use, head injury, and hospitalization. Results: Of 1742 patients in the study, 852 (48.9%) and 890 (51.1%) were skateboarders and snowboarders, respectively. Overall, 907 (52.1%) did not use helmets, and 704 (40.4%) sustained head injuries.
Multiple logistic regression analysis showed that age, race/ethnicity, location of boarding, and engaging in skateboarding influenced helmet use. Sex, race/ethnicity, helmet use, and skateboarding predicted head injury. Age, sex, skateboarding, and head injury predicted hospital admission. Conclusion: Statistically significant differences exist in helmet use, head injury, and hospitalization rates between skateboarders and snowboarders. Our findings suggest that injury prevention and outreach programs are needed to increase helmet use and reduce the risk of head injury and hospitalization in skateboarders and other at-risk groups. Further studies are needed to clarify the association between race/ethnicity and helmet use among skateboarders and snowboarders. abstract_id: PUBMED:22673948 Incidence of hospitalization for traumatic brain injury in children and adolescents (Valencia Community, Spain, 2002-2009) INTRODUCTION. Traumatic brain injuries (TBI) are a major cause of morbidity and mortality in children and adolescents but there are hardly any studies on the incidence and temporal evolution. AIM. To describe recent trends (2002-2009) in the incidence of hospitalization for TBI in children and adolescents in the region of Valencia. PATIENTS AND METHODS. Emergency admissions were identified in hospitals in the Valencian Health Agency from patients aged 0-19 years with a diagnosis of TBI (codes of the International Classification of Diseases 800, 801, 803, 804 and 850 to 854) during 2002 to 2009. The severity was classified using the fifth digit of these codes and the crude and standardized rates per 100,000 children were estimated stratified by age, sex and severity. RESULTS. From 2002 to 2009 a total of 5,504 TBI in children up to age of 19 years were hospitalized (mild: 92.9%; moderate to severe: 7.1%). In-hospital mortality was 0.6% for mild TBI and 15.7% for moderate-severe. Crude rates of mild head injury per 100,000 children fell from 85.9 to 55.4 in 2002-2009 (boys: 114.1 to 68.3, girls: 56.1 to 41.8), especially in the 15-19 years. For moderate-severe TBI, rates decreased from 5.73 to 2.78 per 100,000 in 2002-2009 (boys: 8.69 to 3.67; girls: 2.59 to 1.84). CONCLUSIONS. The incidence of pediatric TBI in the Valencia region has decreased significantly in the period 2002-2009, but their medical, legal, societal and family consequences still represents a substantial burden. abstract_id: PUBMED:22405478 Incidence of hospitalization due to child maltreatment in Taiwan, 1996-2007: a nationwide population-based study. Objectives: Little is known regarding the epidemiology of child maltreatment in Asian countries. This study aimed to examine the incidence of hospitalization coded as due to child maltreatment in Taiwan. Methods: We used inpatient claims data of the National Health Insurance for the years 1996 through 2007 for estimation. Hospitalization of children aged 0-12 years with an ICD-9-CM code 995.5 "maltreatment syndrome" or E960-E969 "assault" in the main or secondary discharge diagnosis in claims data was defined as hospitalization due to child maltreatment. Poisson regression was used to examine the trends. Results: A total of 1,726 children with 1,780 hospitalizations due to child maltreatment were identified during 1996-2007. The averaged annual incidence over 1996-2007 was 13.2/100,000 for infants, 3.5/100,000 for children aged 1-3 years, 2.1/100,000 for children aged 4-6 years and 3.3/100,000 for children aged 7-12 years. 
The incidence increased most among the infant group, from 4.8/100,000 in 1996-1997 to 25.4/100,000 in 2006-2007, a greater than 5-fold increase. The magnitude of the increase in incidence decreased with age, and no change in incidence was noted among children aged 7-12 years. The drastic increase in incidence in infants since 2003 corresponds with the introduction of ICD-9-CM code 995.55 "shaken infant syndrome" in Taiwan in 2003. Head injury was the main type of injury in all age groups. Conclusions: The incidence of hospitalization coded as due to child maltreatment in Taiwan differed by age group. Infants had the highest and the largest increase in incidence during the past decade. The increase might be due to improvement in the awareness, competence and willingness of hospital medical staff to diagnose child maltreatment during the past decade after the revision of the Child Welfare Law in 1993 and the new ICD-9-CM code for "shaken infant syndrome" being used since 2003. Practice Implications: Physicians and nursing staff should be aware of and provide the detailed and specific information needed for proper discharge diagnosis coding, from which valid estimation of the incidence of hospitalization due to child maltreatment can be made. Administrative data should include both the N-code and E-code for each case of child maltreatment hospitalization to provide more complete information. abstract_id: PUBMED:31251858 The regional epidemiological characteristics of craniocerebral injury in children in Russia in 2003-2014 Craniocerebral injury is a global problem for health care and society. The fatal and disabling outcomes of this type of trauma result in significant social and economic losses. To develop effective measures to reduce these losses, epidemiological studies are needed that take gender, age, regional and other characteristics into account. The purpose of the study was to analyze the regional epidemiological characteristics of craniocerebral injury in the child population of Russia. The study was based on data from state statistical reports for Russia and its regions in 2003-2014. Over the analyzed period, the incidence of craniocerebral injury in children peaked in 2010 (6.3 per 1,000 children) and then declined to its initial level (5.4 per 1,000 children) in 2014. The proportion of infants under one year of age among craniocerebral injury deaths doubled, against the background of a steady decrease in craniocerebral injury mortality among children aged 0 to 17 years. In 2014, every ninth child who died of head trauma was under 1 year of age. In Russia, the rate of hospitalization of children with craniocerebral injury decreased overall, whereas hospitalized morbidity among infants in the first year of life increased by up to 30%. Current statistical reporting in the Russian Federation does not allow a full evaluation of the true levels of mortality, morbidity and hospitalized morbidity. This is due to limitations on including additional nosological forms in state statistical reporting and to the way the main disease or leading cause of death is coded. The study established significant regional specificity of craniocerebral injury in children, which determines the necessity of developing effective measures that take the established epidemiological characteristics into account.
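The incidence figures in PUBMED:22673948, PUBMED:22405478 and PUBMED:31251858 rest on two routine calculations: crude rates per 100,000 children and Poisson regression for time trends. A minimal sketch, with made-up counts and population sizes rather than the registry data those studies used, might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative (invented) yearly hospitalization counts and population at risk.
df = pd.DataFrame({
    "year":       np.arange(2002, 2010),
    "cases":      [430, 410, 395, 370, 360, 340, 320, 300],
    "population": [500_000] * 8,
})

# Crude incidence per 100,000 children per year.
df["rate_per_100k"] = df["cases"] / df["population"] * 100_000
print(df[["year", "rate_per_100k"]])

# Poisson regression for the time trend, with log(population) as an offset;
# exp(coefficient on year) is the annual rate ratio, i.e. the relative change per year.
model = smf.glm("cases ~ year", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["population"])).fit()
print("annual rate ratio:", round(np.exp(model.params["year"]), 3))
```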
abstract_id: PUBMED:10461069 Immediate posttraumatic seizures: is routine hospitalization necessary? Objective: A recent Internet survey of pediatric neurosurgeons showed that 86% routinely admitted children with immediate posttraumatic seizures (PTS) for a brief period of observation. We wished to determine whether certain children meeting predefined criteria could instead be safely discharged from the emergency room. Methods: We reviewed the records of children admitted during the past 5 years with a diagnosis of seizure and head injury. Children with a minor head injury, a PTS occurring within 24 h of injury and no intracranial abnormalities on admission CT scan were included. Children with previous neurological conditions, a history of prior seizures (other than PTS or febrile seizures), a prior history of anticonvulsant use, or intracranial abnormalities on the admission CT scan were excluded. Records were abstracted for child's age, gender, length of admission, previous history of PTS or febrile seizures, mechanism of injury, location of impact, time between impact and PTS, the number, length and type of PTS, Glasgow Coma Score (GCS) on admission, subsequent complications and hospital costs. Results: Seventy-one children met the inclusion criteria. Eleven children presented to the emergency room with prolonged seizures, transient apnea or persistently low GCS and required admission to the intensive care unit (ICU). Among the 60 remaining children with simple PTS, none had further seizures during the follow-up period, and none had significant complications. The average cost of hospitalization was known for 58 children; after excluding the costs for 5 patients who were admitted to the ICU, the average hospital cost amounted to USD 1,615 per patient. Conclusions: Our data suggest that children with isolated minor head injuries and simple PTS who recover fully in the emergency room, whose CT scans show no intracranial abnormalities and who have no prior history of neurological disease, epilepsy or anticonvulsant use are at low risk for recurrent seizures or neurological complications, and could potentially be sent home to a reliable caretaker and a stable home situation. However, because of the limited sample size in this study, the statistical risk of a bad outcome may be as high as 9%; we therefore suggest that much larger studies are potentially needed before this becomes a standard policy. Answer: The question of whether routine hospitalization is required for children after minimal head trauma has been revisited in the study by PUBMED:9521936. The study reviewed records of children admitted to a level I trauma center with a question of closed head injury, an initial Glasgow Coma Scale equal to 15, a normal neurologic exam, and a normal head CT scan. The results showed that no child developed significant CNS sequelae warranting hospital admission, suggesting that children with a normal CNS exam and a normal head CT scan after minimal head trauma do not seem to be at risk for significant CNS sequelae and can be discharged home with parental supervision, avoiding unnecessary and costly hospitalization. Additionally, PUBMED:27444293 evaluated the necessity of hospitalization in children with an isolated linear skull fracture (ISF). The study concluded that children with minor head injury and ISF on imaging studies may be considered for discharge after a short period of observation, provided they have responsible caregivers who can return to the hospital if necessary. 
Hospital admission should be reserved for children with neurologic deficits, persistent symptoms, suspected child abuse, or when the parent is unreliable. These findings suggest that routine hospitalization may not be necessary for all cases of minimal head trauma in children, especially when the child has a normal CNS exam and head CT scan, or in cases of ISF with a reliable social environment. Reducing unnecessary hospitalizations can prevent emotional stress and save costs for the child's family and the health care system. However, it is important to note that each case should be evaluated individually, and hospital admission should be considered for children with certain risk factors or complications.
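The 9% worst-case risk quoted in PUBMED:10461069 illustrates a general point: when zero adverse events are observed in a small series, the plausible upper bound on the true risk is still well above zero. The sketch below computes an exact (Clopper-Pearson) upper confidence limit and the "rule of three" approximation for a hypothetical zero-event series of 60 children; the exact bound depends on the denominator and confidence level chosen, which presumably explains the higher figure quoted by the authors.

```python
from scipy.stats import beta

def exact_upper_bound(events, n, conf=0.95):
    """Clopper-Pearson upper confidence limit for a binomial proportion."""
    alpha = 1 - conf
    return beta.ppf(1 - alpha / 2, events + 1, n - events)

n_children = 60   # children with simple posttraumatic seizures in the series
events = 0        # no recurrent seizures or complications observed

print(f"Exact 95% upper bound: {exact_upper_bound(events, n_children):.1%}")
print(f"Rule-of-three approximation (3/n): {3 / n_children:.1%}")
```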
Instruction: Internet gambling: an emerging concern in family practice medicine? Abstracts: abstract_id: PUBMED:16621919 Internet gambling: an emerging concern in family practice medicine? Background: Gambling problems are rarely brought to the attention of family practice physicians, but pathological gambling can be associated with poor emotional and physical health. Recent availability and easy accessibility of Internet gambling may be associated with pathological gambling status, and Internet gambling may confer health risks. Objective: This study evaluated the prevalence of Internet gambling, its association with pathological gambling, and the relationship between Internet gambling and health status among patients attending medical and dental clinics. Methods: The South Oaks Gambling Screen and Short Form 12 were administered to 1414 adults in waiting areas of clinics. Results: Only 6.9% of respondents reported ever gambling on the Internet, with 2.8% indicating frequent Internet wagering. Almost two-thirds (65.9%) of regular Internet gamblers were classified as probable pathological gamblers, compared with 29.8% of ever-Internet gamblers and 7.6% of non-Internet gamblers. Internet gambling was associated with poor mental and physical health, and this association remained significant even after controlling for age, gender, site and pathological gambling status. Conclusions: These data suggest that Internet gambling is linked to pathological gambling and is independently associated with poor health. Family practice physicians should consider referring patients who gamble on the Internet for further treatment. abstract_id: PUBMED:29331973 Family-focused practices in addictions: a scoping review protocol. Introduction: Families are significantly impacted by addictions and family involvement in treatment can reduce the harms and can also improve treatment entry, treatment completion and treatment outcomes for the individual coping with an addiction. Although the benefits of family-focused practices in addictions have been documented, services continue to have an individual focus and research on this topic is also limited. The objective of this study is to map the extent, range and nature of evidence available examining family interventions in addictions and identify gaps to guide future research, policy and practice. Methods And Analysis: This is a scoping review using the five-stage framework developed by Arksey and O'Malley. We will include published and unpublished empirical studies focusing on any type of family interventions in addiction treatment between 2000 and the present in English or French. A reviewer will search for literature that meets the inclusion criteria through the following electronic databases: MEDLINE, PsycINFO and Social Services Abstracts. For a comprehensive search, we will also hand-search reference lists, web sites and key journals. Data will be charted and sorted using a thematic analysis approach. Ethics And Dissemination: This review will be the first to examine all forms of family-focused practices for both substance use and problem gambling treatment for adults. It will provide information about existing service provisions and gaps in practice. This review can be used to start moving towards the development of best practices for families in addiction treatment. The results will be disseminated through a peer-reviewed journal and at mental health and addiction conferences. 
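The association reported in PUBMED:16621919 between Internet gambling frequency and probable pathological gambling (65.9% of regular Internet gamblers vs 29.8% of ever-gamblers vs 7.6% of non-gamblers) is the kind of comparison usually tested with a chi-square test on the group-by-status contingency table. The counts below are invented to loosely echo those proportions and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are Internet-gambling groups, columns are
# probable pathological gamblers vs. not, per the SOGS screen.
#                 path.   not path.
table = np.array([[  26,       14],   # regular Internet gamblers
                  [  17,       41],   # occasional Internet gamblers
                  [ 100,     1216]])  # non-Internet gamblers

chi2, p, dof, expected = chi2_contingency(table)
rates = table[:, 0] / table.sum(axis=1)
print("pathological gambling rate by group:", np.round(rates, 3))
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```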
abstract_id: PUBMED:32124255 Your losses are mine: The influence of empathic concern on evaluative processing of others' outcomes. Neural responses to others' decision-making outcomes can be modulated by many social factors. Using the event-related potential (ERP) technique, we explored the neural mechanisms of empathic concern modulating evaluative processing of others' outcomes. Participants were asked to perform a gambling task for three beneficiaries: themselves and two strangers. One stranger was an economically underprivileged student requiring help (high-empathy condition); the other stranger was a student with no upsetting information to induce empathic concern (low-empathy condition). ERP results showed that the valence effect of the feedback-related negativity (FRN) was larger when participants exhibited high empathic concern than when they did not. The FRN responses to strangers' outcomes in the high-empathy condition were as strong as those to their own outcomes. The P300 showed no differences between the low- and high-empathy conditions. These findings indicate that empathic concern could modulate the early stage of outcome processing, implying empathic emotional/altruistic motivational impacts of others' outcomes. abstract_id: PUBMED:20349270 Family influences on adolescent gambling behavior: a review of the literature. Gambling, and gambling related problems, are recognized as an emerging health issue among adolescents. Adolescent gambling is associated with numerous individual, social, and family characteristics. This paper provides a review of 21 empirical studies published between 1997 and 2008 that examine family influences on adolescent gambling behavior. Family influences on gambling behavior are conceptualized in the following five domains: (1) family sociodemographic factors, (2) general family climate, (3) family members' attitudes and behaviors, (4) parenting practices, and (5) family relationship characteristics. Based on the review of extant literature, there is empirical evidence that family characteristics are associated with adolescent gambling and further examination of family system influences on gambling behavior is needed. Gaps in the current literature and recommendations for future research are discussed to help inform the study of family influences on adolescent gambling behavior. abstract_id: PUBMED:28101835 Relationships Between Perceived Family Gambling and Peer Gambling and Adolescent Problem Gambling and Binge-Drinking. The study systematically examined the relative relationships between perceived family and peer gambling and adolescent at-risk/problem gambling and binge-drinking. It also determined the likelihood of at-risk/problem gambling and binge-drinking as a function of the number of different social groups with perceived gambling. A multi-site high-school survey assessed gambling, alcohol use, presence of perceived excessive peer gambling (peer excess-PE), and family gambling prompting concern (family concern-FC) in 2750 high-school students. Adolescents were separately stratified into: (1) low-risk, at-risk, and problem/pathological gambling groups; and, (2) non-binge-drinking, low-frequency-binge-drinking, and high-frequency-binge-drinking groups. Multinomial logistic regression showed that relative to each other, FC and PE were associated with greater likelihoods of at-risk and problem/pathological gambling. However, only FC was associated with binge-drinking. 
Logistic regression revealed that adolescents who endorsed either FC or PE alone, compared to no endorsement, were more likely to have at-risk and problem/pathological gambling, relative to low-risk gambling. Adolescents who endorsed both FC and PE, compared to PE alone, were more likely to have problem/pathological gambling relative to low-risk and at-risk gambling. Relative to non-binge-drinking adolescents, those who endorsed both FC and PE were more likely to have low- and high-frequency-binge-drinking compared to FC alone or PE alone, respectively. Family and peer gambling individually contribute to adolescent at-risk/problem gambling and binge-drinking. Strategies that target adolescents as well as their closely affiliated family and peer members may be an important step towards prevention of harm-associated levels of gambling and alcohol use in youths. abstract_id: PUBMED:33443823 What's new in addiction medicine The development of research, clinical practice and prevention in addiction medicine is, compared to other disciplines, more influenced by socio-political factors. Research on psychedelics has thus long been hampered by prohibitive policy but has recently been revived thanks to the relaxation of these political positions. The extensive prescription of baclofen for alcohol addiction is mainly the consequence of popular support and has so far not been clearly supported by specific research. The aim of the new Gambling Act was to harmonize the regulatory provisions for lotteries and betting on the one hand and casino gambling on the other, while at the same time opening up the market for online gambling. abstract_id: PUBMED:15570330 Lifestyle screening: development of an acceptable multi-item general practice tool. Aims: To develop a short screening tool for lifestyle and mental-health risk factors that adults can self-administer, and to determine acceptability and feasibility of use of this tool in primary care settings. Methods: The multi-item tool was designed to screen patients in rural and urban New Zealand general practices for smoking, alcohol and drug misuse, problem gambling, depression, anxiety, abuse, anger, sedentary lifestyle, and weight issues. Patients were offered help for identified risk factors. Fifty consecutive adult patients per practice (n=2,543) were recruited to participate from 20 randomly-selected urban general practitioners; 20 general practice nurses and 11 rural general practitioners. Results: Patients came from diverse ethnic, geographical, and socioeconomic backgrounds. The sample prevalence of positive responses identified ranged from 2.8% (gambling) to 42.7% (depression). The number of patients requesting immediate assistance with these responses (0.5 to 13.5%) did not overwhelm clinicians. The tool was well accepted by patients, with few objections to specific questions (0.1-0.8%). Most practitioners stated they will use the screening tool once available. Conclusions: Screening for lifestyle and mental health risk factors is becoming increasingly important in primary health care. This screening tool was acceptable to patients and was not considered overly burdensome by practitioners. abstract_id: PUBMED:26527482 Problem Gambling Family Impacts: Development of the Problem Gambling Family Impact Scale. Although family members of problem gamblers frequently present to treatment services, problem gambling family impacts are under-researched. 
The most commonly endorsed items on a new measure of gambling-related family impacts [Problem Gambling Family Impact Measure (PG-FIM: Problem Gambler version)] by 212 treatment-seeking problem gamblers included trust (62.5 %), anger (61.8 %), depression or sadness (58.7 %), anxiety (57.7 %), distress due to gambling-related absences (56.1 %), reduced quality time (52.4 %), and communication breakdowns (52.4 %). The PG-FIM (Problem Gambler version) was comprised of three factors: (1) financial impacts, (2) increased responsibility impacts, and (3) psychosocial impacts with good psychometric properties. Younger, more impulsive, non-electronic gaming machine (EGM) gamblers who had more severe gambling problems reported more financial impacts; non-EGM gamblers with poorer general health reported more increased responsibility impacts; and more impulsive non-EGM gamblers with more psychological distress and higher gambling severity reported more psychosocial impacts. The findings have implications for the development of interventions for the family members of problem gamblers. abstract_id: PUBMED:37718195 The psychosocial rehabilitation in addiction medicine Objectives: Addictive behaviors constitute complex behaviors that are usually related to social habits, such as substance use, gambling or gaming activities, or sexual or physical activity. They progressively overrun and stifle the routine habits of the concerned individuals, for example within their occupational or family spheres of life, as well as in their hobbies or in their main physiological functions, such as sleep or eating cycles. The rehabilitation approach of care integrates the objective of restoring the altered habits and functional rhythms, to optimize the clinical outcomes and improve quality of life of the concerned persons. Methods: Using a focused and narrative literature review, we aimed to explain what psychosocial rehabilitation consists in, and why this approach is particularly relevant for the addiction care, although to date it has remained insufficiently developed in the routine practice of many facilities. Results: The "rehab" approach, is first based on a structured and comprehensive assessment of the clinical and functional aspects of the patient, which secondarily allows to frame an individualized project of care that is closely built together with the patient. This project of care can integrate classical pharmacotherapeutic and psychotherapeutic tools, but it also emphasizes wider approaches for restoring some basic social and physiological functions of the concerned person, such as sleep, eating, social functioning, physical activity, or spiritual needs. Priorities among these different dimensions have to be defined by the concerned person. Conclusions: The "rehab" approach is particularly relevant in addiction medicine. It represents a more global conception of care that conceives the recovery of social and physiological functioning as a core treatment objective for the concerned persons. The consequences for care organization are that many additional professionals should be integrated into the treatment schemes for addiction, including peer counselors, occupational therapists, physical activity coaches, or dieticians. abstract_id: PUBMED:37149767 Behavioural addictions. Problem gambling in general practice: How can we help? N/A Answer: Yes, Internet gambling is an emerging concern in family practice medicine. 
A study evaluated the prevalence of Internet gambling and its association with pathological gambling, as well as the relationship between Internet gambling and health status among patients attending medical and dental clinics. The study found that a significant portion of regular Internet gamblers were classified as probable pathological gamblers, and Internet gambling was associated with poor mental and physical health. This association remained significant even after controlling for age, gender, site, and pathological gambling status. The data suggest that Internet gambling is linked to pathological gambling and is independently associated with poor health, indicating that family practice physicians should consider referring patients who gamble on the Internet for further treatment (PUBMED:16621919).
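For the multinomial logistic regression described in PUBMED:28101835 (family concern and peer excess predicting low-risk, at-risk and problem/pathological gambling), a minimal sketch with simulated survey responses could look like the following; the coefficients used to generate the data are arbitrary, so the fitted odds ratios will not match the published ones.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000

# Simulated stand-in survey data: two binary exposures (family gambling concern,
# FC; perceived excessive peer gambling, PE) and a three-level outcome
# (0 = low-risk, 1 = at-risk, 2 = problem/pathological gambling).
df = pd.DataFrame({"FC": rng.integers(0, 2, n), "PE": rng.integers(0, 2, n)})
lin_at_risk = -2.0 + 0.9 * df["FC"] + 1.1 * df["PE"]
lin_problem = -3.5 + 1.3 * df["FC"] + 1.5 * df["PE"]
denom = 1 + np.exp(lin_at_risk) + np.exp(lin_problem)
p = np.column_stack([1 / denom,
                     np.exp(lin_at_risk) / denom,
                     np.exp(lin_problem) / denom])
y = np.array([rng.choice(3, p=row) for row in p])

# Multinomial logit with low-risk gambling as the reference category;
# exponentiated coefficients are odds ratios for each exposure and outcome level.
model = sm.MNLogit(y, sm.add_constant(df)).fit(disp=False)
odds_ratios = pd.DataFrame(
    np.exp(np.asarray(model.params)),
    index=["const", "FC", "PE"],
    columns=["at-risk vs low-risk", "problem vs low-risk"],
).drop("const")
print(odds_ratios.round(2))
```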
Instruction: Mitrofanoff continent catheterizable conduits: top down or bottom up? Abstracts: abstract_id: PUBMED:26904419 Reconstructive techniques for creation of catheterizable channels: tunneled and nipple valve channels. Cutaneous catheterizable channels allow for continent bladder emptying when an alternate route is desired. The goals of channel creation in the neurogenic bladder population are successful urine elimination, renal preservation, continence and lastly cosmesis. In addition to a particular surgeon's comfort and experience with a given procedure, individual patient factors such as medical comorbidities, anatomic factors, and occupational function should be central to the selection of a surgical approach. An ideal channel is one that is short, straight, and well supported by associated blood supply and surrounding adventitia, so as to minimize difficulty with catheterization. Two types of channel continence mechanisms are discussed at length in this review-the tunneled channel, and the nipple valve. The appendicovesicostomy (Mitrofanoff), and reconfigured ileum (Yang-Monti) are both tunneled channels. The ileocecal valve is a commonly used nipple valve and provides continence when reinforced. The continent catheterizable ileal cecocystoplasty (CCIC) is an example of this channel technique. This method couples a tapered ileal limb as a catheterizable channel, the ileocecal valve as the continence mechanism, and the cecum and ascending colon as a bladder augmentation. While this procedure has higher perioperative complications relative to a simple tunneled channel, it has increased channel length flexibility and is also coupled with a bladder augment, which is completely performed using one bowel segment. Continent channel creation in adults can improve quality of life and minimize morbidity associated with neurogenic bladder. However, the decision to proceed with creation of a catheterizable channel should be made only after careful consideration of the patient's medical comorbidities, physical abilities social support, and surgeon experience. abstract_id: PUBMED:27707652 Bladder continent catheterizable conduit (the Mitrofanoff procedure): Long-term issues that should not be underestimated. Background: Effective bladder emptying by clean intermittent catheterization for children with severe bladder dysfunction is critical for renal preservation and social integration. Use of a continent catheterizable conduit (CCC) as urethral alternative procedure provides effective bladder drainage. However, it brings a substantive maintenance. Methods: Retrospective review of the indications and long-term outcomes of 54 patients with a Mitrofanoff procedure in a single center over a 20-year period (1995-2015). Results: Indications of CCC include 21 neurogenic bladders, 12 patients with epispadias/exstrophy, 13 bladder outlet obstruction, 6 malignancies and 2 cloaca. Median age at surgery was 8.3years (4months-20years). The appendix was used in 76% of cases. Most frequently encountered complication was stomal stenosis (n=17/34, 50%), occurring at median time of 9months (2months-13years). The other complications were: leakage in 9 (26.5%); conduit stricture in 5 (14.7%), angulation of the conduit in 2 (5.8%) and prolapse in one (3%). Operative revision was encountered by 33 (61%) patients, the majority in the first 2years. Median follow-up was 4.3years (3months-16years). Conclusions: CCC has a high incidence of complication. 
It has to be used only when the native urethra is not suitable for catheterization. Carers, patients and families must be prepared to deal with both the complexity of index conditions and the complications of this procedure. abstract_id: PUBMED:30819634 Long-term follow-up of continent cystostomy with the Mitrofanoff procedure: 5 years later Introduction: This study analyzed long-term functional outcome of continent catheterizable channels with the Mitrofanoff procedure, their continence, complications and the satisfaction of the patients. Material And Method: Data from patients who underwent a Mitrofanoff procedure at our institution from June 1997 to March 2015 were retrospectively collected. All patients were contacted at the end of the study, and a survey was submitted to them. Results: Sixty-seven patients underwent a continent cystostomy with the Mitrofanoff procedure. Forty-five patients met the inclusion criteria: 18 years old or older, no previous urinary diversion, and a minimum of 6 months of follow-up. The cohort comprised mainly patients with neurologic bladder (84 %), with spinal cord injuries (54 %) or spina bifida (15 %). Median age was 35 years old [22-49]. Median follow-up was 64 months [39-90]. The surgical procedure used an appendicular channel in 30 patients (67 %) or a continent ileal plasty in 15 patients (33 %). At the end of follow-up, 88 % of patients had full cystostomy continence and 89 % had full urethral continence. Twenty-nine patients had one (41 %) or more reinterventions. Reasons for the 58 reinterventions were: stomal stenosis (31 %), urethral incontinence (29 %), cystostomy incontinence (15 %), lithiasis (9 %). Those reinterventions were done with a local surgery (31 %) or an endoscopic surgery (35 %). Overall early (<30 days) or delayed (>30 days) adverse events were similar (P=0.93) in the appendicovesicostomy and continent ileal plasty groups. Ninety-four percent of patients described satisfactory urinary comfort. The cystostomy was considered esthetic by 71 %, and its realization allowed an improvement of the quality of life for 89 % of them. Conclusion: Continent channels in adults demonstrate favorable long-term outcomes even if reinterventions could be necessary to maintain a continent and catheterizable channel. Despite reinterventions, patients remain satisfied by the Mitrofanoff procedure, which facilitates the process of clean intermittent catheterization. Level Of Evidence: 4. abstract_id: PUBMED:8516983 Modification of Mitrofanoff principle for continent urinary diversion. In 1980, Mitrofanoff described a method of achieving continent urinary diversion by surgically closing the bladder neck and creating a continent catheterizable stoma from the appendix, which had been implanted in a non-refluxing manner into the bladder, or from a non-refluxing distal ureter. We describe a modification of the Mitrofanoff procedure for continent urinary diversion in 7 children in whom a standard Mitrofanoff procedure was not possible due to either body habitus or appendiceal anatomy. All 7 patients are continent both day and night. Four have required stomal revisions. Renal function has remained stable or improved in all patients. Although the revision rate was high, this modification of the Mitrofanoff principle has provided good long-term results in these patients and may be useful when the patient's anatomy does not allow the creation of a standard appendicovesicostomy.
abstract_id: PUBMED:34895819 Long-term usage pattern and satisfaction survey of continent catheterizable channels. Introduction: We investigated the long-term usage pattern and satisfaction of continent catheterizable channels (CCCs). Methods: From 2005 to 2018, CCCs, including Mitrofanoff and antegrade continent enema (ACE) channels, were made in 67 patients (Mitrofanoff in 21 patients, ACE channels in 43 patients, and both in three patients) in our institution. An online survey was conducted for these patients in order to assess usage pattern, continent status, difficulty in usage, and patient satisfaction. Results: Sixteen (66.7%) out of 24 patients with the Mitrofanoff channel and 39 (84.7%) out of 46 patients with the ACE channel completed the online survey. In the Mitrofanoff channel group, 10 (62.5%) patients had spina bifida, two (12.5%) had Hinman syndrome, one (6.3%) had posterior urethral valves, and three (18.8%) had urethral trauma or atresia. Additionally, the mean age of the patients at the time of surgery was 10.0 years, and the median follow-up duration was 10.9 years. All patients were using the Mitrofanoff channel to perform clean intermittent catheterization (CIC). Eleven patients (68.8%) had difficulty with catheterization, mostly at the stomal site. Most patients conducted CIC more than four times a day (13, 81.3%). Regarding urination status, seven patients (43.8%) responded that they were satisfied and nine (56.2%) responded they were neutral. In the ACE channel group, 35 patients (89.7%) had spina bifida, seven (17.9%) had cloacal anomalies, and 26 (66.7%) had anorectal malformations. The mean age of the patients at the time of surgery was 8.4 years, and the median follow-up period was 7.4 years. Two (5.1%) patients were no longer using their ACE channels, but 15 (38.5%) patients were still using their channels almost daily. Twenty-eight (71.8%) patients complained that performing enema was time-consuming, and seven (17.9%) patients reported pain when performing ACE and fecal incontinence. Most patients were satisfied with their defecation status (23, 59%), 15 (38.5%) were neutral, and one (2.6%) was dissatisfied. Conclusions: While most patients who had either Mitrofanoff or ACE channels were still using their channels effectively, approximately half of the patients with CCCs demonstrated neutral satisfaction with their current status; this shows a poor result compared to previous reports. Considering the results of our patient-based study, thorough explanations should be provided to patients who are candidates for Mitrofanoff and ACE procedures; additionally, the discomfort related to the procedures should be comprehensively assessed during follow-up consults. abstract_id: PUBMED:34635438 Diagnosis, management and recurrence rates of Mitrofanoff polyps. Background: Intraluminal polyps are a known complication following creation of a urinary continent catheterizable channel (Mitrofanoff). These polyps can lead to difficult catheterizations in addition to symptomatic bleeding. However, there is limited data available regarding management and outcomes of these polyps. We aim to describe clinical presentation and management of a large series of polyps occurring in a Mitrofanoff channel. Methods: We performed a retrospective review of all patients that were treated for polyps in a Mitrofanoff at our institution. Information was collected regarding presenting symptoms, management and recurrence rates of the polyps. 
Results: A total of 24 patients were identified who fulfilled the inclusion criteria. The majority of these polyps developed in channels composed of appendix (87%), while only 3 patients (13%) had polyps develop in a channel composed of ileum. Thirteen (54%) of these polyps were incidentally diagnosed, while 11 patients presented with a variety of symptoms such as difficulty in catheterization, bleeding with catheterization, or both difficulty catheterizing and bleeding. For management of the polyps, cystoscopy was performed: the polyp was snared with a stone basket in 37%, energy was applied to the base to remove the polyp in 33%, 16% were fulgurated, and only 13% were left in situ. All procedures were performed under general anesthesia, and all of the pathology was benign, showing chronic inflammatory tissue. Eight polyps (33%) recurred after initial treatment. Discussion: We did not observe an asymptomatic channel polyp become symptomatic during our follow-up period. Our experience has led us to not intervene on all asymptomatic Mitrofanoff polyps encountered during cystoscopy under the assumption that they will inevitably become symptomatic. We acknowledge, however, that our follow-up period may not be long enough to make this a universal declaration of best practice when any Mitrofanoff polyp is diagnosed. Endoscopic treatment was an effective, minimally invasive method to address the symptomatic polyp rather than excision and construction of a new channel. Conclusions: This is the largest series to date of polyps developing in urinary continent catheterizable channels. The majority of these polyps were encountered incidentally; however, symptomatic polyps presented with difficulty with catheterization. Symptomatic polyps can be managed endoscopically, but recurrence of the polyp can occur. abstract_id: PUBMED:9096273 Mitrofanoff principle for continent urinary diversion. In 1980 Mitrofanoff described a method to achieve continent urinary diversion by surgically closing the bladder neck and creating a continent catheterizable stoma from the appendix which had been implanted in a nonrefluxing manner into the bladder or from a nonrefluxing distal ureter. The Mitrofanoff principle (MP) serves well as a continence mechanism for either the native bladder or intestinal reservoirs. We review 17 patients, including 10 children, who were treated with the MP and received a continent catheterizable channel. For the MP, the appendix was used in 16 patients and the ureter in 1. The continuity between the urinary reservoir and urethra was maintained wherever possible. Bladder neck suspension was preferred to bladder neck closure in incontinent females and boys. Complications and their management have been discussed at length. Stable renal function, ease of catheterization and urinary continence were achieved in 93.8% of the patients. abstract_id: PUBMED:15626877 The continent bladder: indications and techniques for the continent catheterizable segment. Purpose Of Review: Continent catheterizable segments are a substantial part of the urologist's armamentarium for providing bladder drainage. They are used for a myriad of indications, and there are multiple techniques currently used for their formation. Despite refinements in these techniques, significant complications still occur, and there is continued advancement and ongoing investigation.
This review examines the current status of the continent catheterizable segment with regard to indications for its use, techniques in its formation, discussion of complications, and ongoing and future directions in research. Recent Findings: The continent catheterizable segment is indicated when it is not feasible to use the urethra for evacuation (e.g. bladder exstrophy, neurogenic bladder, radiation injury, and marked urethral dysfunction) or to facilitate catheterization. Compliance with catheterization and irrigation regimens is essential in patient selection. Multiple methods exist for its formation, either with or without the need for bladder augmentation. Although Mitrofanoff techniques with multiple applications predominate, "hemi" augments with efferent limbs also play a significant role. Stoma placement should be performed to best facilitate catheterization. Complications relating to catheterizable segments mainly pertain to continence, stenosis, and ability to catheterize, with more significant morbidity relating to the bladder augmentation. Ongoing research to develop more physiologic tissue substitutes and less invasive techniques may hopefully be superseded by prevention of the underlying lower urinary tract pathology. Summary: Catheterizable segments allow the patient to control bladder evacuation, and continue to be refined by ongoing investigations in terms of indication and technique, with attendant decreasing morbidity. abstract_id: PUBMED:22070895 Pilot study of the vesicocutaneous continent catheterizable stoma (mitrofanoff) in adults--high complication rates. Objective: To report our experience with an adult vesicocutaneous Mitrofanoff, an alternative to standard appendiceal or ileal Mitrofanoff. Material and Methods: Retrospective chart review was performed on 3 patients who underwent a vesicocutaneous Mitrofanoff by a single surgeon. Data were collected on etiology, complications, and number of procedures to correct the complication. Results: Indications for the procedure included neurogenic bladder and urinary retention. Mean follow-up time was 485 days with mean time to first complication of 24 days. One hundred percent of patients experienced stenosis and dehiscence of their wounds. All 3 patients required subsequent surgery. Sixty-six percent required reoperation to the standard Mitrofanoff with a mean time to this procedure of 222 days. The mean number of procedures was 4 (range 2-6). Conclusion: Continent catheterizable stomas have been a continuous challenge for adults who have had multiple abdominal surgeries, making a typical appendix, ureter, or small bowel Mitrofanoff difficult. We looked at our experience with a cutaneous Mitrofanoff (an extraperitoneal procedure often done in pediatrics but never before in adults) as an alternative but found complication rates of 100%. These included stenosis and dehiscence eventually requiring reoperation, with some even requiring conversion to an ileal or appendiceal Mitrofanoff. Although there are high complication rates in the standard Mitrofanoff, we conclude that a vesicocutaneous Mitrofanoff is not an effective alternative. abstract_id: PUBMED:26653576 Male sexuality after external continent urinary diversion type Mitrofanoff Objective: To evaluate the influence of continent external urinary diversion type Mitrofanoff on male sexuality. Material And Methods: Between 1992 and 2011, 140 patients underwent continent urinary diversion type Mitrofanoff at an academic hospital.
Among 76 men, 46 were interviewed about their sexuality after this operation. This study was performed using a set of validated questionnaires (IIEF, DAN PSS and Urolife), grouped by the model of the CTMH. Patients were divided according to their marital status: group 1: patients married before surgery (15 cases), group 2: patients married after surgery (7 cases) and group 3: singles (24 cases). Results: In the first group, the functional dimension of sexuality was positive with an overall score of 81%, the sexual discomfort score was assessed at 26 % and the sexual satisfaction score was 77%. In the second group, sexual function was considered conserved in all cases with a satisfaction score estimated at 98%. These patients reported a feeling of well-being following the disappearance of urinary incontinence with integrity of their body images. In contrast, in the last group, relatively impaired sexual function was noted (65%) with a satisfaction score estimated at 59%. These disorders were multifactorial, mainly related to neurological causal pathology. Conclusion: To our knowledge, this is the first study about male sexuality in patients with a continent urinary diversion type Mitrofanoff. Marital status has a major role in the sexuality of these patients. A prospective study with pre- and postoperative evaluation will better clarify the factors affecting sexuality in these patients. Answer: The Mitrofanoff procedure is a surgical technique used to create a continent catheterizable conduit (CCC), allowing individuals with bladder dysfunction to perform clean intermittent catheterization (CIC) through a stoma rather than the urethra. The procedure typically involves using the appendix or a segment of the ileum to create a channel between the skin surface and the bladder, providing an alternative route for bladder emptying. The abstracts provided do not explicitly discuss the direction of catheterization in terms of "top down" or "bottom up." However, they do provide insights into the surgical techniques, outcomes, and complications associated with the Mitrofanoff procedure and other similar reconstructive techniques for creating catheterizable channels. For instance, the Mitrofanoff procedure often uses the appendix to create the channel, which is then tunneled into the bladder in a non-refluxing manner (PUBMED:9096273). The channel is designed to be continent, meaning it prevents urine leakage, and is catheterized at intervals to empty the bladder (PUBMED:15626877). The Mitrofanoff principle can be applied to both native bladders and intestinal reservoirs, with the goal of achieving urinary continence and ease of catheterization (PUBMED:9096273). Complications such as stomal stenosis, leakage, conduit stricture, and the need for operative revisions are not uncommon, and long-term follow-up is necessary to manage these issues (PUBMED:27707652, PUBMED:30819634). Despite these challenges, many patients report satisfaction with the procedure due to the improved quality of life and urinary comfort it provides (PUBMED:30819634). In summary, the Mitrofanoff procedure and similar techniques for creating continent catheterizable channels are primarily focused on the functional outcomes, such as achieving continence and facilitating CIC, rather than the direction of catheterization. 
The abstracts do not provide specific information on whether the catheterization is performed from the top down or bottom up, as the primary concern is the creation of a reliable and continent channel for bladder management.
Instruction: Health profile of the urban community members in Lithuania: do socio-demographic factors matter? Abstracts: abstract_id: PUBMED:25541270 Health profile of the urban community members in Lithuania: do socio-demographic factors matter? Background And Objective: The objective of the study was to explore self-perceived health status, health determinants and their associations with socio-demographic factors among urban community members in Lithuania. Materials And Methods: Data were obtained from a European survey on urban health, conducted as part of the EURO-URHIS 2 project. The postal questionnaire survey of 3200 adults from Kaunas and Šiauliai (Lithuania) was conducted in 2010. A total of 1407 valid questionnaires were analyzed. Statistical analysis was carried out using the SPSS 17.0 Complex Samples module, which takes design effects into account. Results: Younger respondents (aged 19-64 years) perceived most of the health status indicators better than the older ones (65+ years), while they were less likely to report a healthy lifestyle and less often perceived their neighborhood as being socially cohesive than the older ones. Men less frequently experienced psychological problems, indicated regular contacts with friends and/or family, and had a greater tendency to be overweight and obese, daily smokers and drinkers compared to women. Those having a secondary or lower educational level perceived most of the health status indicators worse than those with a university educational level. Respondents living with a partner less often experienced psychological problems than those living alone. Respondents who indicated having enough money for daily expenses more often perceived their health and health determinants better. Conclusions: The results of this study demonstrate associations between socio-demographic factors and self-perceived health status, lifestyle and factors of living environment among urban community members in Lithuania. abstract_id: PUBMED:32669947 Socio-demographic factors of cesarean births in Nha Trang city, Vietnam: a community-based survey. Background: The cesarean section rate in Vietnam has been increasing, especially in urban areas. However, limited evidence has been identified regarding socio-demographic factors of cesarean section birth. The objective of this study was to determine the current cesarean birth rate and the associated socio-demographic factors among mothers in Nha Trang city, south-central Vietnam. Methods: A community-based cross-sectional study was conducted between October and November in 2016 as part of a Streptococcus pneumoniae carriage survey conducted in 27 communes of Nha Trang city. From each commune, 120 mothers and their children less than 2 years old were randomly selected. Mothers were asked to answer standardized questions regarding socio-demographic information and mode of birth. Multivariate logistic regression was adopted to examine associations between socio-demographic variables and mode of birth. Results: Of 3148 participants, the number of cesarean births was 1396 (44.3 %). Older maternal age (≥ 30 years old), having another child going to school or kindergarten, monthly income more than 644 USD, gestational weeks at birth over 42 weeks, and low (< 2500 g) or high (≥ 3500 g) birth weight were associated with higher likelihood of cesarean births. Conclusion: The CS rate obtained in this study was more than twice what is recommended by the World Health Organization, which is consistent with the previous nation-wide study in Viet Nam.
Further monitoring is suggested to examine the non-medical reasons for the increased CS rate. abstract_id: PUBMED:8439362 Urban community health volunteers. An evaluation was made of the effectiveness of an urban community health volunteer programme in Pokhara, Nepal, in sensitizing and motivating people for the improvement of health knowledge and skills and the use of services. Despite weaknesses and inadequate management support, there has been a sharp increase in coverage by the health services. If management support were properly developed, the programme could undoubtedly play a vital role in raising health standards. It was clearly demonstrated that the utilization of services can be increased if access to them is improved. abstract_id: PUBMED:9269100 Demographic profile and health conditions of the elderly in a community in an urban area of southeastern Brazil Some specific characteristics of the aging of the Brazilian population in different areas, states and communities all over the country have shown significant variations. Historical series of demographic and health indicators for the population in their sixties and over in Brazil, the state of S. Paulo and the municipal district of Araraquara are listed as follows: level of education and urban population growth rate, income distribution, mortality rates and main causes of death. In 1991 the aged constituted 7.8% of the Brazilian population and 9.7% of the Araraquara community. The elderly population (70 years of age and above), as a proportion of the whole, has increased and already stands at 40%. The same trend holds for both the proportion of the aged within the urban population and their level of education, which increased to 90% in 1991. The main causes of death are chronic degenerative diseases, which have replaced infectious illnesses: first, the diseases of the circulatory system (which account for more than 40% of all deaths) and the neoplasms (which led to 15% of the deaths). On the basis of these health and demographic data relating to people of 60 years of age and over, this study suggests some procedures for the improvement of the quality of the assistance given to the target population: a) the assistance given to the aged should be improved by providing gerontological training for general physicians and nurses, in both public and private clinics; b) the already existing educational activities for the aged, for health workers and for teachers of secondary education should be further developed; c) the number of day-hospitals should be increased for the purpose of avoiding unnecessary confinement so as to maintain the low rate of institutionalization in homes for the elderly (0.7% in Araraquara). It is reported that at least 35% of the aged population in this area is entitled to private health assistance, which brings out the importance of including such services in the local health programs for this group. abstract_id: PUBMED:28597864 Motivating and demotivating factors for community health workers: A qualitative study in urban slums of Delhi, India. Background Community health workers play an important role in delivering health-care services, especially to underserved populations in low- and middle-income countries. They have been shown to be successful in providing a range of preventive, promotive and curative services. This qualitative study investigated the factors motivating or demotivating community health workers in urban settings in Delhi, India.
Methods In this sub-study of the ANCHUL (Ante Natal and Child Healthcare in Urban Slums) implementation research project, four focus-group discussions and nine in-depth interviews were conducted with community health workers and medical officers. Utilizing a reflexive and inductive qualitative methodology, the data set was coded, to allow categories of motivating and demotivating factors to emerge. Results Motivating factors identified were: support from family members for their work, improved self-identity, job satisfaction and a sense of social responsibility, prior experiences of ill health, the opportunity to acquire new skills and knowledge, social recognition and status conferred by the community, and flexible work and timings. Negative experiences in the community and at health centres, constraints in the local health system in response to the demand generated by the community health workers, and poor pay demotivated community health workers in this study, even causing some to quit their jobs. Conclusion Community-health-worker programmes that focus on ensuring the technical capacity of their staff may not give adequate attention to the factors that motivate or discourage these workers. As efforts get under way to ensure universal access to health care, it is important that these issues are recognized and addressed, to ensure that community health worker programmes are effective and sustainable. abstract_id: PUBMED:28420189 Demographic and Environmental Factors Associated with Mental Health: A Cross-Sectional Study. Relevant demographic and environmental conditions need to be understood before tailoring policies to improve mental health. Using community health survey data from 25 communities in Seoul, 2013, cross-sectional associations between mental health and community level environments were assessed. Mental health outcomes (self-rated stress levels (SRS) and depressive symptoms (DS)) were analyzed. Community environmental factors included green space, green facilities, and annual PM10 level (AnnPM10); socio-demographic factors included sex, age, education, labor market participation, comorbidity, sleep hours, physical activity, smoking, and drinking. A total of 23,139 people with the following characteristics participated: men (44.2%); age groups 19-39 (36.0%), 40-59 (39.4%), 60-74 (19.2%), and 75+ (5.4%). Women had higher odds ratios (OR) for SRS [OR 1.22, 95% Confidence interval (CI) 1.17-1.27] and DS [OR 1.55, 95% CI 1.42-1.71]. Regular physical activity predicted SRS [OR 0.90, 95% CI 0.84-0.95] and DS [OR 0.98, 95% CI 0.88-1.10]; current smoking and drinking were adversely associated with both SRS and DS. Higher accessibility to green space (Q4) was inversely associated with DS [OR 0.89, 95% CI 0.81-0.97] compared to lower accessibility (Q1). AnnPM10, annual levels for particles of aerodynamic diameter <10 µm (PM10), among communities was associated with poorer SRS [OR 1.02, 95% CI 1.00-1.04] by 10 μg/m³ increases. Therefore, both demographic and environmental factors should be considered to understand mental health conditions among the general population. abstract_id: PUBMED:33804550 Factors Affecting Obesity in Urban and Rural Adolescents: Demographic, Socioeconomic Characteristics, Health Behavior and Health Education. In this study, we aimed to analyze the demographic factors, socioeconomic factors, health behavior factors, and health education of urban and rural adolescents and their impact on obesity in rural and urban youths. 
We analyzed data from 60,040 students from 800 schools using the 14th Youth Health Behavior Web-based Survey data for 2018. The mean age of the participants was 15.01 ± 1.77 years; 30,463 (50.7%) were male and 29,577 (49.3%) were female. The data were analyzed using a chi-square test and multinomial logistic regression analysis. Rural adolescents had significantly lower parental income, parental education, and academic performance levels. Urban adolescents tended to have a higher rate of breakfast fasting, ate more fast-food, exercised less, had a higher rate of sleep dissatisfaction, and had significantly higher current alcohol intake. Rural adolescents reported higher stress and obesity levels than their urban counterparts. Factors influencing the obesity of urban adolescents were grade level, father's education, mother's education, academic performance level, fast-food, exercise, current alcohol intake, and sleep satisfaction. Factors influencing the obesity of rural adolescents were parents' income level, academic performance level, exercise, and sleep satisfaction. To effectively prevent and manage adolescent obesity, which is an indicator of health equity among adolescents, an effort must be made to improve health education and the local environment in both urban and rural areas. abstract_id: PUBMED:20225747 The levels of community involvement in health (CIH): a case of rural and urban communities in Kwazulu-Natal. The study aimed to describe the practice of community involvement in health programmes. The study therefore explored the nature and practice of community involvement in health programmes in the two communities in KwaZulu-Natal. The study was guided by the conceptual framework adapted from Arnstein's (1969) Ladder of Citizen Participation. This framework shows different levels and steps in community participation. A case study method was used to conduct the study. The two cases were one urban-based and one rural-based community health center in the Ilembe health district in KwaZulu-Natal. A sample of 31 persons participated in the study. The sample comprised 8 registered nurses, 2 enrolled nurses, 13 community members and 8 community health workers. Data were collected using structured individual interviews and focus group interviews, and were guided by the case protocol. Community involvement in health largely depended on the type of community, with rural community members being in charge of their health projects and urban community members helping each other as neighbours in times of need. abstract_id: PUBMED:23551642 Rural-urban differences in consumer governance at community health centers. Context: Community health centers (CHCs) are primary care clinics that serve mostly low-income patients in rural and urban areas. They are required to be governed by a consumer majority. What little is known about the structure and function of these boards in practice suggests that CHC boards in rural areas may look and act differently from CHC boards in urban areas. Purpose: To identify differences in the structure and function of consumer governance at CHCs in rural and urban areas. Methods: Semistructured telephone interviews were conducted with 30 CHC board members from 14 different states. Questions focused on board members' perceptions of board composition and the role of consumers on the board.
Findings: CHCs in rural areas are more likely to have representative boards, are better able to convey confidence in the organization, and are better able to assess community needs than CHCs in urban areas. However, CHCs in rural areas often have problems achieving objective decision-making, and they may have fewer means for objectively evaluating quality of care due to the lack of patient board member anonymity. Conclusions: Consumer governance is implemented differently in rural and urban communities, and the advantages and disadvantages in each setting are unique. abstract_id: PUBMED:17132597 Rural-urban differences in health and health behaviour: a baseline description of a community health-promotion programme for the elderly. Study Objective: To (1) describe the setting and design of the Good Ageing in Lahti Region (GOAL) programme; (2) by using the baseline results of the GOAL cohort study, to examine whether living in urban, semi-urban, or rural communities is related to risk factors for chronic diseases and functional disability in ageing individuals. Design: The baseline data of a cohort study of ageing individuals living in three community types (urban, semi-urban, rural). Data were collected by two questionnaires and laboratory assessments. Setting: Fourteen municipalities in the Lahti region (Päijät-Häme County) in Finland. Participants: A regionally and locally stratified random sample of men and women born in 1946-50, 1936-40, and 1926-30. A total of 4,272 were invited and 2,815 (66%) participated. Main Results: Elevated serum cholesterol, obesity, disability, sedentary lifestyle (<2 times/week walking), and high fat intake were more prevalent in rural vs. urban and semi-urban communities. After adjustment for sex, age, education, obesity, diet, physical activity, smoking, and alcohol use, rural communities remained the only community type with increased (p<0.05) probability for high BMI (OR 1.33) and high waist circumference (OR 1.43). Conclusions: The unfavourable health and lifestyle profile, together with an old population, makes health promotion for elderly citizens a special challenge for rural communities such as those in Päijät-Häme County, Finland. Most, if not all, of the differences in health between the three community types were explained by educational background, physical activity, and smoking. Answer: Yes, socio-demographic factors do matter in the health profile of urban community members in Lithuania. The study conducted as part of the EURO-URHIS 2 project, which included a postal questionnaire survey of 3200 adults from Kaunas and Šiauliai, found that younger respondents perceived most health status indicators better than older ones, but were less likely to report a healthy lifestyle and perceived their neighborhood as less socially cohesive. Men were less likely to experience psychological problems and had more regular contacts with friends and/or family, but also had a greater tendency to be overweight, obese, daily smokers, and drinkers compared to women. Those with secondary or lower educational levels perceived most health indicators worse than those with university education. Living with a partner was associated with fewer psychological problems compared to living alone. Additionally, having enough money for daily expenses was linked to better perceptions of health and health determinants (PUBMED:25541270).
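The education and income gradients summarised in this answer are usually reported as odds ratios estimated from the survey data (the original analysis additionally accounted for the sampling design via the SPSS Complex Samples module). As a purely illustrative aid, the sketch below uses invented counts, not EURO-URHIS 2 data, to show how an unadjusted odds ratio and its 95% confidence interval for poor self-rated health by education level would be computed from a simple 2x2 table; no design-effect adjustment is included.

# Illustrative sketch only -- invented counts, not EURO-URHIS 2 data; no design-effect adjustment.
import math

# Hypothetical 2x2 table: education (secondary or lower vs. university) by self-rated health (poor vs. good).
poor_low_edu, good_low_edu = 220, 380
poor_high_edu, good_high_edu = 120, 480

odds_ratio = (poor_low_edu * good_high_edu) / (good_low_edu * poor_high_edu)
se_log_or = math.sqrt(1/poor_low_edu + 1/good_low_edu + 1/poor_high_edu + 1/good_high_edu)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")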
Instruction: Does isolation of the left atrial posterior wall improve clinical outcomes after radiofrequency catheter ablation for persistent atrial fibrillation? Abstracts: abstract_id: PUBMED:34258794 The role of posterior wall isolation in catheter ablation of persistent atrial fibrillation. The left atrial posterior wall has many embryologic, anatomic, and electrophysiologic characteristics, that are important for the initiation and maintenance of persistent atrial fibrillation. The left atrial posterior wall is a potential target for ablation in patients with persistent atrial fibrillation, a population in whom pulmonary vein isolation alone has resulted in unsatisfactory recurrence rates. Published clinical studies report conflicting results on the safety and efficacy of posterior wall isolation. Emerging technologies including optimized use of radiofrequency ablation, pulse field ablation, and combined endocardial/epicardial ablation may optimize approaches to posterior wall isolation and reduce the risk of injury to nearby structures such as the esophagus. Critical evaluation of future and ongoing clinical studies of posterior wall isolation requires careful scrutiny of many characteristics, including intraprocedural definition of posterior wall isolation, concomitant extrapulmonary vein ablation, and study endpoints. abstract_id: PUBMED:33693191 Is Incomplete Left Atrial Posterior Wall Isolation Associated With Recurrence of Atrial Fibrillation After Radiofrequency Catheter Ablation? Background: Complete left atrial posterior wall isolation (LAPI) is not always achieved. We examined whether incomplete LAPI has an effect on outcomes after catheter ablation (CA). Methods and Results: This study enrolled 75 consecutive patients (mean [±SD] age 62.6±8.9 years, 74.7% male) who underwent LAPI by radiofrequency CA for persistent atrial fibrillation (AF). The median follow-up period was 541 days (interquartile range 338-840 days). Incomplete LAPI was defined as the presence of a successfully created roof or floor linear lesion. The rate of complete LAPI was 41.3% (31/75). Either a roof or floor linear lesion was created in 38 patients, whereas neither was created in 6. Multivariate Cox proportional hazards regression analysis revealed that female sex (hazard ratio [HR] 5.29; 95% confidence interval [CI] 1.81-16.8; P=0.002) and complete or incomplete LAPI (HR 0.17; 95% CI 0.03-0.79; P=0.027) were independent predictors of AF recurrence. Kaplan-Meier curves indicated that better outcome was associated with at least one rather than no successful linear lesion (86.5% vs. 50.0% at 1 year; P=0.043). There were no significant differences in outcomes between the complete LAPI and incomplete LAPI groups. Conclusions: Complete LAPI is unachievable in a significant percentage of patients with persistent AF. However, incomplete LAPI, as a result of aiming for complete LAPI, may have a benefit comparable to that of complete LAPI. abstract_id: PUBMED:33022816 Multiple procedure outcomes for nonparoxysmal atrial fibrillation: Left atrial posterior wall isolation versus stepwise ablation. Objective: To compare multiple-procedure catheter ablation outcomes of a stepwise approach versus left atrial posterior wall isolation (LA PWI) in patients undergoing nonparoxysmal atrial fibrillation (NPAF) ablation. Background: Unfavorable outcomes for stepwise ablation of NPAF in large clinical trials may be attributable to proarrhythmic effects of incomplete ablation lines. 
It is unknown if a more extensive initial ablation strategy results in improved outcomes following multiple ablation procedures. Methods: Two hundred twenty-two consecutive patients with NPAF underwent first-time ablation using a contact-force sensing ablation catheter utilizing either a stepwise (Group 1, n = 111) or LA PWI (Group 2, n = 111) approach. The duration of follow-up was 36 months. The primary endpoint was freedom from atrial arrhythmia >30 s. Secondary endpoints were freedom from persistent arrhythmia, repeat ablation, and recurrent arrhythmia after repeat ablation. Results: There was similar freedom from atrial arrhythmias after index ablation for both stepwise and LA PWI groups at 36 months (60% vs. 69%, p = .1). The stepwise group was more likely to present with persistent recurrent arrhythmia (29% vs. 14%, p = .005) and more likely to undergo a second catheter ablation (32% vs. 12%, p < .001) compared to LA PWI patients. Recurrent arrhythmia after repeat ablation was more likely in the stepwise group compared to the LA PWI group (15% vs. 4%, p = .003). Conclusions: Compared to a stepwise approach, LA PWI for patients with NPAF resulted in a similar incidence of any atrial arrhythmia, lower incidence of persistent arrhythmia, and fewer repeat ablations. Results for repeat ablation were not improved with a more extensive initial approach. abstract_id: PUBMED:34621427 Repeat procedures for recurrent persistent atrial fibrillation: A propensity-matched score comparison between left atrial linear ablation with radiofrequency and posterior wall isolation with the cryoballoon. Aims: To evaluate the clinical outcome in patients undergoing repeat procedures for recurrent persistent atrial fibrillation following an index cryoballoon (CB-A) pulmonary vein isolation ablation on a mid-term follow-up of 12 months. Methods: In this propensity score-matched comparison, 50 patients undergoing left atrial posterior wall isolation (LAPWI) with the CB-A were matched to 50 patients treated with additional linear ablation using radiofrequency catheter ablation (RFCA). Results: Mean time to the repeat procedure was 9.74 ± 4.36 months. At 12-month follow-up, freedom from atrial tachyarrhythmias (ATas) was achieved in 82% of patients in the LAPWI group and in 62% of patients in the linear ablation group (P = .03). Regression analysis demonstrated that relapses during the blanking period and LA dimensions were independent predictors of ATas recurrences following the repeat procedure. Conclusion: LAPWI using CB-A is associated with a significantly higher freedom from atrial arrhythmias when compared with RFCA-mediated left atrial linear lesions on a mid-term follow-up of 12 months in patients with persAF undergoing a redo procedure. abstract_id: PUBMED:32477812 Posterior Wall Isolation in Atrial Fibrillation Ablation. Catheter ablation is widely utilized for the management of atrial fibrillation (AF), particularly in patients who are refractory to medical therapy. The left atrium appears to play a dominant role in the condition of most patients with AF and, in particular, the posterior wall and pulmonary veins frequently harbor sources of fibrillation. Currently, the role of posterior wall isolation during catheter ablation of AF is controversial.
In this review, we will examine the mechanistic role of the posterior left atrium, discuss the technical challenges of ablating in the posterior wall and the evolution of strategies to achieve isolation with catheter approaches, and review the relevant literature to date. abstract_id: PUBMED:27663412 Challenges and Outcomes of Posterior Wall Isolation for Ablation of Atrial Fibrillation. Background: The left atrial posterior wall (PW) often contains sites required for maintenance of atrial fibrillation (AF). Electrical isolation of the PW is an important feature of all open surgeries for AF. This study assessed the ability of current ablation techniques to achieve PW isolation (PWI) and its effect on recurrent AF. Methods And Results: Fifty-seven consecutive patients with persistent or high-burden paroxysmal AF underwent catheter ablation, which was performed using an endocardial-only (30) or a hybrid endocardial-epicardial procedure (27). The catheter ablation lesion set included pulmonary vein antral isolation and a box lesion on the PW (roof and posterior lines). Success in creating the box lesion was assessed as electrical silence of the PW (voltage <0.1 mV) and exit block in the PW with electrical capture. Cox proportional hazards models were used for analysis of AF recurrence. PWI was achieved in 21 patients (36.8%), more often in patients undergoing hybrid ablation than endocardial ablation alone (51.9% versus 23.3%, P=0.05). Twelve patients underwent redo ablation. Five of 12 had a successful procedural PWI, but all had PW reconnection at the redo procedure. Over a median follow-up of 302 days, 56.1% of the patients were free of atrial arrhythmias. No parameter including procedural PWI was a statistically significant predictor of recurrent atrial arrhythmias. Conclusions: PWI during catheter ablation for AF is difficult to achieve, especially with endocardial ablation alone. Procedural achievement of PWI in this group of patients was not associated with a reduction in recurrent atrial arrhythmias, but reconnection of the PW was common. abstract_id: PUBMED:29178497 How to perform posterior wall isolation in catheter ablation for atrial fibrillation. Catheter ablation has become standard of care in patients with symptomatic atrial fibrillation (AF). Although there have been significant advances in our understanding and technology, a substantial proportion of patients have ongoing AF requiring repeat procedures. Pulmonary vein isolation (PVI) is the cornerstone of AF ablation; however, it is less effective in patients with persistent as opposed to paroxysmal atrial fibrillation. Left atrial posterior wall isolation (PWI) is commonly performed as an adjunct to PVI in patients with persistent AF with nonrandomized studies showing improved outcomes. Anatomical considerations and detailed outline of the various approaches and techniques to performing PWI are detailed, and advantages and pitfalls to assist the clinical electrophysiologist successfully and safely complete PWI are described. abstract_id: PUBMED:35138376 Does isolation of the left atrial posterior wall using cryoballoon ablation improve clinical outcomes in patients with persistent atrial fibrillation? A prospective randomized controlled trial. Aims: Posterior wall isolation (PWI) of the left atrium (LA) adjunct to pulmonary vein isolation (PVI) by radiofrequency catheter ablation has shown favourable outcomes in patients with persistent atrial fibrillation (PeAF). 
This study sought to investigate the efficacy and safety of additional PWI by cryoballoon ablation (CBA) alone in patients with PeAF. Methods And Results: Patients who underwent de novo CBA for PeAF (n = 100) were randomly assigned (1:1) to the PVI only group or the PVI combined with PWI (PVI+PWI) group. Procedural and clinical outcomes were prospectively compared over a 12-month follow-up. Baseline characteristics, including mean AF duration (56.2 ± 43.2 months) and LA size (48.2 ± 7.7 mm), were well-balanced between the groups. Successful PVI was achieved in all patients. In the PVI+PWI group, complete PWI by CBA was achieved in 31 (62%) patients. The LA indwelling and procedure times were significantly longer in the PVI+PWI group. The complication rates were not different between groups. During a mean follow-up of 457.9 ± 61.8 days, the recurrence rate of atrial tachyarrhythmia was significantly lower in the PVI+PWI group than in the PVI only group (24% vs. 46%; P = 0.035). The recurrence-free survival rate was significantly higher in the PVI+PWI group compared with the PVI only group, irrespective of complete PWI (log-rank P = 0.013). Multivariate analysis showed that adjunctive PWI [hazard ratio (HR) 0.255; P = 0.003] and LA size (HR 1.079; P = 0.014) were independent predictors of clinical recurrence. Conclusion: Compared with PVI only, adjunctive PWI achieved exclusively by CBA resulted in better clinical outcomes without increasing complications in patients with PeAF. abstract_id: PUBMED:30091199 The use of an esophageal catheter to check the results of left atrial posterior wall isolation in the treatment of atrial fibrillation. Background: Left atrial posterior wall isolation (LAPWI) via catheter, surgical, and hybrid techniques is a promising treatment for persistent atrial fibrillation (PersAF). We investigated whether confirmation of LAPWI can be achieved using an esophageal pacing and recording electrode. Methods: Patients undergoing PersAF ablation with the intention to achieve LAPWI were enrolled. Two approaches to LAPWI were tested: (1) ablation using endocardial catheter ablation only, and (2) "Staged Hybrid" ablation with thoracoscopic epicardial ablation, followed by endocardial left atrial electrophysiological study and catheter ablation where necessary. All patients enrolled in the study in this group required further catheter ablation to achieve LAPWI. In both groups, esophageal recording and esophageal pacing were performed at the start of mapping and electrophysiological study and compared with endocardial electrophysiological findings. This was repeated at the end of the procedure. Results: Twenty patients (16 M, four F) were studied. Endocardial electrophysiological study showed that in none of the cases was the posterior left atrial wall electrically isolated at the start of the study. One patient with Barrett's esophagus failed to sense or pace from the esophagus at any point in the study. In all of the remaining 19 patients, esophageal pacing captured the atrial rhythm at the start of the procedure. LAPWI was then achieved in 17/19 using endocardial catheter ablation; retesting at this point showed that sensing and capture of the atrium from the esophagus were abolished. In the remainder, sensing and capture persisted. Conclusions: Esophageal pacing can be used to confirm or refute electrical isolation of the left atrial posterior wall. abstract_id: PUBMED:26971331 Left atrial posterior wall isolation reduces the recurrence of atrial fibrillation: a meta-analysis.
Purpose: The posterior wall of the left atrium plays an important role in atrial fibrillation (AF) recurrence, but the benefit of left atrial posterior wall isolation (PWI) remains unclear. The objective was to evaluate the benefit of PWI in radiofrequency ablation. Methods: PubMed and the Web of Science were searched in September 2015. Studies comparing catheter ablation with PWI [PWI(+)] vs. ablation without PWI [PWI(-)] were included. We performed a meta-analysis to assess atrial arrhythmia recurrence, procedure-related complications, and procedural time. Results: Five studies with 594 AF patients were included. Compared with PWI(-), PWI(+) resulted in a significantly lower atrial arrhythmia recurrence rate (relative risk [RR] 0.81, 95 % confidence interval [CI] 0.68-0.97, p = 0.02), which was largely driven by the decreased AF recurrence (RR 0.55, 95 % CI 0.35-0.86, p = 0.009). Recurrence rates of atrial tachycardia/flutter (AT/AFL) were comparable between the two groups (RR 1.16, 95 % CI 0.85-1.58, p = 0.34). There were no significant differences in procedure-related complications (RR 1.07, 95 % CI 0.45-2.53, p = 0.89) or procedural times (weighted mean difference 0.88, 95 % CI -7.29 to 9.06, p = 0.83). Conclusions: This meta-analysis shows that with comparable procedure-related complications and procedural time, ablation with PWI reduces AF recurrence. Answer: Isolation of the left atrial posterior wall (LAPWI) has been investigated as a potential strategy to improve clinical outcomes after radiofrequency catheter ablation for persistent atrial fibrillation (AF). The left atrial posterior wall is a significant target for ablation due to its role in the initiation and maintenance of persistent AF, and pulmonary vein isolation alone has shown unsatisfactory recurrence rates in this patient population (PUBMED:34258794). Clinical studies have reported mixed results regarding the safety and efficacy of posterior wall isolation. Some studies suggest that incomplete LAPWI may still confer a benefit comparable to complete LAPWI, indicating that even partial isolation resulting from the attempt to achieve complete isolation could be beneficial (PUBMED:33693191). However, achieving complete isolation of the posterior wall can be challenging, and procedural success does not always correlate with a reduction in recurrent atrial arrhythmias (PUBMED:27663412). Comparisons between different ablation strategies, such as stepwise ablation versus LAPWI, have shown that LAPWI may result in a lower incidence of persistent arrhythmia and fewer repeat ablations, although the incidence of any atrial arrhythmia was similar between the two approaches (PUBMED:33022816). Additionally, a propensity-matched score comparison found that LAPWI using cryoballoon ablation was associated with a higher freedom from atrial arrhythmias compared to radiofrequency catheter ablation with linear lesions in patients undergoing repeat procedures for recurrent persistent AF (PUBMED:34621427). A meta-analysis also indicated that ablation with posterior wall isolation resulted in a significantly lower atrial arrhythmia recurrence rate, particularly for AF recurrence, without significant differences in procedure-related complications and procedural times (PUBMED:26971331). Furthermore, a prospective randomized controlled trial showed that adjunctive PWI achieved exclusively by cryoballoon ablation resulted in better clinical outcomes without increasing complications in patients with persistent AF (PUBMED:35138376).
In summary, the evidence suggests that isolation of the left atrial posterior wall can improve clinical outcomes after radiofrequency catheter ablation for persistent atrial fibrillation, although the degree of benefit may vary depending on the completeness of isolation and the specific ablation techniques used.
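For readers unfamiliar with the pooled effect sizes quoted above (for example, RR 0.81 with a 95% CI of 0.68-0.97), the sketch below shows the standard arithmetic behind a study-level relative risk, its confidence interval, and a simple fixed-effect (inverse-variance) pooling on the log scale. The counts are invented for illustration and are not the data from the five studies in PUBMED:26971331, which may also have used a different pooling model.

# Illustrative sketch of relative-risk arithmetic -- invented counts, not data from the cited meta-analysis.
import math

def rr_with_ci(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    # Relative risk and 95% CI from 2x2 counts, using the log-normal approximation.
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se_log = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    return rr, math.exp(math.log(rr) - z*se_log), math.exp(math.log(rr) + z*se_log), se_log

# Hypothetical per-study counts: (recurrences with PWI, n with PWI, recurrences without PWI, n without PWI).
studies = [(30, 100, 40, 100), (25, 90, 35, 95), (20, 80, 28, 85)]

# Fixed-effect pooling: inverse-variance weighted mean of log(RR) across studies.
num = den = 0.0
for e1, n1, e0, n0 in studies:
    rr, lo, hi, se = rr_with_ci(e1, n1, e0, n0)
    weight = 1 / se**2
    num += weight * math.log(rr)
    den += weight
pooled_rr = math.exp(num / den)
pooled_se = math.sqrt(1 / den)
print(pooled_rr, math.exp(math.log(pooled_rr) - 1.96*pooled_se), math.exp(math.log(pooled_rr) + 1.96*pooled_se))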
Instruction: Rotator Cuff Fatty Infiltration Are Coronal Images More Helpful for Characterization than Sagittal Images? Abstracts: abstract_id: PUBMED:27281317 Rotator Cuff Fatty Infiltration Are Coronal Images More Helpful for Characterization than Sagittal Images? Objective: To assess for a difference in the characterization of rotator cuff (RC) muscle fatty infiltration (FI) between the sagittal and coronal planes in the setting of a large or massive RC tear. Materials And Methods: Retrospective review of patients with MRIs demonstrating large or massive RC tears (study group-SG) and no tearing (control group-CG) was conducted. Sagittal T1W and coronal PD images of the supraspinatus and infraspinatus muscles from each patient were selected, separated, and placed in random order. Three MSK readers reviewed the images blindly and independently and qualified the FI based on the Goutallier classification. Results: The SG included 15 men and 15 women (mean age: 65; range: 45 to 87 years); CG included 10 men and 10 women (mean age: 60; range: 45 to 76 years). Statistically significant (SS) discrepancies in the assessment of the FI between the sagittal and coronal images occurred in 62% in SG and 33% in CG (p < 0.001). In SG, each reader demonstrated a SS tendency (p < 0.04) to assign a higher value to each muscle when grading the sagittal images compared to coronal images. In SG, 33% of the time (37% supraspinatus, 29% infraspinatus), there was a grade greater than or equal to 3 assigned on the sagittal images but a grade less than 3 on corresponding coronal images. Conclusion: There was a statistically significant difference in the characterization of RC FI between the sagittal and coronal planes in the setting of a large or massive tear, likely related to incomplete visualization of the musculature on sagittal imaging secondary to muscle and tendon retraction and more complete visualization on coronal imaging. Awareness of this difference may allow for a more accurate characterization of the degree of fatty infiltration. abstract_id: PUBMED:11407717 Angled oblique sagittal MR imaging of rotator cuff tears: comparison with standard oblique sagittal images. Objective: To compare the accuracy for diagnosing rotator cuff tears of oblique coronal images supplemented with standard oblique sagittal images versus thinner-section angled oblique sagittal images. Design And Patients: The study included 75 consecutive patients who had a shoulder MR scan followed by arthroscopy. MR images included oblique coronal, oblique sagittal (4 mm thick, 1 mm interslice gap), and angled oblique sagittal (3 mm/0.2 mm) images perpendicular to the lateral cuff. A musculoskeletal staff radiologist and fellow separately evaluated the cuff for tears on the oblique coronal images supplemented with either the oblique sagittal or the angled sagittal images. Results: For distinguishing a cuff tear from no tear, the staff radiologist had an accuracy of 0.76 (95% confidence interval: 0.67, 0.85) with the standard sagittal set, and 0.88 (0.80, 0.95) with the angled set (P=0.04). There was a nonsignificant improvement in accuracy for the fellow, calculated as 0.73 (0.63, 0.83) on the standard sagittal set and 0.76 (0.67, 0.85) on the angled set. Both readers also improved their diagnostic accuracy for partial-thickness tears with the angled set, although the improvement was statistically significant only for the staff radiologist.
Conclusion: There is a slight improvement in accuracy for diagnosing rotator cuff tears, particularly partial-thickness tears, for the more experienced radiologist using thinner-section angled oblique sagittal images. These images may be useful as a supplemental sequence in patients where it is important to identify partial-thickness tears accurately. abstract_id: PUBMED:35566785 Coracoid Impingement and Morphology Is Associated with Fatty Infiltration and Rotator Cuff Tears. This study describes measurements between the coracoid, glenoid, and humerus; characterizes coracoid shape, rotator cuff fatty infiltration, and quantitatively evaluates coracoid impingement and its association with anterosuperior rotator cuff tears (ASCT). 193 shoulder magnetic resonance imaging (MRI) scans demonstrating no rotator cuff tear, an isolated tear of the supraspinatus, or a tear of the supraspinatus and subscapularis were included. MRI measurements included coracohumeral interval (CHI), coracoid overlap (CO), coracoid recess (CR), coracoglenoid angle (CGA), and coracoglenoid interval (CGI) on axial slices; acromiohumeral interval (AHI) on coronal slices; and coracohumeral interval (CHI) and coracoacromial ligament (CAL) thickness on sagittal slices. The coracoid shape was classified as flat, curved, or hooked. An independent t-test was used to compare the MRI measurements across the different rotator cuff tear groups. In 79% of the patients with ASCT tears, the coracoid was curved. Axial CHI, CGA, sagittal CHI, and AHI were decreased in ASCT when compared to no tears and isolated supraspinatus tears (p < 0.05). CO was increased in ASCT compared to no tears and isolated supraspinatus tears (p < 0.05). Patients with an ASCT had a significantly increased subscapularis and supraspinatus Goutallier fatty infiltration score when compared to no tear and isolated supraspinatus tears (p < 0.05). These quantitative measurements may be useful in identifying patients at risk for ASCT. Level of Evidence III. abstract_id: PUBMED:8710459 Use of oblique sagittal T2-weighted images in diagnosing rotator cuff tears. The correlation between oblique sagittal T2-weighted images and oblique coronal T2-weighted images was determined in 86 patients who had symptoms and signs suggesting rotator cuff tears. Abnormal high signal lesions suggesting full and partial thickness tear of the supraspinatus tendon were observed on the oblique coronal images in 20 and 27 of 86 patients, respectively. The oblique sagittal images showed the same lesions as the oblique coronal images. In the other 39, abnormal high signal lesions were not observed on oblique coronal images or on oblique sagittal images. The oblique sagittal images did not prove useful for detecting tears of the supraspinatus tendon. However, the oblique sagittal T2-weighted images showed posterior extension of high signal lesions and clarified abnormal high signal lesions near the long head of the biceps tendon in two patients. These results suggested that oblique coronal images should always be taken as the first step to identify a supraspinatus tendon tear, and oblique sagittal images can be taken as an adjunct to the former images. abstract_id: PUBMED:34116193 Does statin-treated hyperlipidemia affect rotator cuff healing or muscle fatty infiltration after rotator cuff repair? Background: Hyperlipidemia is linked to poor tendon-to-bone healing and progression of fatty infiltration after rotator cuff repair.
Statins effectively treat hyperlipidemia, but it is unknown if they have any potential detrimental effects following rotator cuff repair. The aim of this study was to evaluate the effect of statins on rotator cuff healing and fatty infiltration following repair. Methods: A total of 77 patients undergoing arthroscopic rotator cuff repair were recruited prospectively, 38 patients who were prescribed a statin for hyperlipidemia (statin group) and 39 patients who were not taking a statin (control group). Patients who did not have both preoperative and 1-year postoperative magnetic resonance imaging (MRI) scans were excluded from the study. Patient-reported outcome measures, namely the Western Ontario Rotator Cuff (WORC) index, Constant-Murley score, American Shoulder and Elbow Surgeons (ASES) score, and Disabilities of the Arm, Shoulder and Hand (DASH) score, were collected preoperatively and at 1 year. Fatty infiltration was assessed on MRI according to the Goutallier grade preoperatively and at 12 months; rotator cuff healing was assessed at 12 months according to the Sugaya classification. Following propensity score weighting to adjust for baseline imbalances, 12-month outcomes were compared between the 2 groups. Results: At 12 months, all patient-reported outcome measures had improved significantly compared with baseline (WORC score, 85.9 vs. 32.5, P < .001; ASES score, 87.3 vs. 37.5, P < .001; Constant-Murley score, 77 vs. 31, P < .001; and DASH score, 13.6 vs. 61.4, P < .001). There was no significant difference in postoperative scores in the statin group vs. the control group (WORC score, 84.9 vs. 89.6, P = .94; ASES score, 87.5 vs. 86.6, P = .40; Constant-Murley score, 77 vs. 81, P = .90; and DASH score, 14.4 vs. 11.4, P = .14), and for 3 of these scores, the 95% confidence intervals excluded a clinically meaningful difference. Similarly, rotator cuff healing at 12 months and Goutallier fatty infiltration grades were comparable between the 2 groups. Retears were seen in 6 patients (15.8%) in the statin group and 8 (20.5%) in the control group. Progression of fatty infiltration was seen in 4 patients (10.5%) in the statin and 4 (10.3%) in the control group. Statin use did not demonstrate a significant association with either retear risk (P = .41) or progression of fatty atrophy (P = .69). Conclusion: Patient-reported outcomes, rotator cuff retear rate, and fatty infiltration on MRI at 12 months after rotator cuff repair in patients with hyperlipidemia treated with statins are similar to those in a control group. abstract_id: PUBMED:8144344 Diagnostic performance of magnetic resonance imaging for the diagnosis of rotator cuff tears using supplemental images in the oblique sagittal plane. Rationale And Objectives: The authors evaluated the diagnostic utility of supplemental imaging in the oblique sagittal (OS) plane for the magnetic resonance imaging (MRI) diagnosis of rotator cuff tears. Methods: Two radiologists with varying levels of MRI experience blindly reviewed shoulder MR examinations of 50 patients for rotator cuff tears. Shoulder examinations were interpreted twice, initially using only double-echo spin-echo images obtained in the oblique coronal (OC) plane and later using double-echo spin-echo images obtained in the both the OC and OS planes. Tears were characterized according to size, location, and extent, and levels of diagnostic confidence were evaluated. 
Results: Sensitivity and specificity of MR for the diagnosis of rotator cuff tear was 85% and 80%, respectively, with the OC series increasing to 95% and 93%, respectively with OCOS scans. However, these increases were not statistically significant. Receiver operating characteristic (ROC) curves suggest a trend toward increased diagnostic confidence when supplemental OS scans are available, especially for the less experienced reader. Characterization of rotator cuff tears was not improved with additional OS images. Conclusions: Supplemental OS scans did not demonstrate a definite improvement in diagnostic accuracy for rotator cuff tears compared to OC scans alone. Estimations based on sample size calculations indicate that a much larger population of patients would be needed to show a statistically significant difference. abstract_id: PUBMED:33675976 Does the Walch type B shoulder have a transverse force couple imbalance? A volumetric analysis of segmented rotator cuff muscles in osteoarthritic shoulders. Background: The etiology of the Walch type B shoulder remains unclear. We hypothesized that a scapulohumeral muscle imbalance, due to a disturbed transverse force couple (TFC) between the anterior and posterior rotator cuff muscles, may have a role in the pathogenesis of the type B morphology. The purpose of this study was to determine whether there is a TFC imbalance in the Walch type B shoulder using an imaging-based 3-dimensional (3D) volumetric and fatty infiltration assessment of segmented rotator cuff muscles. Methods: Computed tomography images of 33 Walch type A and 60 Walch type B shoulders with the complete scapula and humerus including the distal humeral epicondyles were evaluated. The 3D volumes of the entire subscapularis, supraspinatus, and infraspinatus-teres minor (Infra-Tm) were manually segmented and analyzed. Additionally, anthropometric parameters including glenoid version, glenoid inclination, posterior humeral head subluxation, and humeral torsion were measured. The 3D muscle analysis was then compared with the anthropometric parameters using the Wilcoxon rank sum and Kruskal-Wallis tests. Results: There were no significant differences (P > .200) in muscle volume ratios between the Infra-Tm and the subscapularis in Walch type A (0.93) and type B (0.96) shoulders. The fatty infiltration percentage ratio, however, was significantly greater in type B shoulders (0.94 vs. 0.75, P < .001). The Infra-Tm to subscapularis fatty infiltration percentage ratio was significantly larger in patients with >75% humeral head subluxation than in those with 60%-75% head subluxation (0.97 vs. 0.74, P < .001) and significantly larger in patients with >25° of retroversion than in those with <15° of retroversion (1.10 vs. 0.75, P = .004). The supraspinatus fatty infiltration percentage was significantly lower in Walch type B shoulders than type A shoulders (P = .004). Walch type A shoulders had mean humeral retrotorsion of 22° ± 10° whereas Walch type B shoulders had humeral retrotorsion of only 14° ± 9° relative to the epicondylar axis (P < .001). Conclusion: The TFC is in balance in the Walch type B shoulder in terms of 3D volumetric rotator cuff muscle analysis; however, the posterior rotator cuff does demonstrate increased fatty infiltration. Posterior humeral head subluxation and glenoid retroversion, which are pathognomonic of the Walch type B shoulder, may lead to a disturbance in the length-tension relationship of the posterior rotator cuff, causing fatty infiltration. 
abstract_id: PUBMED:26261789 Muscle atrophy and fatty infiltration after an acute rotator cuff repair in a sheep model. Introduction: Rotator cuff tears (RCTs) are the most common tendon injury seen in orthopedic patients. Muscle atrophy and fatty infiltration of the muscle are crucial factors that dictate the outcome following rotator cuff surgery. Though less studied in humans, rotator cuff muscle fibrosis has also been seen in animal models and may likewise influence outcomes. The purpose of this study was to determine if the rotator cuff would develop muscle changes even in the setting of an acute repair in a sheep model. We hypothesized that fatty infiltration and fibrosis would be present even after an acute repair six months after initial surgery. Methods: Twelve female adult sheep underwent an acute rotator cuff tear and immediate repair on the right shoulder. The left shoulder served as a control and did not undergo a tear or a repair. Six months following acute rotator cuff repairs, sheep muscles were harvested to study atrophy, fatty infiltration, and fibrosis by histological analysis, western blotting, and reverse transcription polymerase chain reaction (RT-PCR). Results: The repair group demonstrated increased expression of genes related to muscle atrophy, fatty infiltration, and fibrosis. Significantly increased adipocytes, muscle fatty infiltration, and collagen deposition were observed in rotator cuff muscles in the tendon repair group compared to the control group. Conclusions: Rotator cuff muscle undergoes degradation changes, including fatty infiltration and fibrosis, even after the tendons are repaired immediately after rupture. Level Of Evidence: Basic Science Study. abstract_id: PUBMED:26261790 Bone morphogenetic protein signaling in rotator cuff muscle atrophy and fatty infiltration. Background: Reduced mass (atrophy) and increased fat content (fatty infiltration) of rotator cuff muscles are common complications of large or massive rotator cuff (RC) tears, and are believed to be irreversible even after tendon repairs. Clinically, both muscle atrophy and fatty infiltration are important factors contributing to poor functional outcomes after tendon repairs. The molecular mechanism of RC muscle atrophy and FI remains undefined. In this study, we investigated the role of bone morphogenetic protein (BMP) signaling in RC muscle atrophy and fatty infiltration using a rat model. Methods: Unilateral massive RC tears were induced in adult rats. RC muscles were harvested at 2 and 6 weeks after injury for BMP signaling analysis. In a separate experiment, a BMP inhibitor (LDN-193189) was administered to rats by daily intraperitoneal injection. RC muscles from rats in the treated and control groups were harvested at 6 weeks after injury for biochemistry and histology analysis. Results: We found significantly increased BMP-14 and BMP-7 expression in rotator cuff muscles after RCT. Inhibiting BMP signaling resulted in increased muscle atrophy and reduced fatty infiltration in rotator cuff muscle after RC tears. Conclusion: This result suggests that BMP signaling inhibits RC muscle atrophy but promotes fatty infiltration. abstract_id: PUBMED:31019560 The effect of obesity on fatty infiltration of the rotator cuff musculature in patients without rotator cuff tears.
Background: The modified Goutallier classification system describes the fatty infiltration of rotator cuff musculature (RCM) seen on magnetic resonance imaging (MRI) to assist with surgical decision-making for patients with rotator cuff tears (RCT). We describe the relationship between body mass index (BMI) and fatty infiltration in patients without RCT. Methods: Twenty-five patients from each of four different BMI ranges (< 25 kg/m2; 25 kg/m2 to 30 kg/m2; 30 kg/m2 to 35 kg/m2; > 35 kg/m2) were randomly selected from 1088 consecutive shoulder MRI scans (T1 parasagittal series). Four physician-readers evaluated MRI scans and assigned modified Goutallier grades (0 to 4) in each of the four rotator cuff muscles, as well as two adjacent muscles. Results: Grade distributions varied significantly based on BMI category for infraspinatus (p = 0.001), teres minor (p < 0.001), subscapularis (p = 0.025), teres major (p < 0.001) and deltoid (p < 0.001). Higher grades were evident with a diagnosis of diabetes mellitus in three of six muscles (p < 0.05), hyperlipidaemia in one muscle (p = 0.021) and greater patient age in three muscles (p < 0.05). Conclusions: Obese and severely obese patients without RCT have more fatty infiltration seen on MRI. Patient factors (older age and diagnosis of diabetes mellitus) can be predictive of fatty infiltration in RCM. Fatty infiltration of RCM is not solely attributable to the presence of a RCT. Answer: Based on the provided abstracts, it appears that coronal images may be more helpful for the characterization of rotator cuff (RC) muscle fatty infiltration (FI) than sagittal images, particularly in the setting of a large or massive RC tear. A study found statistically significant discrepancies in the assessment of FI between sagittal and coronal images, with a tendency to assign a higher grade of FI when grading the sagittal images compared to coronal images (PUBMED:27281317). This difference was attributed to incomplete visualization of the musculature on sagittal imaging due to muscle and tendon retraction, whereas coronal imaging provided a more complete visualization. Awareness of this difference may allow for a more accurate characterization of the degree of fatty infiltration. In contrast, another study suggested that angled oblique sagittal MR imaging might slightly improve accuracy for diagnosing rotator cuff tears, particularly partial-thickness tears, for more experienced radiologists (PUBMED:11407717). However, this study focused on the diagnosis of tears rather than the characterization of fatty infiltration specifically. Another study found that certain measurements related to coracoid morphology and impingement were associated with fatty infiltration and rotator cuff tears, indicating that various imaging planes and measurements can be relevant in the context of rotator cuff pathology (PUBMED:35566785). Overall, while sagittal images can provide valuable information and may be useful in certain contexts, the evidence from the abstract (PUBMED:27281317) suggests that coronal images might be more helpful for characterizing rotator cuff fatty infiltration, especially in the presence of large or massive tears.
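To make the plane-to-plane comparison described in PUBMED:27281317 concrete, the short sketch below tabulates paired Goutallier grades read from sagittal and coronal images and counts how often the sagittal reading is higher, including the clinically relevant case of a grade of 3 or more on sagittal images but less than 3 on the corresponding coronal images. This is only an illustrative sketch: the paired grades are hypothetical and the code is not the study's analysis.

```python
# Illustrative sketch (hypothetical grades, not the study's data): comparing
# per-muscle Goutallier grades assigned on sagittal vs. coronal images.
from collections import Counter

# Each tuple is (sagittal grade, coronal grade) for one muscle reading (0-4).
paired_grades = [
    (3, 2), (2, 2), (4, 3), (1, 1), (3, 3),
    (2, 1), (3, 1), (0, 0), (4, 4), (3, 2),
]

n = len(paired_grades)
discrepant = [pair for pair in paired_grades if pair[0] != pair[1]]
sagittal_higher = [pair for pair in paired_grades if pair[0] > pair[1]]
# Threshold highlighted in the abstract: >= 3 on sagittal but < 3 on coronal.
threshold_flips = [pair for pair in paired_grades if pair[0] >= 3 and pair[1] < 3]

print(f"Discrepant readings: {len(discrepant)}/{n} ({100 * len(discrepant) / n:.0f}%)")
print(f"Sagittal graded higher: {len(sagittal_higher)}/{n}")
print(f"Sagittal >= 3 but coronal < 3: {len(threshold_flips)}/{n}")
print("Sagittal grade distribution:", Counter(s for s, _ in paired_grades))
print("Coronal grade distribution:", Counter(c for _, c in paired_grades))
```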
Instruction: Is olestra consumption associated with changes in dietary intake, serum lipids, and body weight? Abstracts: abstract_id: PUBMED:12921885 Is olestra consumption associated with changes in dietary intake, serum lipids, and body weight? Objectives: There is considerable controversy regarding the effects of fat substitutes (such as the non-caloric fat substitute, olestra) on Americans' diet and health. This report gives associations of olestra consumption (in savory snacks) with changes in nutrient intake, serum lipid concentrations, and body weight 1 y after these snacks became available nationally in the United States. Methods: Participants were 1178 adults recruited from three large U.S. cities. At baseline (before the availability of olestra), participants attended a clinic visit and completed questionnaires (including a food-frequency questionnaire), provided fasting blood samples, and had height and weight measured. The clinic visit was repeated about 1 y later, after the introduction of olestra-containing snacks in the marketplace. Olestra consumption was categorized as "none," "very low" (>0 to <0.4 g/d), "low" (≥0.4 to <2.0 g/d), and "moderate/high" (≥2.0 g/d). Results: Twenty-five percent of participants reported consuming olestra-containing savory snacks in the preceding month, but only 2% were categorized as moderate/high consumers. Men reported nearly twice as much olestra consumption as women (1.22 versus 0.68 g/d, P = 0.01). Among moderate/high olestra consumers, total energy and carbohydrate intakes increased by 209 kcal/d and 37 g/d, compared with decreases of 87 kcal/d and 14 g/d, respectively, among non-consumers (both Ps for trend = 0.01), corresponding to non-significant changes in percentages of energy from carbohydrate and total fat. Olestra consumption was not associated with statistically significant changes in serum lipids or body weight. Conclusions: This study found very modest changes in total diet, but no changes in serum lipids or body weight, associated with consumption of olestra-containing savory snacks. However, the low use of olestra by the study sample limited the ability of this study to detect significant effects. abstract_id: PUBMED:10999973 Changes in diet, weight, and serum lipid levels associated with olestra consumption. Background: Specially manufactured low-fat and nonfat foods have become increasingly available over the past 2 decades and controversy has surrounded the issue of whether these products have beneficial or adverse effects on the health and nutritional status of Americans. Methods: This study examines the association of olestra consumption with changes in dietary intakes of energy, fat, and cholesterol and changes in weight and serum lipid concentrations. Data are from a cohort of 335 participants in the Olestra Post-Marketing Surveillance Study sentinel site in Marion County (Indianapolis, Ind). Diet, weight, and serum lipid levels were assessed before the market release of olestra and 1 year later, after olestra-containing foods were widely available. Olestra intake at the 1-year follow-up was categorized as none, low (>0 to 0.4 g/d), moderate (0.4 to 2.0 g/d), and heavy (>2.0 g/d). Results: Participants in the heavy olestra consumption category significantly reduced dietary intake of percentage of energy from fat (2.7 percentage points, P for trend, .003) and saturated fat (1.1 percentage points, P for trend, .02).
Consumers in the highest category of olestra consumption had statistically significantly reduced total serum cholesterol levels of -0.54 mmol/L (-21 mg/dL)compared with -0.14 mmol/L (-5 mg/dL) among olestra nonconsumers (P for trend,.03). Conclusions: These results indicate that introduction of a new fat substitute (olestra) in the US market was associated with healthful changes in dietary fat intake and serum cholesterol concentrations among consumers who chose to consume olestra-containing foods. abstract_id: PUBMED:12074254 Effect on body weight of replacing dietary fat with olestra for two or ten weeks in healthy men and women. Objective: To examine in two separate studies the effects of replacing dietary fat with Olestra on body composition and weight change in healthy young men and women. Methods: Ten healthy, lean young men participated in Study One that was a 22-day single blind, within-subject design. After a control diet (40% fat) for eight days Study One subjects received an Olestra-substituted diet (31% metabolizable fat) for 14 days. Study Two was a randomized parallel-arm clinical trial with 15 healthy, lean and overweight young women. These subjects were randomly assigned to receive a control diet (40% fat), an Olestra-containing diet (31% metabolizable fat) or a reduced-fat diet (31% fat) for 10 weeks. All foods were provided to the subjects, and energy intakes were not restricted. The primary endpoint in both studies was change from baseline in body weight. In Study Two, body composition was measured by dual energy x-ray absorptiometry. In both studies, food intake and nutrient compensation were assessed. Results: In Study One fat substitution by Olestra resulted in a significant 1.7 kg weight loss from baseline. In Study Two, change in body weight and body fat from baseline were statistically significant in all groups, but the group with Olestra lost significantly more weight from baseline (-5.0 kg) than the other two groups. In Study One there was partial compensation for the decreased energy intake, while in Study Two, compensation was seen only for those on the reduced-fat diet. Conclusion: Replacement of 1/3 of dietary fat with Olestra in periods of up to 10 weeks results in weight loss in men and women. abstract_id: PUBMED:11093293 Randomized controlled trial of changes in dietary carbohydrate/fat ratio and simple vs complex carbohydrates on body weight and blood lipids: the CARMEN study. The Carbohydrate Ratio Management in European National diets. Objective: To investigate the long-term effects of changes in dietary carbohydrate/fat ratio and simple vs complex carbohydrates. Design: Randomized controlled multicentre trial (CARMEN), in which subjects were allocated for 6 months either to a seasonal control group (no intervention) or to one of three experimental groups: a control diet group (dietary intervention typical of the average national intake); a low-fat high simple carbohydrate group; or a low-fat high complex carbohydrate group. Subjects: Three hundred and ninety eight moderately obese adults. Measurements: The change in body weight was the primary outcome; changes in body composition and blood lipids were secondary outcomes. Results: Body weight loss in the low-fat high simple carbohydrate and low-fat high complex carbohydrate groups was 0.9 kg (P < 0.05) and 1.8 kg (P < 0.001), while the control diet and seasonal control groups gained weight (0.8 and 0.1 kg, NS). 
Fat mass changed by -1.3kg (P< 0.01), -1.8kg (P< 0.001) and +0.6kg (NS) in the low-fat high simple carbohydrate, low-fat high complex carbohydrate and control diet groups, respectively. Changes in blood lipids did not differ significantly between the dietary treatment groups. Conclusion: Our findings suggest that reduction of fat intake results in a modest but significant reduction in body weight and body fatness. The concomitant increase in either simple or complex carbohydrates did not indicate significant differences in weight change. No adverse effects on blood lipids were observed. These findings underline the importance of this dietary change and its potential impact on the public health implications of obesity. abstract_id: PUBMED:25886710 High dietary protein intake is associated with an increased body weight and total death risk. Background & Aims: High dietary protein diets are widely used to manage overweight and obesity. However, there is a lack of consensus about their long-term efficacy and safety. Therefore, the aim of this study was to assess the effect of long-term high-protein consumption on body weight changes and death outcomes in subjects at high cardiovascular risk. Methods: A secondary analysis of the PREDIMED trial was conducted. Dietary protein was assessed using a food-frequency questionnaire during the follow-up. Cox proportional hazard models were used to estimate the multivariate-adjusted hazard ratio (HR) and 95% confidence intervals (95%CI) for protein intake in relation to the risk of body weight and waist circumference changes, cardiovascular disease, cardiovascular death, cancer death and total death. Results: Higher total protein intake, expressed as percentage of energy, was significantly associated with a greater risk of weight gain when protein replaced carbohydrates (HR: 1.90; 95%CI: 1.05, 3.46) but not when replaced fat (HR: 1.69; 95%CI: 0.94, 3.03). However, no association was found between protein intake and waist circumference. Contrary, higher total protein intake was associated with a greater risk of all-cause death in both carbohydrate and fat substitution models (HR: 1.59; 95%CI: 1.08, 2.35; and HR: 1.66; 95%CI: 1.13, 2.43, respectively). A higher consumption of animal protein was associated with an increased risk of fatal and non-fatal outcomes when protein substituted carbohydrates or fat. Conclusions: Higher dietary protein intake is associated with long-term increased risk of body weight gain and overall death in a Mediterranean population at high cardiovascular risk. abstract_id: PUBMED:12119656 A role for olestra in body weight management. Olestra is a fat substitute made from fatty acids esterified to sucrose and can be used in the preparation of virtually any food made with fat. Foods made with olestra retain the mouthfeel, palatability and satiating effects of their full-fat counterparts without providing any digestible energy. Because olestra provides no energy, it has the potential to be a useful tool in weight loss and weight maintenance. Short-term studies of olestra replacement in foods demonstrate that fat replacement leads to a net reduction in fat intake. When excess total energy is available, fat replacement also reduces total energy intake in lean and obese men and women. In longer-term studies in which olestra is incorporated into the daily diet, there is an incomplete compensation for the fat energy replaced by olestra. 
When overweight men consumed olestra as part of a varied diet over nine months, weight loss continued for the duration of the study, whereas individuals receiving a typical low-fat diet regained most of the initial weight lost. Other studies are underway to examine the usefulness of olestra in long-term weight maintenance following weight loss. Post-marketing surveillance of olestra foods in the United States indicates that substitution of olestra for only 1-2 g of fat d-1 may be sufficient to prevent the average weight gain reported in adults of 0.5-1.0 kg year-1. abstract_id: PUBMED:22222110 Review of the effects of dilution of dietary energy with olestra on energy intake. The non-absorbable substitute for dietary triacylglycerol, olestra, has been marketed in the United States for fifteen years. Olestra is comprised of sucrose with six to eight of its hydroxyl groups forming ester links with long-chain fatty acids. Because olestra is not hydrolyzed by fat-splitting enzymes in the small intestine, it is not absorbed from the small intestine into blood and tissues, and therefore provides no energy that can be utilized by the body. The hedonic properties of olestra with a specific fatty acid composition are similar to those of a triacylglycerol with the same fatty acid composition. Its use by consumers has been restricted by federal regulation to the commercial preparation of savory snack food items, principally as a frying medium for potato chips. An important question about the substitution of olestra for absorbable fat in the diet is whether the consumer will sense that a smaller amount of energy has been ingested. If it is sensed, thereby providing no satiation, then consuming additional energy in later meals will compensate for the removal of absorbable energy from the diet. If it is not sensed at all, then there is no compensation, and the person reduces caloric intake. This review first summarizes studies with olestra that have focused on its effect on the physiology of appetite. In general these studies have demonstrated that olestra does not influence signals of satiation including cholecystokinin and stomach emptying. The review then discusses studies of food consumption in experimental animals in which olestra was substituted for fat in the diet. Rodents have been repeatedly observed to compensate completely for the substitution of olestra for normal fat by eating more total diet. Most studies of the effect of olestra on human satiation have found incomplete or no compensation through additional energy consumption when olestra was substituted for dietary fat. In two clinical studies, however, complete compensation was observed, suggesting that experimental conditions and individual variability influence the ability to sense the substitution of olestra for absorbable fat. There is no evidence that dietary olestra causes consumption of more energy than would have been consumed without olestra in the diet. The data from animals and humans strongly suggest that the rodent is not a satisfactory model for the human in the determination of the extent of compensation by substitution of olestra for dietary fat. abstract_id: PUBMED:1436763 The influence of dietary fat on food intake and body weight. Excessive intake of dietary fat is associated with a number of nutrition-related disorders, including obesity, heart disease, and cancer. The over-consumption of fat may be related to its palatability, high energy density, or physiological effects. 
This article reviews possible reasons why fat intake is high, examines the relationship between diet composition and body weight, and explores potential fat reduction strategies. It is concluded that low-fat or fat-free products could be useful in reducing the percentage of calories derived from fat, although this assertion needs to be further tested in controlled laboratory experiments and validated on a population basis. abstract_id: PUBMED:20157326 Effects of increasing dietary protein and fibre intake with lupin on body weight and composition and blood lipids in overweight men and women. Background: Lupin kernel flour (LKF) is a novel food ingredient that is high in protein and fibre. We have previously shown that partial substitution of refined wheat-derived carbohydrate in bread with protein and fibre from LKF can reduce appetite and energy intake acutely. In addition, several studies have suggested that lupin may reduce cholesterol concentrations and benefit glucose and insulin metabolism. Aim: The aim of this study was to investigate the effects on body weight and composition and blood lipids, glucose and insulin of an ad libitum LKF-enriched diet higher in dietary protein and fibre. Subjects And Methods: A total of 88 overweight and obese men and women were recruited for a 16-week parallel-design randomized controlled trial. Participants replaced 15-20% of their usual daily energy intake with white bread (control) or LKF-enriched bread (lupin) in an ad libitum diet. Measurements of body weight and composition, and fasting blood biochemical measurements were performed at baseline and 16 weeks. The primary analysis included 74 participants (37 per group) who completed the intervention. Results: At baseline, mean (+/-s.d.) body mass index and total cholesterol were 30.6+/-3.5 kg m(-2) and 5.37+/-0.94 mmol l(-1), respectively. Estimated (mean between-group difference (95% confidence interval)) protein (13.7 (2.28, 25.0) g per day) and fibre (12.5 (8.79, 16.2) g per day) intakes were higher during the intervention with lupin than with control. For lupin relative to control, the net effects on body weight (-0.4 (-1.3, 0.6) kg), fat mass (-0.5 (-1.1, 0.2) kg) and percentage (-0.5 (-1.1, 0.1)%), plasma leptin (-1.66 (-4.91, 1.59) ng ml(-1)) and adiponectin (0.20 (-0.73, 1.13) mg l(-1), as well as serum total cholesterol (-0.08 (-0.38, 0.22) mmol l(-1)), triglycerides (0.09 (-0.10, 0.21) mmol l(-1)), glucose (0.10 (-0.11, 0.30) mmol l(-1)) and insulin (0.40 (-1.20, 2.00) mU l(-1)) were not significant. Conclusions: This study does not support the proposal that an ad libitum diet enriched in LKF resulting in moderate changes in both protein and fibre intakes can benefit body weight and composition or fasting blood lipids, glucose and insulin concentrations in overweight men and women with mildly elevated total cholesterol concentrations. abstract_id: PUBMED:32698457 Effects of Pistachio Consumption in a Behavioral Weight Loss Intervention on Weight Change, Cardiometabolic Factors, and Dietary Intake. Epidemiological studies have linked regular nut consumption with lower body mass index and reduced likelihood of weight gain in adulthood. Nuts can displace other foods in the diet, and thus, promote a healthier dietary pattern. The purpose of this study was to examine the effect of pistachio nut consumption in overweight/obese adults. 
This randomized controlled study enrolled non-diabetic overweight/obese adults (n = 100) assigned to a 4-month behavioral weight loss intervention only group (controls) or also prescribed 1.5 oz/day (42 g/day) of pistachios (pistachio group). Outcomes were change in body weight, cardiometabolic factors, and dietary intake. Percent weight change was similar in the two groups (-5.1 [0.5] (mean [SE])% in the control group and -4.9 [0.6]% in the pistachio group, and body mass index (BMI) and waist circumference were reduced in both groups (time effect p ≤ 0.05). The pistachio group (but not the control group) exhibited a significant reduction in both systolic and diastolic blood pressure (time effect p = 0.01). Plasma alpha-carotene, beta-carotene, and lutein concentrations increased significantly in the pistachio group (time effect p < 0.05). Pistachio consumption was associated with increased dietary fiber intake and decreased consumption of sweets. Regular consumption of pistachios was associated with a comparable degree of weight loss, and similar reductions in BMI and waist circumference, in overweight/obese men and women compared to controls, and favorable changes in the diet, in the context of a behavioral weight loss intervention. Answer: Olestra consumption has been studied to determine its association with changes in dietary intake, serum lipids, and body weight, with varying results across different studies. One study found that olestra consumption was not associated with statistically significant changes in serum lipids or body weight. Among moderate/high olestra consumers, total energy and carbohydrate intakes increased, but these changes did not correspond to significant changes in percentages of energy from carbohydrate and total fat (PUBMED:12921885). However, another study reported that participants in the heavy olestra consumption category significantly reduced dietary intake of percentage of energy from fat and saturated fat. Additionally, consumers in the highest category of olestra consumption had statistically significantly reduced total serum cholesterol levels (PUBMED:10999973). In a study examining the effects of replacing dietary fat with olestra for two or ten weeks in healthy men and women, it was found that replacement of dietary fat with olestra resulted in significant weight loss from baseline in both short-term and longer-term settings (PUBMED:12074254). Another study suggested that olestra could be a useful tool in weight loss and weight maintenance, as short-term studies demonstrated that fat replacement leads to a net reduction in fat intake and, when excess total energy is available, also reduces total energy intake. In longer-term studies, there was an incomplete compensation for the fat energy replaced by olestra, and overweight men consuming olestra as part of a varied diet over nine months continued to lose weight (PUBMED:12119656). A review of the effects of dilution of dietary energy with olestra on energy intake concluded that most studies found incomplete or no compensation through additional energy consumption when olestra was substituted for dietary fat. In two clinical studies, however, complete compensation was observed, suggesting that experimental conditions and individual variability influence the ability to sense the substitution of olestra for absorbable fat (PUBMED:22222110). 
In summary, the association of olestra consumption with changes in dietary intake, serum lipids, and body weight appears to be complex and may vary depending on the amount of olestra consumed, individual differences, and the duration of consumption. Some studies have found beneficial effects on serum cholesterol levels and weight loss, while others have not found significant changes.
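As a quick arithmetic check on the units reported in PUBMED:10999973, total cholesterol in mmol/L converts to mg/dL with a factor of roughly 38.67 (based on the molar mass of cholesterol, about 386.7 g/mol), which reproduces the paired values quoted in that abstract:

\[
-0.54\ \text{mmol/L} \times 38.67\ \tfrac{\text{mg/dL}}{\text{mmol/L}} \approx -21\ \text{mg/dL},
\qquad
-0.14\ \text{mmol/L} \times 38.67\ \tfrac{\text{mg/dL}}{\text{mmol/L}} \approx -5\ \text{mg/dL}.
\]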
Instruction: "PHC leadership: are health centres in good hands? Abstracts: abstract_id: PUBMED:37427184 Barriers and facilitators influencing access to and utilization of primary healthcare services in Kurdistan-region, Iraq: a cross-sectional study. The primary healthcare (PHC) idea is intended to become the first line of interaction for members of the public with care needs, as well as a system that views health as a complete state of being rather than only a state of wellbeing. This study aimed to examine the barriers and facilitators influencing the access and utilization of PHC services in Erbil governorate, Kurdistan-region, Iraq, through assessing populations' practice, and satisfaction. Also, explore the relationship between the socioeconomic, demographic, and cultural characteristics of the study population and utilization of PHC services. Methods: This was a cross-sectional study. A questionnaire-based survey was used for data collection. Totally, 2400 individuals have been selected in 6 different districts and the centre of Erbil through the multi-cluster random sampling method. The χ2 test was used for categorical variables, and a one-way ANOVA was used for numerical variables. A P value less than 0.05 was considered statistically significant. Results: The main reason for utilizing PHC centres was preventive purpose (68.1%), then poverty was the second reason (11.33%), and the participants reported that during the presence of urgent cases when they cannot use other health facilities, they use PHC centres (9%). In terms of barriers for utilizing and visiting PHC centres, the participated people stated that most of them, due to inadequate services, did not use and visit PHC centres (83.21%); the second reason was the presence of chronic diseases such as hypertension, which makes them visit private clinics (7.79%) and generally, (31.4%) of the participants were satisfied with the health services nearby. Conclusion: In conclusion, it appears that many people visit PHC facilities, but most of them only do so as a preventative measure, and very few go there to obtain basic medical treatment. Most patients go to private clinics and/or hospitals since those facilities have better access to specialists, better quality and quantity of medications, and laboratory testing. Additionally, combining and strengthening service quality aspects that prioritize a patient-centred environment and an effective service delivery system is a key strategy for the health sector to increase patient satisfaction. abstract_id: PUBMED:23008544 Evaluation of Antenatal Referrals from Health Centres to the Maternity and Children's Hospital in Dammam city, Saudi Arabia. Objective: To evaluate antenatal referral rates from Primary Health Care (PHC) centres to the Maternity and Children's Hospital in Dammam and factors affecting it. Methodology: Four randomly Selected Primary Health Care Centres in Dammam city. A sample of 135 pregnant women referred to the hospital (cases) and 135pregnant women not referred (controls) were randomly selected. A double phase sampling scheme was adopted. Questionnaires regarding the characteristics of the general practitioners (GPs) were distributed to those GPs caring for antenatal patients in the 4 health centres. Results: There was a variation in the referral rates of the five generalpractitioners from 11.5 to 21.2 per 100 antenatal consultation. It was found thatS certain characteristics of the GPs had an important influence on the referral rates. 
High-risk factors in pregnancy were present in some of the controls but they were not referred. Conclusion & Recommendations: There is a variation in the referral rates between the GPs. There is a need to monitor and improve the quality of the antenatal care and the referral process. Better training of the GPs and implementing shared obstetric care are recommended. abstract_id: PUBMED:23638273 "PHC leadership: are health centres in good hands? Perspectives from 3 districts in Malawi". Aim: The study aimed to document the kinds of leadership styles practiced at health centres (H/C) and how these styles can be explained by the contexts, characteristics of the health centre in charge (IC), and subordinate trained health staff (STHS). Methods: A well-researched leadership style model was applied, which included task, relation and change styles. This is a cross-sectional study using self-administered questionnaires in 47 H/C in 3 districts. 347 STHSs (95%) and 46 ICs (98%) responded. Questions explored background data and perceived leadership behaviour. Style items were factor analysed, and bivariate analyses and hierarchical regressions determined how styles could be explained. Results: Two leadership styles were revealed: "Trans" style contained all relation and the majority of task and change items; "Control" style focused on health statistics (Health Management Information System), reporting and evaluation. STHS and IC had a median age/median work experience of 34/5 years and 38.5/2 years, respectively. 48% of IC reported having no management training. CHAM H/Cs had the lowest score on "Control" style. Distance to referral hospital had no impact on style scores. No contexts or STHS characteristics predicted any leadership styles. For ICs, young age and increasing work experience were significant predictors for both styles, while Nurse ICs were negative predictors for "Control" style. Conclusion: Frontline PHC leadership may be forced by situation and context to use a comprehensive style, which could lack the diversity and flexibility needed for effective leadership. The missing associations between staff characteristics and leadership styles might indicate that this group is not sufficiently considered and included in leadership processes in the PHC organization. Leadership competency for the ICs seems not to be based on formal training, but substituted by young age and work experience. Health centre organization could also influence the options for leadership behaviour. In conclusion, this calls for a reassessment of H/C leadership and formal leadership training. abstract_id: PUBMED:35525972 Mental health stigma at primary health care centres in Lebanon: qualitative study. Background: Mental health-related stigma is a global public health concern and a major barrier to seeking care. In this study, we explored the role of stigma as a barrier to scaling up mental health services in primary health care (PHC) centres in Lebanon. We focused on the experiences of Healthcare Providers (HCPs) providing services to patients with mental health conditions (MHCs), the views of policy makers, and the perceptions of stigma or discrimination among individuals with MHCs. This study was conducted as part of INDIGO-PRIMARY, a larger multinational stigma reduction programme.
Methods: Semi-structured qualitative interviews (n = 45) were carried out with policy makers (n = 3), PHC management (n = 4), PHC staff (n = 24), and service users (SUs) (n = 14) between August 2018 and September 2019. These interviews explored mental health knowledge, attitudes and behaviour of staff, challenges of providing treatment, and patient outcomes. All interviews were coded using NVivo and a thematic coding framework. Results: The results of this study are presented under three themes: (1) stigma at PHC level, (2) stigma outside PHC centres, and (3) structural stigma. SUs did not testify to discrimination from HCPs but did describe stigmatising behaviour from their families. Interestingly, at the PHC level, stigma reporting differed among staff according to a power gradient. Nurses and social workers did not explicitly report incidents of stigma but described patients with MHCs as uncooperative, underscoring their internalized negative views on mental health. General practitioners and directors were more outspoken than nurses regarding the challenges faced with mental health patients. Mental health professionals revealed that HCPs still hold implicitly negative views towards patients with MHCs however their attitude has improved recently. Our analysis highlights five layers of stigma affecting SUs. Conclusion: This qualitative study reveals that stigma was still a key concern that affects patients with MHC. SUs reported experiencing overt stigmatising behaviour in the community but less explicit discrimination in a PHC setting. Our findings emphasise the importance of (1) combatting structural stigma through legal reform, (2) addressing interpersonal stigma, (3) committing PHC management to deliver high quality mental health integrated services, and (4) reducing intrapersonal stigma by building public empathy. abstract_id: PUBMED:32800012 Role of laboratory services in primary health center (PHC) outpatient department performance: an Indian case study. Background: In resource-constrained settings, primary health centers (PHCs) are critical for universal health coverage. Laboratory service is one of its important components. While PHC and its performance are focused, its laboratory service has been neglected in developing countries like India. Aim: To determine the role of different level of PHC laboratory services on the overall PHC performance. Methods: Cross-sectional study based on 42 PHCs of Osmanabad District, Maharashtra, India was performed. The study used levels of laboratory services in PHC as independent parameter and PHC outpatient department (OPD) visits per day (≤ 80 versus > 80) as dependent parameter. The control parameters used in the study were number of medical doctors, availability of laboratory technicians (LTs) and population coverage by PHC. Field visit was done to collect data on levels of laboratory services, but secondary source was used for other parameters. The logistic regression analysis was performed in study. Findings: The study found variation in PHC population coverage (10 788-74 702) and OPD visits per day (40-182) across PHC. Strong positive association was observed between levels of laboratory services and number of OPD visits per day in PHC. PHC offering both malaria and tuberculosis in-house testing had higher odds (4.81) of getting more OPDs (≥ 80 OPD visits per day) as compared to PHC not offering in-house testing facility for malaria and tuberculosis. 
This association was stronger in PHCs with lower population coverage (0-75 quartile) as compared to PHCs with higher population coverage (75-100 quartile). Conclusion: Focus on laboratory services is needed to enhance the existing PHCs' performance. Skill upgradation of existing LTs could help in improving the contribution of the existing laboratories to PHC functioning. abstract_id: PUBMED:23012190 Perceptions of primary health care (PHC) doctors on the prevention of coronary heart disease (CHD) in Riyadh, Saudi Arabia. Background: Coronary heart disease (CHD) is the leading cause of death throughout the world. PHC doctors are in a unique position to prevent CHD and promote health in the population. However, the perception of PHC doctors on CHD prevention has not been well documented. Objectives: To explore and examine the perception of PHC doctors on the prevention of CHD. Subject And Method: A questionnaire survey of all PHC doctors attending a continuing medical education (CME) activity in Riyadh city. The questionnaire was designed and piloted with local PHC doctors before being used in this survey. Results: All 77 PHC doctors responded (100%) and almost all of them (97.4%) agreed that the primary prevention of CHD was an essential task. Fifty-two participants (67.53%) confirmed that little attention had been paid to the primary prevention of CHD. While the majority of respondents (71.43%) felt that the primary prevention of CHD was an easy task, a significant minority (23.37%) disagreed. Interestingly, 70 (90.91%) respondents were not aware of any local literature on how to achieve primary prevention of CHD and would like to have the literature made available to them. Finally, participants indicated that the percentage prevalence of CHD risk factors among their patients was high. Conclusion: The findings of this survey confirm a general feeling that the primary prevention of CHD is not being given enough attention. Participants accepted that the primary prevention of CHD was an essential part of their work, but the lack of local literature and research on this vital area was a major concern. abstract_id: PUBMED:30458377 Petroleum hydrocarbon (PHC) uptake in plants: A literature review. Crude oil and its constituents can have adverse effects on ecological and human health when released into the environment. The Canadian Council of Ministers of the Environment (CCME) has developed remedial guidelines and a risk assessment framework for both ecological and human exposure to PHC. One of the assumptions used in the derivation of these guidelines is that plants are unable to take up PHC from contaminated soil and therefore subsequent exposure at higher trophic levels is not a concern. However, various studies suggest that plants are indeed able to take up PHC into their tissues. Consumption of plants is a potential exposure pathway in both ecological (e.g., herbivorous and omnivorous birds, and mammals) and human health risk assessments. If plants can take up PHC, then the current approach for risk assessment of PHC may underestimate exposures to ecological and human receptors. The present review aims to assess whether or not plants are capable of PHC uptake and accumulation. Twenty-one articles were deemed relevant to the study objective and form the basis of this review. Of the 21 primary research articles, 19 reported detectable PHC and/or its constituents in plant tissues. All but five of the 21 articles were published after the publication of the CCME Canada-Wide Standards.
Overall, the present literature review provides some evidence of uptake of PHC and its constituents into plant tissues. Various plant species, including some edible plants, were shown to take up PHC from contaminated soil and aqueous media in both laboratory and field studies. Based on the findings of this review, it is recommended that the soil-plant-wildlife/human pathway should be considered in risk assessments to avoid underestimating exposure and subsequent toxicological risks to humans and wildlife. abstract_id: PUBMED:36196081 ESHRE certification of ART centres for good laboratory and clinical practice. Study Question: Three years after the start of the ESHRE ART Centre Certification (ARTCC) programme, what is the current state of the system, in terms of the interest expressed in it and experiences during the assessment of ART services? Summary Answer: As of 1 December 2021, 25 European ART centres have been involved in the various stages of certification and the most common recommendations from inspectors were the need for documented training, verification of competencies for all staff members, verification of laboratory and clinical performance indicators, implementation of a quality management system and avoidance of overusing ICSI and add-ons. What Is Known Already: European Union (EU) legislation has included ART activities in the EU Tissue and Cells Directives (EUTCDs). Following inspections by national EUTCD authorities, many details regarding documentation, laboratory environment, handling of reproductive cells and tissues, traceability, coding and patient testing have become standardized. However, the EUTCDs do not cover all ART-specific aspects. For this reason, the ARTCC was established to focus on peculiar areas, including relevant staff qualifications, training, continuing professional development, workload, equipment suitability, (non)-evidence-based laboratory and clinical methods used, treatment approaches according to ESHRE guidelines, recommendations and laboratory and clinical key performance indicators. Study Design Size Duration: The article reviews the state-of-the-art of the ESHRE certification of ART centres for good clinical and laboratory practice over an initial 3-year period of operation, including the number of ART centres involved in the different stages of certification and the most common recommendations by inspectors. Participants/materials Setting Methods: In 2016, the ARTCC working group began to establish a new ESHRE ARTCC programme. Since then, the working group has organized 4 preparatory courses and appointed 37 inspectors (19 clinicians, 17 embryologists and one paramedical). A tool to verify compliance with ESHRE recommendations for good laboratory and clinical practice was developed. The ARTCC has been open for applications since September 2018. In Step 1, the applicant enters basic information about the ART centre, staff and ART activities into the application platform. After review and approval, the applicant is given the opportunity to enter Step 2 and provide detailed online checklists on general, laboratory, clinical services and clinical outcomes. Two inspectors (one clinician and one embryologist) independently evaluate the submitted checklists. The condition to proceed to evaluation is a positive mean score (at least 66%) from each of the four checklists. 
In Step 3, a live site visit (or virtual owing to the coronavirus disease 2019 (COVID-19) pandemic) is organized and the inspectors prepare a final report with appropriate recommendations. The application may be rejected at any time if the criteria required to advance to the next stage are not met. The ARTCC programme is currently available for European countries listed in ESHRE internal rules, available on the ESHRE website. The certificate is valid for 3 years, after which an application for renewal can be submitted. Main Results And The Role Of Chance: Over a 3-year period (until 1 December 2021), 63 ART centres from 25 countries started applying through an online platform. So far, 38 applications did not progress owing to lack of completion of the initial application within a 1-year period or because applications came from non-European countries. Of the remaining 25 applications, 8 centres have been inspected and 7 centres have been certified. The most common recommendations given by inspectors to assessed centres were the need for documented training, verification of competencies, skills and continuing professional development for all staff members, verification of laboratory and clinical performance indicators and implementation of a quality management system. The inspectors identified some recurring areas of medically assisted reproduction that deviate from good practice: the overuse of ICSI, preimplantation genetic testing for aneuploidies, freeze-all and other add-ons. They often reported that the clinical outcomes could not be objectively assessed because of non-inclusion of the started cycles or the frequent use of freeze-all cycles. Limitations Reasons For Caution: No major modifications have been made to the application platform and checklists since the early stages of the certification programme. However, in this short time, quite a few changes in clinical practice have occurred, especially concerning the more frequent use of the 'freeze-all' strategy. As a result, problems arose in the evaluation of clinical outcomes. In addition, because of the COVID-19 pandemic, site visits were substituted by the implementation of virtual visits. While this enabled the certification programme to continue, it is possible that certain critical details that would have been noticed during a traditional site visit may have been overlooked. Wider Implications Of The Findings: Regular monitoring of the observations of ARTCC inspectors and analysis of their reports is certainly useful to harmonize inspectors' criteria in the assessment process and to identify chronic deficiencies in clinical and laboratory practice. Non-conformities can be addressed by ESHRE through guidelines and recommendations, as well as through discussion with EU institutions and competent authorities. Study Funding/competing Interests: The ARTCC programme was developed and funded by ESHRE, covering expenses associated with the meetings. The Steering Committee members who are the authors of this article did not receive payments for the completion of this study. The inspectors were remunerated for their work with an honorarium. The authors have no conflicts of interest to declare. abstract_id: PUBMED:34199716 Field Test and Numerical Simulation on the Long-Term Thermal Response of PHC Energy Pile in Layered Foundation. Investigation on the long-term thermal response of precast high-strength concrete (PHC) energy pile is relatively rare. 
This paper combines field experiments and numerical simulations to investigate the long-term thermal properties of a PHC energy pile in a layered foundation. The major findings obtained from the experimental and numerical studies are as follows: First, the thermophysical ground properties gradually influence the long-term temperature variation. For the soil layers with relatively higher thermal conductivity, the ground temperature near the energy pile shows a slowly increasing trend, and the ground temperature response at a longer distance from the center of the PHC pile appears to be delayed. Second, the short- and long-term thermal performance of the PHC energy pile can be enhanced by increasing the thermal conductivity of the backfill soil. When the thermal conductivities of the backfill soil in the PHC pile increase from 1 to 4 W/(m K), the heat exchange capacity of the energy pile can be enhanced by approximately 30%, 79%, 105%, and 122% at 1 day and 20%, 47%, 59%, and 66% at 90 days compared with the water backfill used at the site. However, the influence of the specific heat capacity of the backfill soil in the PHC pile on the short-term or long-term thermal response can be ignored. Furthermore, the variation of the initial ground temperature is also an important factor affecting the short- and long-term heat transfer capacity and ground temperature variation. Finally, the thermal conductivity of the ground has a significant effect on the long-term thermal response compared with the short-term condition, and the heat exchange rates rise by about 5% and 9% at 1 day and 21% and 37% at 90 days as the thermal conductivities of the ground increase by 0.5 and 1 W/(m K), respectively. abstract_id: PUBMED:33955048 Providers' perspectives on the performance of primary healthcare centres in India: The missing link. Background: Primary healthcare centres (PHCs) form the foundation of the Indian public health system, and thus their effective functioning is paramount in ensuring the population's health. The World Health Organisation (WHO) has set six aspects of performance assessment for general health systems, which are hardly applicable to the PHC setup in a low- and middle-income country. The Primary Health Care Performance Initiative (PHCPI) has prescribed a framework with five domains consisting of 36 indicators for primary healthcare performance assessment from a policy point of view. For the assessment to be realistic, it should include inputs from stakeholders involved in care delivery, so this study examines the perspectives of healthcare providers at PHCs in India. Methodology: The authors used qualitative research methodology in the form of responsive evaluations of healthcare providers' interviews to understand the indicators of PHC performance. Results And Conclusion: The study results showed that healthcare providers considered efficient teamwork, opportunities for enhancing provider skills and knowledge, job satisfaction, effective PHC administration, and good community relationships to be indicators of PHC performance. These domains of performance could be considered the 'missing link' in PHC assessment, since they are deemed important by providers and did not coincide with the WHO aspects and the PHCPI performance assessment framework. Answer: The study "PHC leadership: are health centres in good hands?
Perspectives from 3 districts in Malawi" (PUBMED:23638273) aimed to document the types of leadership styles practiced at health centres and how these styles can be explained by the contexts, characteristics of the health centre in charge (IC), and subordinate trained health staff (STHS). The study revealed two leadership styles: "Trans" style, which included relation and the majority of task and change items, and "Control" style, which focused on health statistics, reporting, and evaluation. The study found that 48% of ICs reported having no management training, which suggests a potential gap in formal leadership preparation. The leadership competency for ICs seemed not to be based on formal training but was instead substituted by young age and work experience. The study concluded that frontline PHC leadership may be forced by situation and context to use a comprehensive style, which could lack the diversity and flexibility needed for effective leadership. The missing associations between staff characteristics and leadership styles might indicate that this group is not sufficiently considered and included in leadership processes in the PHC organization. This calls for a reassessment of health centre leadership and formal leadership training. In summary, the study suggests that while health centres may be in the hands of individuals who are capable of leading, there is a need for more structured and formal training in leadership to ensure that these centres are indeed in good hands. The reliance on young age and work experience over formal training could be a concern for the effectiveness and adaptability of leadership in PHCs.
Instruction: Is there a relationship between plasma phenylalanine and cholesterol in phenylketonuric patients under dietary treatment? Abstracts: abstract_id: PUBMED:11522273 Is there a relationship between plasma phenylalanine and cholesterol in phenylketonuric patients under dietary treatment? Objectives: To study the lipid profile in a group of treated phenylketonuric patients (PKU; n = 61) compared with a group of inborn error of intermediary metabolism patients (IEM; n = 22), a group of hyperphenylalaninemic children (HPA; n = 37), and a control group without dietary restriction (n = 41). Design And Methods: Phenylalanine was analyzed by ion exchange chromatography and triglycerides, cholesterol and HDL were determined by standard procedures with the Cobas Integra analyzer. Results: Serum total cholesterol concentrations were significantly lower in PKU patients compared with IEM patients (whose cholesterol daily intake was similar to those of PKU patients), HPA children and the control group. A negative correlation was observed between cholesterol and phenylalanine concentrations in the PKU patients. Conclusions: Our findings support the hypothesis of a relationship between high plasma phenylalanine levels and an inhibition of cholesterogenesis, although the low cholesterol intake of the special diets may also decrease serum cholesterol values. abstract_id: PUBMED:9089802 Plasma arachidonic acid and serum thromboxane B2 concentrations in phenylketonuric children negatively correlate with dietary compliance. The study addresses the relationship of plasma arachidonic acid and thromboxane production with the dietary compliance in treated phenylketonuric patients, whose vegan-like dietary pattern makes them a useful model to evaluate the effects of the near-total avoidance of animal fats. Thirteen treated phenylketonuric children were compared with twelve healthy controls for arachidonic acid intake, plasma fatty acids and platelet thromboxane B2 production, assessed as accumulation of this eicosanoid in serum. The calculated intake of arachidonic acid was lower in phenylketonurics than in controls and this was associated with lower levels in plasma lipids. Plasma arachidonic acid concentrations and serum thromboxane B2 levels correlated with the last 12 months phenylalanine levels, taken as negative indicator of dietary compliance. A direct relationship between plasma arachidonic acid concentration and thromboxane B2 production was observed only in phenylketonuric patients (r = 0.74, P = 0.01). While well-compliant PKU subjects have low arachidonic acid and thromboxane concentrations in plasma, the low compliance with animal food avoidance, evoking higher phenylalanine levels, results in elevation of both plasma arachidonic acid and serum thromboxane B2. This gives support to the hypothesis that the consumption of animal fats may affect the production of arachidonic acid-derived platelet eicosanoids. abstract_id: PUBMED:11486901 Plasma phenylalanine is associated with decreased serum ubiquinone-10 concentrations in phenylketonuria. Decreased serum ubiquinone-10 concentrations is a common condition in patients with phenylketonuria (PKU) under dietary treatment. Our aim was to investigate the implication of the metabolic abnormalities of PKU (low concentrations of tyrosine and high concentrations of phenylalanine) and the effect of treatment with phenylalanine-restricted diets in decreased ubiquinone-10 concentrations in PKU patients. 
We studied 30 PKU patients (age range 5 months to 35 years; median age 7 years) under dietary treatment. Correlation between plasma tyrosine or phenylalanine and serum ubiquinone-10 concentrations was investigated. Daily cholesterol intake was calculated from the data obtained through a dietary questionnaire. The index of dietary control (IDC) was calculated as the average of the medians of plasma phenylalanine concentrations obtained every 6 months in the metabolic control of patients. Negative correlations were observed between serum ubiquinone and the IDC (r = -0.46; p < 0.01) in PKU patients. No correlation was observed between tyrosine or daily cholesterol intake and serum ubiquinone concentrations. After adjustment for daily cholesterol intake by multiple linear regression analysis, for each 113-unit increase in IDC values serum ubiquinone decreased by 0.1 micromol/L. According to our results, the main factor associated with the decreased serum ubiquinone concentrations was a high plasma phenylalanine level. Although daily cholesterol intake seems to be associated with ubiquinone concentrations, it may not be relevant if we take into account the low intake of cholesterol in treated PKU patients. abstract_id: PUBMED:3756210 Phenylalanine ammonia-lyase immobilized in microcapsules for the depletion of phenylalanine in plasma in a phenylketonuric rat model. Microencapsulation of the enzyme phenylalanine ammonia-lyase was developed for in vivo depletion of systemic phenylalanine in phenylketonuric rats. Compared to normal rats, systemic phenylalanine blood levels in phenylketonuric rats were increased 15-20-fold. Daily oral administration of 1 unit of phenylalanine ammonia-lyase-loaded artificial cells to phenylketonuric rats lowered the systemic phenylalanine level to 58% ± 18% (mean ± SD) in 7 days (P < 0.010), while 5 units lowered the systemic phenylalanine level to 25% ± 8%. Five units of the immobilized enzyme lowered the systemic phenylalanine level to normal levels within 6 days. Treated phenylketonuric rats showed no signs of abnormal behavior or weight loss compared with untreated phenylketonuric rats. The immobilized enzyme within artificial cells is therefore protected against low gastrointestinal pH and proteolytic enzymes. abstract_id: PUBMED:9558743 Plasma arachidonic acid and serum thromboxane B2 concentrations in phenylketonuric children are correlated with dietary compliance. Phenylketonurics display lower arachidonic acid levels in plasma lipids, reflecting the lower intake due to the dietary treatment poor in phenylalanine-rich animal foods. Plasma arachidonic acid levels and serum thromboxane B2 concentrations have been measured in 13 treated phenylketonuric children and compared with those of 12 healthy controls. A direct relationship between plasma arachidonic acid and thromboxane B2 concentrations has been observed only in phenylketonurics, whose plasma arachidonic acid status correlated negatively with their dietary compliance. These results suggest that the low consumption of animal fats may contribute to modulating the production of arachidonic acid-derived platelet eicosanoids. abstract_id: PUBMED:10077654 A model of human phenylalanine metabolism in normal subjects and in phenylketonuric patients. The derivation of a quantitative model of phenylalanine metabolism in humans is described.
The model is based on the kinetic properties of pure recombinant human phenylalanine hydroxylase and on estimates of the in vivo rates of phenylalanine transamination and protein degradation. Calculated values for the steady-state concentration of blood phenylalanine, rate of clearance of phenylalanine from the blood after an oral load of the amino acid, and dietary tolerance of phenylalanine all agree well with data from normal as well as from phenylketonuric patients and obligate heterozygotes. These calculated values may help in the decision about the degree of restriction of phenylalanine intake that is necessary to achieve a satisfactory clinical outcome in classical patients and in those with milder forms of the disease. abstract_id: PUBMED:1617863 Effect of dietary aspartame on plasma concentrations of phenylalanine and tyrosine in normal and homozygous phenylketonuric patients. Six normal subjects each ingested a single 12-oz can of a diet cola (Diet Coke) providing 184 mg aspartame (APM), of which 104 mg is phenylalanine (Phe), and, on another occasion, a single 12-oz can of regular cola (Coke Classic). Neither cola significantly affected plasma concentrations of Phe or tyrosine over the three-hour postingestion study period. Each of five homozygous phenylketonuric (PKU) subjects (ages 11, 16, 17, 21, and 23 years) ingested a single 12-oz can of the same diet cola. In these five subjects (three with classic PKU and two with hyperphenylalinemia), the increase in plasma Phe concentrations varied from 0.26 mg/dL to 1.77 mg/dL two or three hours after ingestion (baseline levels, 5.04 to 17.2 mg/dL). Tyrosine concentrations did not differ significantly from baseline levels. The data indicate that ingestion of dietary Phe, as supplied in a single can of diet cola, is readily handled in both normal and PKU subjects. The small increases in plasma Phe concentrations in the homozygous PKU patients are not considered clinically significant. abstract_id: PUBMED:19429389 Effect of short- and long-term exposition to high phenylalanine blood levels on oxidative damage in phenylketonuric patients. Phenylketonuria is the most frequent disturbance of amino acid metabolism. Treatment for phenylketonuric patients consists of phenylalanine intake restriction. However, there are patients who do not adhere to treatment and/or are not submitted to neonatal screening. These individuals are more prone to develop brain damage due to long-lasting toxic effects of high levels of phenylalanine and/or its metabolites. Oxidative stress occurs in late-diagnosed phenylketonuric patients, probably contributing to the neurological damage in this disorder. In this work, we aimed to compare the influence of time exposition to high phenylalanine levels on oxidative stress parameters in phenylketonuric patients who did not adhere to protein restricted diet. We evaluated a large spectrum of oxidative stress parameters in plasma and erythrocytes from phenylketonuric patients with early and late diagnosis and of age-matched healthy controls. Erythrocyte glutathione peroxidase activity and glutathione levels, as well as plasma total antioxidant reactivity were significantly reduced in both groups of patients when compared to the control group. 
Furthermore, protein oxidative damage, measured by carbonyl formation and sulfhydryl oxidation, and lipid peroxidation, determined by malondialdehyde levels, were significantly increased only in patients exposed for a long time to high phenylalanine concentrations, compared to early diagnosed patients and controls. In conclusion, exposition to high phenylalanine concentrations for a short or long time results in a reduction of non-enzymatic and enzymatic antioxidant defenses, whereas protein and lipid oxidative damage only occurs in patients with late diagnosis. abstract_id: PUBMED:8834980 Early breastfeeding is linked to higher intelligence quotient scores in dietary treated phenylketonuric children. Strict control of phenylalanine intake is the main dietary intervention for phenylketonuric children. Whether other dietary-related factors improve the clinical outcome for treated phenylketonuric children in neurodevelopmental terms, however, remains unexplored. We retrospectively compared the intelligence quotient (IQ) score of 26 school-age phenylketonuric children who were either breastfed or formula fed for 20-40 days prior to dietary intervention. Children who had been breastfed as infants scored significantly better (IQ advantage of 14.0 points, p = 0.01) than children who had been formula fed. A 12.9 point advantage persisted also after adjusting for social and maternal education status (p = 0.02). In this sample of early treated term infants with phenylketonuria there was no associated between IQ scores and the age at treatment onset and plasma phenylalanine levels during treatment. We conclude that breastfeeding in the prediagnostic stage may help treated infants and children with phenylketonuria to improve neurodevelopmental performance. abstract_id: PUBMED:2791436 Triglycerides, cholesterol, HDL, LDL, and VLDL cholesterol in serum of phenylketonuric children under dietary control. Phenylketonuria is currently treated by a special diet to avoid elevated blood concentration of the essential amino acid phenylalanine. The authors examined 20 phenylketonuric children in the Inborn Error of Metabolism Department of Institute of Child Health in Athens. The children had simultaneous clinical and laboratory investigations. Blood cholesterol levels of these children were low in comparison with the levels of healthy children of the same age, but the triglyceride levels were higher as a result of a special diet containing a large amount of carbohydrates. Because of the triglyceride elevation the authors recommend supplementary vegetable fat intake, such as in olive oil, for their patients' dietary control. Answer: Yes, there is a relationship between plasma phenylalanine and cholesterol in phenylketonuric (PKU) patients under dietary treatment. A study found that serum total cholesterol concentrations were significantly lower in PKU patients compared with other groups, including patients with inborn errors of intermediary metabolism (IEM), hyperphenylalaninemic children (HPA), and a control group without dietary restriction. Additionally, a negative correlation was observed between cholesterol and phenylalanine concentrations in the PKU patients. This supports the hypothesis that high plasma phenylalanine levels may inhibit cholesterogenesis, although the low cholesterol intake from the special diets PKU patients follow may also contribute to decreased serum cholesterol values (PUBMED:11522273).
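Illustrative note (not part of the cited abstracts): PUBMED:10077654 above describes a quantitative model of phenylalanine metabolism, but its equations are not reproduced here. A minimal steady-state sketch of the kind of mass balance such a model implies, assuming Michaelis-Menten kinetics for phenylalanine hydroxylase (PAH) and first-order transamination, with R_diet and R_deg denoting phenylalanine inflow from diet and from protein degradation, could be written as:

```latex
% Hypothetical steady-state balance for blood phenylalanine (not the published model)
% inflow (diet + protein degradation) = outflow (PAH hydroxylation + transamination)
R_{\mathrm{diet}} + R_{\mathrm{deg}}
  = \frac{V_{\max}\,[\mathrm{Phe}]}{K_m + [\mathrm{Phe}]} + k_{\mathrm{t}}\,[\mathrm{Phe}]
```

Solving such a balance for [Phe] with a reduced V_max reproduces, qualitatively, the elevated steady-state blood phenylalanine and the lower dietary tolerance expected in phenylketonuria.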
Instruction: Should imaging at stress always be followed by imaging at rest in Tc-99m MIBI SPECT? Abstracts: abstract_id: PUBMED:26072421 Early detection of radiation-induced heart disease using (99m)Tc-MIBI SPECT gated myocardial perfusion imaging in patients with oesophageal cancer during radiotherapy. Background And Purpose: The primary aim of this prospective study was to investigate the value of (99m)Tc-methoxyisobutylisonitrile (MIBI) single photon emission computed tomography (SPECT) gated myocardial perfusion imaging (GMPI) in the detection of radiation-induced heart disease (RIHD) as early as during radiotherapy (RT) for oesophageal cancer (EC). The second aim was to analyse the correlation between cardiac toxicity and the dose-volume factors. Materials And Methods: The (99m)Tc-MIBI SPECT GMPI was performed both pre-RT and during RT (40Gray). The results of the SPECT were quantitatively analysed with QGS/QPS software and read by two experienced nuclear medicine physicians. The correlation between the changes in the SPECT parameters and the RT dosimetric data was analysed. Results: Eighteen patients with locally advanced EC were enrolled in the study. Compared with the baseline, the imaging during RT showed not only significant decreases in the wall motion (WM) (1/20 segments), wall thickening (WT) (2/20 segments), end-diastolic perfusion (EDP) (5/20 segments) and end-systolic perfusion (ESP) (8/20 segments) (p<0.05) but also a significant increase in the heart rate (74.63±7.79 vs 81.49±9.90, p=0.036). New myocardial perfusion defects were observed in 8 of the 18 patients. The V37-V40 was significantly higher (p<0.05) in the patients with the new perfusion defects during RT than in the patients who did not exhibit these defects. Conclusions: Radiotherapy for EC induces cardiac damage from an early stage. (99m)Tc-MIBI SPECT GMPI can detect the occurrence of cardiac impairment during RT. The WM, WT, EDP and ESP may be valuable as early indicators of RIHD. The percentage of the heart volume that receives a high dose is an important factor that is correlated with RIHD. abstract_id: PUBMED:9306146 Should imaging at stress always be followed by imaging at rest in Tc-99m MIBI SPECT? A proposal for a selective referral and imaging strategy. Objectives: We addressed the question whether in patients with cardiac chest pain referred for stress myocardial perfusion scintigraphy, Tc-99m MIBI SPECT stress imaging should always be followed by a rest imaging procedure. Background: Using Tc-99m MIBI imaging a stress rest sequence is usually performed implying that the resting study always follows the stress study irrespective of the results of the stress study. As a normal stress study would eliminate a subsequent resting study, it appears desirable to potentially define certain subsets of patients in whom a normal stress study can be expected in order to determine a more selective referral approach to the nuclear medicine department. The consequences of such a more streamlined approach would less impose on the logistics of the department of nuclear medicine, with decrease of investigation time, radiation dose, and costs in a time of retrenchment in the medical sector. Methods: A consecutive series of 460 patients (mean age 58.2 years) was studied who were stratified to 269 patients without prior myocardial infarction, and to 191 patients with documented evidence of a previously sustained myocardial infarction. 
All patients underwent Tc-99m MIBI SPECT imaging according to a two-day stress-rest protocol. Results: Patients with and without a previous myocardial infarction showed suboptimal overall predictive accuracies for the exercise electrocardiograms (58% and 60%, respectively). In the total group of 460 patients, 94 (20%) patients showed a normal stress-rest Tc-99m MIBI SPECT; this occurred in 86/269 (32%) patients without a previous myocardial infarction and in only 8/191 (4%) patients with a previous myocardial infarction. Conclusions: Patients with a stress defect at Tc-99m MIBI SPECT imaging should always undergo a resting SPECT study irrespective of the clinical and stress electrocardiographic findings. As patients without a previous myocardial infarction had a normal stress SPECT study in almost one-third (32%) of patients compared to only 4% in patients with a previously myocardial infarction, it may be useful to employ different referral and imaging strategies i.e., a stress-only versus a stress-rest procedure. To schedule referring patients differently according to the presence or absence of a previously sustained myocardial infarction may be cost-saving, less demanding for the nuclear medicine personnel, and patient-convenient. In addition, a stress-only imaging procedure reduces radiation exposure to the individual patient. abstract_id: PUBMED:11062096 Usefulness of the quantification of (99M)Tc-MIBI uptake in breast neoplasms in the preoperative assessment of tumor aggressiveness Unlabelled: (99m)Tc-MIBI has been proposed as an imaging diagnostic method in a large variety of human malignant tumors. At present, the mechanism by which (99m)Tc-MIBI is uptaken and concentrated by the malignant cells is not totally known. Some mammary neoplasms do not show any uptake of (99m)Tc-MIBI. This study aims to determine if there is any correlation between the uptake of (99m)Tc-MIBI by the tumor and the different histopathological parameters involved in tumoral aggressiveness. To do so, we have studied 100 patients with breast cancer. All of them underwent a breast scintimammography with (99m)Tc-MIBI with semiquantitative analysis by means of a tumor-to-background ratio calculated in every projection. After surgery, an experienced pathologist determined tumor size, axillary lymph node metastases, histological grade (Scarff Bloom Richardson) (SCBR), nuclear grade, mitotic index, presence of cellular atypia and estrogen and progesterone receptor expression. Results: A statistically significant correlation (p < 0.005) has been found between tumor-to-background (T/B) ratios of (99m)Tc-MIBI uptake and tumor SCBR histological grade. A correlation between (99m)Tc-MIBI uptake and the mitotic index, cellular atypia and nuclear grade has also been found. No correlation was found in our study with tumor size, hormone receptor expression or axillary lymph node metastases. Conclusions: (99m)Tc-MIBI uptake in breast cancer is correlated with the tumoral differentiation grade: the smaller the tumoral cellular differentiation (greater aggressiveness), the greater the uptake. On the other hand, no correlation was found between the uptake of (99m)Tc-MIBI and the classical pathological parameters that define tumoral aggressiveness, such as size and axillary lymph node metastasis. abstract_id: PUBMED:37335313 99m Tc-MIBI SPECT/CT Evaluation of a Renal Collision Tumor. Abstract: Preoperative differentiation of oncocytomas from renal cell carcinoma (RCC) is often challenging. 
99m Tc-MIBI imaging could play a potential role in differentiating oncocytoma from RCC, which in turn could guide surgical decision-making. We present the use of 99m Tc-MIBI SPECT/CT to characterize a renal mass in a 66-year-old man with a complex medical history, including history of bilateral oncocytomas. 99m Tc-MIBI SPECT/CT showed features suspicious of a malignant tumor, which was confirmed postnephrectomy as a chromophobe and papillary RCC collision tumor. This case supports 99m Tc-MIBI imaging for preoperative differentiation of benign versus malignant renal tumors. abstract_id: PUBMED:18040614 Delineation of target volume for radiotherapy of high-grade gliomas by 99m Tc-MIBI SPECT and MRI fusion. Background And Purpose: Computed tomography (CT) and magnetic resonance imaging (MRI) are traditionally used for treatment planning of high-grade glioma. 99m Tc-methoxy-isobutyl-isonitrile (MIBI) single-photon emission computed tomography (SPECT) showed high sensitivity and specificity in literature series. In the present study, it was investigated how the information provided by 99m Tc-MIBI SPECT and MRI fusion could affect target delineation for radiotherapy of high-grade glioma. Patients And Methods: 21 patients with high-grade glioma were studied by MRI and 99m Tc-MIBI SPECT imaging. The gross tumor volume (GTV) was outlined on MRI (MRI-GTV) and SPECT images (SPECT-GTV). Three additional volumes were analyzed: the (MRI+SPECT)-GTV representing the whole amount of MRI-GTV plus SPECT-GTV, the (MRI&SPECT)-GTV identified by the overlapping region of MRI-GTV and SPECT-GTV, and the (SPECT/MRI)-GTV identified by the extension of SPECT-GTV outside MRI-GTV. Results: MRI contrast-enhanced and 99m Tc-MIBI SPECT-positive lesions were found in all 21 patients. The average SPECT-GTV was slightly larger than the average MRI-GTV, with greater difference for resected than for unresected cases. The average increment of (MRI+SPECT)-GTV compared to MRI-GTV was 33%, being significantly higher for resected than for unresected cases (p = 0.006). Conclusion: The fusion of 99m Tc-MIBI SPECT and MRI significantly affected the delineation of the target volume identified by MRI alone. abstract_id: PUBMED:9306147 Implementation of technetium-99m MIBI SPECT imaging guidelines: optimizing the two day stress-rest protocol. Objectives: In a previous study in 460 patients, we found that in patients with suspected or known coronary artery disease undergoing stress-rest technetium-99m sestamibi (MIBI) SPECT myocardial perfusion imaging, rest SPECT imaging could be withhold in approximately 20% of patients because of a completely normal stress study. The present study was set up to evaluate the consequences of the implementation of this finding in a subsequent population of patients, and to set standards for the variety of protocols now used for MIBI SPECT imaging. Methods: Within a period of 4 months, 235 consecutive patients referred for MIBI SPECT scintigraphy were studied. All patients had stable cardiac chest pain and underwent symptom-limited exercise MIBI SPECT perfusion imaging. The stress SPECT images were reconstructed and evaluated immediately after acquisition of the images. In case of a clearly normal stress SPECT study, rest imaging was cancelled. Results: Twenty-six of 235 patients (11%) had a completely normal stress MIBI SPECT study and the rest SPECT imaging procedure could be subsequently cancelled. 
In 20 patients (9%) the stress SPECT was inconclusive, and in 189 (80%) of patients stress imaging was clearly abnormal. In the first month of the study, the nuclear medicine physicians and cardiologists would interprete only 6% of the stress images as normal, while this number increased to 13% after 9 weeks, with a mean of 11% for the whole investigation period of 4 months. Conclusion: In patients undergoing stress MIBI SPECT imaging, it was found justified to cancel rest MIBI SPECT imaging in at least 11% of patients because of a completely normal stress SPECT. As 9% of the images were inconclusive, the number of normal stress images could theoretically increase to 20% if reliable measures are taken to improve reading accuracy. This number is in close agreement with the number of normal stress studies previously reported by our institution and would lead to a considerable reduction of radiation dose, costs, and increased convenience for an important subset of patients. abstract_id: PUBMED:24335008 Strategy of operative treatment of hyperparathyroidism using US scan and (99m)Tc-MIBI SPECT/CT. We evaluated the efficacy of technetium-sestamibi ((99m) Tc-MIBI) SPECT/CT for planning parathyroidectomy in cases with primary hyperparathyroidism (pHPT), comparing with planar scintigraphy and ultrasound (US), in an aim to establish the proper surgical strategy according to the preoperative imaging studies. A retrospective review of consecutive 75 pHPT patients who had been operated on was conducted. The results of preoperative imaging modalities and the operative finding were analyzed. Seven cases were found to have multiple hyperplastic glands, and no responsible gland was found in three cases. Four cases underwent only US scan for preoperative imaging. Remaining 61 cases were found to have single adenoma, and were included in the evaluation of localization imaging. US scan, (99m) Tc-MIBI planar scan and (99m) Tc-MIBI SPECT/CT showed accurate localization in 77.0% (47/61), 75.4% (46/61) and 88.5% (46/52) of the evaluable cases, respectively. US and (99m) Tc-MIBI planar scan demonstrated consistent result in 42 cases (68.9%), and those cases showed accurate localization in 90.5% (38/42). When both US and (99m) Tc-MIBI SPECT/CT was consistent, all 37 lesions had been correctly indicated. No clinico-pathological features were suggested to influence in demonstrating the localization, other than only (99m) Tc-MIBI SPECT/CT exhibited 100% sensitivity in ectopic glands. Combination of US and (99m) Tc-MIBI SPECT/CT certainly contributes to the planning of minimally invasive operation in cases with pHPT by indicating correct localization of single adenoma. abstract_id: PUBMED:22152075 Technetium-99m-labelled HL91 and technetium-99m-labelled MIBI SPECT imaging for the detection of ischaemic viable myocardium: a preliminary study. Purpose: The assessment of myocardial viability has become an important aspect of the diagnostic and prognostic work-up of patients with coronary artery disease. Technetium-99m labelled sestamibi ((99m)Tc-MIBI) myocardial perfusion imaging may underestimate the viability of ischaemic myocardium. Technetium-99m labelled 4,9-diaza-3,3,10,10-tetramethyldodecan-2,11-dione dioxime ((99m)Tc-HL91) is a hypoxia-avid agent which can identify acutely ischaemic viable myocardium in a canine model using a standard gamma camera. 
The aim of this study was to evaluate uptake character of ischaemic viable myocardium and diagnostic performance of single-photon emission computed tomography (SPECT) imaging by (99m)Tc-HL91 and (99m)Tc-MIBI in detecting ischaemic viable myocardium in coronary heart disease. Methods: A total of 41 patients with coronary artery disease were recruited from March 2008 to May 2009. For detecting ischaemic viable myocardium, SPECT imaging by (99m)Tc-HL91 and (99m)Tc-MIBI were performed in all patients before coronary revascularization. Six patients with single ischaemic myocardial segment received a 2-day SEPCT/CT imaging protocol and the uptake of (99m)Tc-HL91 in ischaemic myocardium was quantitatively analysed. The remaining 35 patients received a 1-day (99m)Tc-HL91 and (99m)Tc-MIBI SPECT imaging protocol. Resting (99m)Tc-MIBI myocardial perfusion imaging in 3-18 months after revascularization was used as the standard methodology to evaluate the myocardial viability. Results: In 41 patients, 66 ischaemic myocardial segments were proven to be viable and 12 to be necrotic by resting (99m)Tc-MIBI myocardial perfusion imaging after coronary revascularization. Furthermore, 60 viable segments with negative uptake of (99m)Tc-MIBI showed positive uptake of (99m)Tc-HL91. The remaining six viable segments and 12 necrotic segments showed both negative uptake of (99m)Tc-HL91 and (99m)Tc-MIBI. The sensitivity, specificity, accuracy, Younden Index, positive predictive value and negative predictive value for evaluating ischaemic viable myocardium were 90·9%, 100%, 92·3%, 90·9%, 100% and 66·7%, respectively. Ischaemic viable myocardium had the negative (99m)Tc-MIBI uptake and positive (99m)Tc-HL91 uptake, which demonstrated a mismatched uptake character. Quantitative analysis indicated the uptake of (99m)Tc-HL91 in viable myocardium was increasing in the first 1-3 h and remained stable at the 3-4 h after injection. Conclusion: Functional SPECT imaging with (99m)Tc-HL91 and (99m)Tc-MIBI can be used to detect the seriously ischaemic but viable myocardium with a mismatched uptake character. The uptake of (99m)Tc-HL91 in the viable myocardium reached a stable level at 3-4 h after injection. abstract_id: PUBMED:12609070 Evaluation of (99m)Tc-MIBI myocardial perfusion imaging with intravenous infusion of adenosine triphosphate in diagnosis of coronary artery disease. Objective: To evaluate the feasibility, safety and diagnostic accuracy of pharmacologic stress of (99m)Technetium-MIBI single-photon emission computed tomography (SPECT) with intravenous adenosine triphosphate (ATP) in patients with suspected coronary artery disease. Methods: The study group included 263 patients who were suspected of having coronary artery disease. All patients underwent (99m)Tc-MIBI myocardial perfusion imaging with ATP infusion (0.16 mg/kg body weight per min for 5 min). 20 mCi of (99m)Tc-MIBI were injected 3 minutes after the start of ATP infusion. Myocardial SPECT images were obtained 60 minutes later. Then, two days later, 20 mCi of (99m)Tc-MIBI were administered at rest and myocardial SPECT was repeated. 51 patients also underwent coronary angiography within two weeks for evaluation of sensitivity and specificity of ATP-myocardial perfusion imaging in detection of coronary artery disease. The occurrence of cardiac and non-cardiac adverse effects was carefully monitored during and after intravenous ATP infusion. Results: The ATP infusion protocol was completed in all patients. 
Although 59% of the patients had various kinds of adverse effects, most of them were mild. No patient required aminophylline. The most severe adverse effect was second-degree type II atrioventricular block (4/263), but all events were transient. The sensitivity and specificity of ATP-myocardial perfusion imaging were 97% and 82%, respectively. Conclusions: It is shown that (99m)Technetium-MIBI SPECT with intravenous ATP is a safe and feasible technique for detecting coronary artery disease in patients unable to perform the exercise test. abstract_id: PUBMED:29871117 Research on intraoperative localization of the parathyroid gland using nanocarbon imaging combined with 99m Tc-MIBI. Objective: To explore the clinical application of nanocarbon (NC) suspension combined with low-dose 99m Tc-MIBI for parathyroid localization in hyperparathyroidism surgery. Method: Forty-four patients with secondary hyperparathyroidism (SHPT) in the department of head and neck surgery of Renji Hospital, affiliated to Shanghai Jiaotong University School of Medicine, were enrolled and randomized into 3 groups: low-dose 99m Tc-MIBI+NC group Ⅰ, high-dose 99m Tc-MIBI+NC group Ⅱ, and control group Ⅲ. We compared the levels of PTH and serum calcium before and after operation, as well as the intraoperative amount of radioactive isotope in the parathyroid gland; the t-test and ANOVA were used for statistical analysis. Result: After operation, the PTH levels in groups Ⅰ, Ⅱ, and Ⅲ were (23.8±32.4) ng/L, (15.8±18.2) ng/L, and (90.1±139.4) ng/L, respectively (group Ⅰ vs Ⅲ, P<0.05; group Ⅱ vs Ⅲ, P<0.05; group Ⅰ vs Ⅱ, P>0.05). The intraoperative amount of radioactive isotope in the parathyroid gland did not differ significantly among groups Ⅰ, Ⅱ, and Ⅲ. Conclusion: Nanocarbon combined with 99m Tc-MIBI has important localization value in SHPT surgery, and low-dose 99m Tc-MIBI can reduce cost and radiation while achieving the same outcome for patients. Answer: No, imaging at stress does not always need to be followed by imaging at rest in Tc-99m MIBI SPECT. A study addressed this question by examining whether a rest imaging procedure should always follow stress myocardial perfusion scintigraphy in patients with cardiac chest pain referred for Tc-99m MIBI SPECT stress imaging. The study found that a normal stress study could eliminate the need for a subsequent resting study. It was suggested that certain subsets of patients could be identified in whom a normal stress study can be expected, which would allow for a more selective referral approach to the nuclear medicine department. This approach could reduce investigation time, radiation dose, and costs, and be less demanding on the logistics of the department of nuclear medicine (PUBMED:9306146). The study found that in patients without a previous myocardial infarction, a normal stress SPECT study occurred in almost one-third (32%) of patients, compared to only 4% in patients with a previous myocardial infarction. Therefore, it may be useful to employ different referral and imaging strategies, such as a stress-only versus a stress-rest procedure, based on the presence or absence of a previously sustained myocardial infarction. This could be cost-saving, less demanding for nuclear medicine personnel, and more convenient for patients. Additionally, a stress-only imaging procedure reduces radiation exposure to the individual patient (PUBMED:9306146). In a subsequent study, it was found justified to cancel rest MIBI SPECT imaging in at least 11% of patients because of a completely normal stress SPECT.
The number of normal stress images could theoretically increase to 20% if reliable measures are taken to improve reading accuracy. This would lead to a considerable reduction of radiation dose, costs, and increased convenience for an important subset of patients (PUBMED:9306147).
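Illustrative note (a sketch, not code from the cited studies): the referral strategy summarised in the answer above, booking a stress-rest sequence up front only for patients with a previous myocardial infarction and cancelling rest imaging whenever the stress study is read as clearly normal, amounts to a simple decision rule. The function and category names below are assumptions chosen for clarity.

```python
def scheduling_strategy(prior_mi: bool) -> str:
    # Patients with a previous myocardial infarction almost never have a
    # normal stress study (4% vs 32% in PUBMED:9306146), so a stress-rest
    # sequence is scheduled up front; others are booked stress-only and
    # recalled for rest imaging only if needed.
    return "stress-rest" if prior_mi else "stress-only, rest on demand"


def needs_rest_imaging(stress_result: str) -> bool:
    # Rest imaging is cancelled only when the stress study is clearly
    # normal; inconclusive or abnormal stress studies are still followed
    # by a rest acquisition.
    return stress_result != "normal"


if __name__ == "__main__":
    print(scheduling_strategy(prior_mi=False))               # stress-only, rest on demand
    print(needs_rest_imaging(stress_result="normal"))        # False
    print(needs_rest_imaging(stress_result="inconclusive"))  # True
```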
Instruction: Is intraorbital fat extraorbital? Abstracts: abstract_id: PUBMED:19608067 Is intraorbital fat extraorbital? Results of cross-sectional anatomy of the lower eyelid fat pads. Background: Prominent fat pads of the lower eyelids are a frequent complaint prompting patients to seek cosmetic surgery. Previous studies have stated that these fat pads exist as three compartments in the lower eyelids. An accepted anatomic concept is that these lower eyelid fat pads are intraorbital. Objective: This study evaluates the possibility of distinct compartments of lower eyelid fat pads not being different from each other, but rather being separate from posterior intraorbital fat. Methods: This study used eight hemifacial cadaver dissections. Methylene blue injections were used to stain the middle and medial fat pads on each side. Sagittal cross-sections were obtained to visualize the area of fat staining. An axial cross-section of one additional specimen was obtained as confirmatory evidence. Results: Each specimen showed that lower eyelid fat pads stain as discrete fat compartments that are not in continuity with the posterior intraorbital fat. Their anterior boundary is the orbital retaining ligament at the anterior aspect of the inferior orbital rim. Their posterior boundary is an insertion point on the floor of the orbit in line with the midpoint of the globe, called the circumferential intraorbital retaining ligament. Conclusions: Lower eyelid fat pads are not in continuity with posterior intraorbital fat. They can accurately be described as partially intraorbital and partially extraorbital in location. This information augments our previous understanding of the anatomy of the lower eyelid fat pads and is important for studies that attempt to determine their etiology. abstract_id: PUBMED:18330428 Frontozygomatic approach to intraorbital tumors. We removed 12 intraorbital tumors (5 schwannomas, 3 meningiomas, 2 cavernomas, 1 pleomorphic adenoma, and 1 neuroblastoma) using the frontozygomatic approach. No patients died. Postoperatively, 1 patient developed transient ptosis, and 3 patients had mild enophthalmos. Two patients with a meningioma developed transient worsening of their visual acuity and visual field. The frontozygomatic approach for surgical treatment of intraorbital tumors provides a wide visual field exposing the entire optic nerve. This approach is indicated for large intraorbital tumors, tumors affecting the optic nerve or orbital apex, intraorbital tumors that have extended into the intracranial cavity, and intracranial tumors that have extended into the orbit. The operative procedure for an intraorbital tumor is determined by the location of the lesion and by the direction of its growth. The procedure is applicable to all intraorbital tumors. It reduces discomfort for surgeons while providing a relatively wide surgical field. abstract_id: PUBMED:24350735 Fat necrosis appearing as intraorbital tumour: case report. A 33-year-old male presented with an extremely rare case of intraorbital fat necrosis. A magnetic resonance imaging scan showed a 10-mm mass lesion within the right lateral rectus muscle. Surgical removal was performed. Histological analysis showed diffuse adipose cells surrounded by macrophages. Fat necrosis was diagnosed. abstract_id: PUBMED:24799275 Intraorbital and intracanalicular ophthalmic artery aneurysms. Literature review and report of a case.
This paper reviews the literature on intraorbital ophthalmic artery aneurysms, discussing presentation, aetiology and treatment options. In addition, we report a case of intraorbital ophthalmic artery aneurysm with acute onset of headache, visual loss and right eye ophthalmoplegia. abstract_id: PUBMED:19096667 Spontaneous intraorbital hemorrhage: a case report. Intraorbital hemorrhage is a rare clinical condition caused by orbital trauma, surgery around the orbit, intraorbital vascular abnormalities, and neoplasm. It has been reported to occur spontaneously without any known cause, and in association with orbital pseudotumor in very few cases. A 59-year-old female patient was admitted with sudden onset of severe exophthalmos and pain in the left eye. Orbital CT and MR imaging suggested hemorrhage in the upper part of the retrobulbar area of the left orbit. Cerebral angiography was performed to rule out any possible vascular abnormalities. On the left carotid cerebral angiography, the run-off of the distal ophthalmic artery was not seen and engorgement of the supraophthalmic artery was noted. Systemic administration of corticosteroids did not improve the clinical status, so a craniectomy was performed and the retrobulbar hematoma was removed, after which the clinical symptoms and signs improved. The authors report a case of spontaneous intraorbital hemorrhage with clinical features similar to those of orbital pseudotumor, requiring surgical decompression. abstract_id: PUBMED:29560121 Efficacy and safety of carbon-ion radiotherapy for lacrimal gland carcinomas with extraorbital extension: a retrospective cohort study. Purpose: To evaluate the efficacy and safety of carbon-ion radiotherapy (CIRT) for patients with lacrimal gland carcinomas (LGCs) with extraorbital extension. Results: The median follow-up period was 53.7 months. The 5-year local control and overall survival rates were 62% and 65%, respectively. Regarding late toxicities, 12 patients (36.4%) developed Grade 4 optic nerve disorders, including visual loss on the diseased side (N = 8; 66.7%), and 1 patient (3.0%) developed a Grade 3 optic nerve disorder. Three patients (9.0%) developed Grade 3 cataracts, 3 (9.0%) developed glaucoma, and 1 (3.0%) developed retinopathy. Two patients (6.1%) had Grade 4 central nervous system necrosis. No Grade 5 late toxicities were observed. The 5-year preservation rate of the ipsilateral eyeball was 86%. Conclusion: Definitive CIRT is effective for LGCs with extraorbital extension with acceptable toxicity. Methods: Thirty-three patients treated with CIRT at our institution were analyzed. Sixteen patients (48.5%) had adenoid cystic carcinoma, 8 (24.2%) had adenocarcinoma not otherwise specified, and 9 (27.3%) had other types of the disease. Thirty patients (90.9%) had T4c tumors. The prescribed doses were 57.6 Gy (relative biological effectiveness [RBE]) (N = 18; 54.5%) and 64.0 Gy (RBE) (N = 15; 45.5%) in 16 fractions. abstract_id: PUBMED:23120045 Interesting case of intraorbital foreign body. Penetration of intraorbital foreign bodies into the nose and paranasal sinuses, although reported in the past, is uncommon. Here we present the case of a 26-year-old male who came to us with a seemingly trivial injury to his right eye due to a foreign body, which on further investigation revealed the major part of a large foreign body impacted in the nose and bilateral sinuses; it was removed successfully.
abstract_id: PUBMED:29414502 Intraorbital hemorrhage following a secondary intervention at integrated zygomatic implants: A case report. Introduction: Zygomatic implant placement can be the best option for restoring masticatory function of an extremely atrophic upper jaw, but the procedure is more invasive than conventional implant placement and can be associated with complications. Presentation Of Case: We report a complication that occurred during a secondary corrective surgical procedure four years after zygomatic implant placement. The patient was a 54-year-old female who had been edentulous for 25 years. Four zygomatic implants were placed. Subsequent prosthetic rehabilitation was successful. Four years later, the patient complained of discomfort. It was found that the tips of the implants on the right side were subcutaneously palpable and surrounded by granulomatous tissue. Intraoral surgery was performed to remove the protruding tips of the two implants. Post-operatively, the patient developed severe orbital pain on the right side with proptosis and diffuse swelling of the eyelids. Emergency surgery was performed to drain the intraorbital hemorrhage. The patient healed uneventfully without loss of visual acuity. Discussion: Scarce prior reports describe trauma to the orbit during zygomatic implant surgery, mostly involving orbital penetration during zygoma implant placement. To our knowledge, the present case report is the first to describe an intraorbital hemorrhage that led to an orbital compression syndrome necessitating emergency surgery. Conclusion: In our case, corrective surgery in a patient with zygomatic implants resulted in an intraorbital hemorrhage, followed by an orbital compression syndrome. Emergency surgery was immediately performed, allowing hematoma drainage and eliminating compression of the intraorbital content. Symptoms quickly resolved and eyesight was not compromised. abstract_id: PUBMED:26447725 Different Intraorbital Implant Situations and Ocular Prosthetic Rehabilitation. Enucleation is the removal of the entire globe of the eye and a portion of the optic nerve, while evisceration involves the removal of the contents of the globe leaving the sclera, extraocular muscles, and optic nerve. Following enucleation or evisceration, intraorbital implants are routinely placed to enhance the prosthetic outcome in addition to restoring the lost orbital volume. Current practice employs intraorbital implants made of nonporous silicone, hydroxyapatite, or porous polyethylene. Intraorbital implant selection and placement, being a highly demanding procedure in terms of knowledge, skill, and expertise, may be associated with a multiplicity of technical and surgical errors. Complications are usually minimal with these implants, but they do occur. The literature reveals many articles related to intraorbital implants, their benefits, and complications; however, the literature regarding the effect of various intraorbital implant situations on the subsequent prosthetic rehabilitation is markedly scarce. Moreover, the need for interdisciplinary surgical and prosthetic interventions required for successful rehabilitation in cases of compromised implant situations has been underemphasized. 
Hence, this review aimed to evaluate the effect of different intraorbital implant situations on ocular rehabilitation and the required interdisciplinary surgical and prosthetic treatment approach for rehabilitation of enucleated/eviscerated sockets with compromised implant situations, to provide a critical appraisal, and to present a simplified management strategy. abstract_id: PUBMED:26018971 Intraorbital meningioma: resection through modified orbitozygomatic craniotomy. Intraorbital meningiomas are challenging lesions to excise because of their location and the restricted surgical corridor available due to the presence of important neighboring structures. Lesions located in the posterior one-third of the orbit require skull base approaches for their exposure and safe resection. Frontoorbital and modified orbitozygomatic (OZ) craniotomies may facilitate the exposure and resection of masses in the posterior intraorbital space. Specifically, the one-piece modified OZ craniotomy provides many advantages of the "full" OZ craniotomy (which includes a more extensive zygomatic osteotomy). The modified OZ approach minimizes the extent of frontal lobe retraction and provides ample amount of space for the surgeon to exploit all the working angles to resect the tumor. The following video presentation discusses the nuances of technique for resection of an intraorbital meningioma through modified OZ approach and optic nerve decompression. The nuances of technique will be discussed. The video can be found here: http://youtu.be/fP5X2QNr5qk . Answer: No, intraorbital fat is not entirely extraorbital. According to a study that evaluated the anatomy of the lower eyelid fat pads, it was found that these fat pads are not in continuity with the posterior intraorbital fat. The study used methylene blue injections to stain the middle and medial fat pads in cadaver dissections and observed that the lower eyelid fat pads stained as discrete fat compartments. Their anterior boundary is the orbital retaining ligament at the anterior aspect of the inferior orbital rim, and their posterior boundary is an insertion point on the floor of the orbit in line with the midpoint of the globe, called the circumferential intraorbital retaining ligament. Therefore, the lower eyelid fat pads can be described as partially intraorbital and partially extraorbital in location (PUBMED:19608067).
Instruction: Depression and all-cause mortality in persons with diabetes mellitus: are older adults at higher risk? Abstracts: abstract_id: PUBMED:24823259 Depression and all-cause mortality in persons with diabetes mellitus: are older adults at higher risk? Results from the Translating Research Into Action for Diabetes Study. Objectives: To compare the strength of the association between depression and mortality between elderly and younger individuals with diabetes mellitus. Design: A survival analysis conducted in a longitudinal cohort study of persons with diabetes mellitus to test the association between depression and mortality in older (≥ 65) and younger (18-65) adults. Setting: Managed care. Participants: Persons aged 18 and older with diabetes mellitus who participated in the Wave 2 survey of the Translating Research Into Action for Diabetes (TRIAD) Study (N = 3,341). Measurements: The primary outcome was mortality risk, which was measured as days until death using linked data from the National Death Index. Depression was measured using the Patient Health Questionnaire. Results: After controlling for age, sex, race and ethnicity, income, and other comorbidities, mortality risk in persons with diabetes mellitus was 49% higher in those with depression than in those without, although results varied according to age. After controlling for the same variables, mortality risk in persons aged 65 and older with depression was 78% greater than in those without. For those younger than 65, the effect of depression on mortality was smaller and not statistically significant. Conclusion: This analysis suggests that the effect of depression on mortality in persons with diabetes mellitus is most significant for older adults. Because there is evidence in the literature that treatment of depression in elderly adults can lead to lower mortality, these results may suggest that older adults with diabetes mellitus should be considered a high-priority population for depression screening and treatment. abstract_id: PUBMED:37435051 High genetic risk for depression as an independent risk factor for mortality in patients referred for coronary angiography. Background: Different observations have suggested that patients with depression have a higher risk for a number of comorbidities and mortality. The underlying causes have not been fully understood yet. Aims: The aim of our study was to investigate the association of a genetic depression risk score (GDRS) with mortality [all-cause and cardiovascular (CV)] and markers of depression (including intake of antidepressants and a history of depression) in the Ludwigshafen Risk and Cardiovascular Health (LURIC) study involving 3,316 patients who had been referred for coronary angiography. Methods And Results: The GDRS was calculated in 3,061 LURIC participants according to a previously published method and was found to be associated with all-cause (p = 0.016) and CV mortality (p = 0.0023). In Cox regression models adjusted for age, sex, body mass index, LDL-cholesterol, HDL-cholesterol, triglycerides, hypertension, smoking, and diabetes mellitus, the GDRS remained significantly associated with all-cause [1.18 (1.04-1.34, p = 0.013)] and CV [1.31 (1.11-1.55, p = 0.001)] mortality. The GDRS was not associated with the intake of antidepressants or a history of depression. However, this cohort of CV patients had not specifically been assessed for depression, leading to marked underreporting. 
We were unable to identify any specific biomarkers correlated with the GDRS in LURIC participants. Conclusion: A genetic predisposition for depression estimated by a GDRS was independently associated with all-cause and CV mortality in our cohort of patients who had been referred for coronary angiography. No biomarker correlating with the GDRS could be identified. abstract_id: PUBMED:35622126 Depression, diabetes, comorbid depression and diabetes and risk of all-cause and cause-specific mortality: a prospective cohort study. Aims/hypothesis: The aim of this study was to investigate the risks of all-cause and cause-specific mortality among participants with neither, one or both of diabetes and depression in a large prospective cohort study in the UK. Methods: Our study population included 499,830 UK Biobank participants without schizophrenia and bipolar disorder at baseline. Type 1 and type 2 diabetes and depression were identified using self-reported diagnoses, prescribed medication and hospital records. Mortality was identified from death records using the primary cause of death to define cause-specific mortality. We performed Cox proportional hazards models to estimate the risk of all-cause mortality and mortality from cancer, circulatory disease and causes of death other than circulatory disease or cancer among participants with either depression (n=41,791) or diabetes (n=22,677) alone and with comorbid diabetes and depression (n=3597) compared with the group with neither condition (n=431,765), adjusting for sociodemographic and lifestyle factors, comorbidities and history of CVD or cancer. We also investigated the interaction between diabetes and depression. Results: During a median of 6.8 (IQR 6.1-7.5) years of follow-up, there were 13,724 deaths (cancer, n=7976; circulatory disease, n=2827; other causes, n=2921). Adjusted HRs of all-cause mortality and mortality from cancer, circulatory disease and other causes were highest among people with comorbid depression and diabetes (HRs 2.16 [95% CI 1.94, 2.42]; 1.62 [95% CI 1.35, 1.93]; 2.22 [95% CI 1.80, 2.73]; and 3.60 [95% CI 2.93, 4.42], respectively). The risks of all-cause, cancer and other mortality among those with comorbid depression and diabetes exceeded the sum of the risks due to diabetes and depression alone. Conclusions/interpretation: We confirmed that depression and diabetes individually are associated with an increased mortality risk and also identified that comorbid depression and diabetes have synergistic effects on the risk of all-cause mortality that are largely driven by deaths from cancer and causes other than circulatory disease and cancer. abstract_id: PUBMED:37593998 Mediating effect of depression on the association between cardiovascular disease and the risk of all-cause mortality: NHANES in 2005-2018. Background: Cardiovascular disease (CVD) patients are more likely to have depression than general populations, and meanwhile, depression increased all-cause mortality. However, the interaction effect of depression on CVD and all-cause mortality has not been reported yet. Hypothesis: Herein, we speculate that depression may play an intermediate role in the association of CVD and all-cause mortality. Methods: Demographic and clinical data of 33,156 adults (≥20 years old) were extracted from the National Health and Nutrition Examination Survey (NHANES) database in 2005-2018 in this retrospective cohort study. 
Weighted univariate and multivariate Cox regression analyses were used to screen the covariates and to explore the relationship of CVD and depression. The distribution-of-product method was used to assess the mediating effect of depression on the association between CVD and all-cause mortality. The mediating effect of depression was also explored in age, gender, diabetes mellitus (DM), and dyslipidemia subgroups. The evaluation indexes were odds ratios (ORs), hazard ratios (HRs), and 95% confidence intervals (CIs). Results: Among the participants, 11,514 had CVD, 5844 had depression, and 4759 had died. After adjusting for covariates, CVD was related to high odds of depression (OR = 1.94). Depression played an intermediate role in CVD and all-cause mortality (HR = 1.23) with a mediational percentage of 9.13%. Subgroup analyses also showed this mediating effect existed in adults of different age, gender, DM and dyslipidemia status (all p < .05). Conclusion: The intermediate effect of depression may help clinicians identify high-risk populations early and provide some reference for disease management and mortality reduction. abstract_id: PUBMED:38341158 Association of serum 25-hydroxyvitamin D concentrations with all-cause and cause-specific mortality among individuals with depression: A cohort study. Background: Limited evidence exists on the relationship between vitamin D status and mortality in depressed patients. Methods: This study investigates serum 25-hydroxyvitamin D [25(OH)D] concentrations in 8417 adults with depression among the National Health and Nutrition Examination Survey (NHANES, 2005-2018). Mortality outcomes were assessed through National Death Index records up to December 31, 2019. Cox proportional hazards models estimated hazard ratios (HR) and 95 % confidence intervals (CI) for all-cause, cardiovascular disease (CVD), and cancer mortality. Restricted cubic spline analyses explored the nonlinear association of serum 25(OH)D levels with mortality, using the likelihood ratio test for nonlinearity. Results: The weighted mean serum 25(OH)D level was 66.40 nmol/L (95 % CI: 65.8, 67.0), with 36.3 % having deficient vitamin D (<50 nmol/L [20 ng/mL]). Over an average 7.16-year follow-up, 935 deaths were documented, including 296 CVD deaths and 191 cancer deaths. Higher serum 25(OH)D levels were associated with reduced all-cause mortality (HRs 0.55-1.00, p trend = 0.006) and cancer-specific mortality (HRs 0.36-1.00, p trend = 0.015) after multivariate adjustment. The relationship between serum 25(OH)D and all-cause mortality exhibited a nonlinear pattern (P for nonlinearity <0.001), with a 34 % lower risk for each unit increase in natural log-transformed 25(OH)D levels. Significant interactions were observed with age, antidepressant use, and diabetes status. Conclusions: Higher serum 25(OH)D levels were associated with decreased all-cause and cancer-specific mortality in depressed adults, particularly among younger individuals and those using antidepressants or without diabetes. Further research is essential to understand mechanisms and interventions related to vitamin D in depression. abstract_id: PUBMED:25697585 Associations between depression and all-cause and cause-specific risk of death: a retrospective cohort study in the Veterans Health Administration. Objective: Depression may be associated with increased mortality risk, but there are substantial limitations to existing studies assessing this relationship.
We sought to overcome limitations of existing studies by conducting a large, national, longitudinal study to assess the impact of depression on all-cause and cause-specific risk of death. Methods: We used Cox regression models to estimate hazard ratios associated with baseline depression diagnosis (N=849,474) and three-year mortality among 5,078,082 patients treated in Veterans Health Administration (VHA) settings in fiscal year (FY) 2006. Cause of death was obtained from the National Death Index (NDI). Results: Baseline depression was associated with 17% greater hazard of all-cause three-year mortality (95% CI hazard ratio [HR]: 1.15, 1.18) after adjusting for baseline patient demographic and clinical characteristics and VHA facility characteristics. Depression was associated with a higher hazard of three-year mortality from heart disease, respiratory illness, cerebrovascular disease, accidents, diabetes, nephritis, influenza, Alzheimer's disease, septicemia, suicide, Parkinson's disease, and hypertension. Depression was associated with a lower hazard of death from malignant neoplasm and liver disease. Depression was not associated with mortality due to assault. Conclusions: In addition to being associated with suicide and injury-related causes of death, depression is associated with increased risk of death from nearly all major medical causes, independent of multiple major risk factors. Findings highlight the need to better understand and prevent mortality seen with multiple medical disorders associated with depression. abstract_id: PUBMED:34090390 Relationship between multimorbidity, disease cluster and all-cause mortality among older adults: a retrospective cohort analysis. Background: Previous studies have evaluated the association of multimorbidity with higher mortality, but epidemiologic data on the association between the disease clusters and all-cause mortality risk are rare. We aimed to examine the relationship between multimorbidity (number/ cluster) and all-cause mortality in Chinese older adults. Methods: We conducted a population-based study of 50,100 Chinese participants. Multiple logistic regression analysis was used to estimate the impact of long-term conditions (LTCs) on all-cause mortality. Results: The prevalence of multimorbidity was 31.35% and all-cause mortality was 8.01% (50,100 participants). In adjusted models, the odds ratios (ORs) and 95% confidence intervals (CIs) of all-cause mortality risk for those with 1, 2, and ≥ 3 LTCs compared with those with no LTCs was 1.45 (1.32-1.59), 1.72 (1.55-1.90), and 2.15 (1.85-2.50), respectively (Ptrend < 0.001). In the LTCs ≥2 category, the cluster of chronic diseases that included hypertension, diabetes, CHD, COPD, and stroke had the greatest impact on mortality. In the stratified model by age and sex, absolute all-cause mortality was higher among the ≥75 age group with an increasing number of LTCs. However, the relative effect size of the increasing number of LTCs on higher mortality risk was larger among those < 75 years. Conclusions: The risk of all-cause mortality is increased with the number of multimorbidity among Chinese older adults, particularly disease clusters. abstract_id: PUBMED:26277576 High Serum Uric Acid Levels Are Associated with All-Cause and Cardiovascular, but Not Cancer, Mortality in Elderly Adults. Objectives: To estimate the all-cause, cardiovascular, and cancer mortality risks associated with serum uric acid levels in elderly adults. Design: Cohort study. 
Setting: The Annual Geriatric Health Examination Program database from 2006 to 2010. Participants: Community-dwelling Taipei citizens aged 65 and older (N = 77,541). Measurements: Stratified according to sex, serum uric acid levels were analyzed in quartiles and as normal versus high categories. Mortality was determined by matching cohort identifications with national death files. Results: Men had significantly higher uric acid levels than women (P < .001), and mean levels increased with age (P < .001). Serum uric acid levels (normal vs high) were an independent risk factor for all-cause and cardiovascular mortality in men and women, with the strongest association found for cardiovascular mortality. The association between serum uric acid levels and cardiovascular mortality was independent of other cardiovascular risk factors such as hypertension, diabetes mellitus, hyperlipidemia, and glomerular filtration rate levels. When levels were analyzed as quartiles, men with levels in the second quartile had the lowest hazard ratios for all-cause and cardiovascular mortality. Conclusion: High serum uric acid levels are associated with greater risk of all-cause and cardiovascular mortality but not cancer mortality in elderly adults. Further studies are warranted to investigate the prognostic implications and potential utility in the monitoring of therapy. abstract_id: PUBMED:37967814 Periodontitis and the risk of all-cause and cause-specific mortality among US adults with diabetes: A population-based cohort study. Aim: To evaluate the association between periodontitis, all-cause and cause-specific mortality, and its prognostic utility among adults with diabetes. Materials And Methods: Periodontal health records were retrieved from the NHANES database for 4297 participants with diabetes aged >30 years at baseline during 1988-1994, 1999-2004 and 2009-2014. A multivariable Cox proportional hazards regression model was applied to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) for moderate/severe periodontitis with all-cause and cause-specific mortality in participants with diabetes. Area under the curve (AUC) was used to assess predictive value. Results: During a median follow-up of 15.41 years, 1701 deaths occurred. After multivariate adjustments, moderate/severe periodontitis was significantly associated with increased risk of all-cause (HR: 1.27; 95% CI: 1.07-1.50; p = .005) and cardiovascular disease (CVD)-related (HR: 1.35, 95% CI: 1.03-1.76, p = .031) mortality in participants with diabetes. The absolute risk difference based on the cumulative incidence information was 0.022 (5-year, 95% CI: 0.021-0.023) and 0.044 (10-year, 95% CI: 0.041-0.048). Periodontitis improved the prediction of all-cause (AUC: 0.652; 95% CI: 0.627-0.676) and CVD-related (AUC: 0.649; 95% CI: 0.624-0.676) mortality over standard risk factors (all-cause: AUC: 0.631; 95% CI: 0.606-0.656; CVD-related: AUC: 0.629; 95% CI: 0.604-0.655). Conclusions: Moderate/severe periodontitis is associated with an increased risk of all-cause and CVD-related mortality in adults with diabetes. Periodontitis might represent a marker for residual risk. abstract_id: PUBMED:36452317 Synergistic impact of diabetes and cognitive impairment on all-cause and cause-specific mortality in Chinese older adults: A prospective population-based cohort study. Background: Diabetes mellitus (DM) or cognitive impairment (CI) is known to be strongly associated with mortality. DM commonly coexists with CI and proportionally increases with age.
However, little is known about the combined effect of cognitive function and diabetes on mortality. This study aimed to evaluate the combined effects of DM and CI on all-cause and cause-specific mortality in Chinese older adults. Methods: This prospective population-based cohort study was based on the Beijing Elderly Comprehensive Health Cohort Study. A total of 4,499 older adults were included. Cox's proportional hazard models were utilized to calculate the effect of DM and CI on all-cause, cardiovascular disease (CVD) mortality and cancer mortality, and a multiplicative term was introduced to study a potential interaction between DM and CI on outcomes. Results: During a median follow-up of 6.8 years (ranging from 6.6 to 11.7 years), 667 (14.8%) participants died from all causes, 292 from CVD, and 215 from cancer. In the fully adjusted model, participants with coexisting DM and CI had the highest risk of all-cause mortality [hazard ratios (HRs), 3.08; 95% confidence intervals (CIs), 2.30,4.11] and CVD mortality (HRs, 3.85; 95% CIs, 2.60,5.71) compared with individuals with normal cognition and non-DM. We also found a multiplicative interaction between DM and CI in respect to all-cause (HRs, 2.46; 95% CI, 1.87,3.22) and CVD mortality (HRs, 3.15 95% CI, 2.19,4.55). In the diabetic population, CI was associated with an increased risk of all-cause mortality (HRs, 2.09; 95% CIs, 1.51,2.89) and CVD mortality (HRs, 3.16; 95% CIs, 2.02,5.05) compared with the normal cognition group. Restricted cubic spline revealed a linear inverse association between Mini-Mental State Examination (MMSE) score and all-cause, CVD mortality in the total sample and participants without diabetes. However, a nearly reverse J association was observed between MMSE and mortality from all causes and CVD in the diabetes group. Conclusion: The findings highlighted that cognitive impairment concomitant with diabetes further increases the risk of mortality. In addition to strengthening routine screening of cognitive functioning in older adults with early-stage diabetes, more extensive assessment of prognostic risks has high clinical value for developing comprehensive treatment plans. Answer: Yes, older adults with diabetes mellitus are at higher risk of all-cause mortality when they also have depression. A study conducted within the Translating Research Into Action for Diabetes (TRIAD) Study found that after controlling for various factors, mortality risk in persons with diabetes mellitus was 49% higher in those with depression than in those without. Specifically, for individuals aged 65 and older with depression, the mortality risk was 78% greater than in those without depression. In contrast, for those younger than 65, the effect of depression on mortality was smaller and not statistically significant (PUBMED:24823259). This suggests that depression significantly impacts mortality in older adults with diabetes mellitus, and they should be considered a high-priority population for depression screening and treatment.
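Editor's note: several of the studies summarized in this record (e.g., PUBMED:24823259, PUBMED:35622126, PUBMED:36452317) share one analytic pattern — a Cox proportional hazards model of time to death with an exposure (depression or diabetes), an age-group term, and their interaction. The sketch below is a minimal, hypothetical illustration of that pattern in Python with the lifelines library. Every column name and every simulated number is an assumption made for illustration; this is not the cited authors' code or data.

```python
# Minimal sketch (hypothetical data): Cox model with a depression x age-group interaction.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000

# Hypothetical cohort: exposure, age group, and a couple of confounders.
df = pd.DataFrame({
    "depression": rng.integers(0, 2, n),
    "age_65_plus": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "comorbidity_count": rng.poisson(1.5, n),
})
# Interaction term: does the depression hazard ratio differ for older adults?
df["depression_x_older"] = df["depression"] * df["age_65_plus"]

# Simulate survival times whose hazard rises with depression, age, and the interaction.
linpred = (0.3 * df["depression"] + 0.8 * df["age_65_plus"]
           + 0.4 * df["depression_x_older"] + 0.1 * df["comorbidity_count"])
time_to_death = rng.exponential(scale=3650 * np.exp(-linpred))
censor_time = rng.uniform(365, 3650, n)            # administrative censoring
df["days_to_death"] = np.minimum(time_to_death, censor_time)
df["died"] = (time_to_death <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_death", event_col="died")
cph.print_summary()  # exp(coef) column gives adjusted hazard ratios; the interaction
                     # row indicates whether depression's effect is larger in the 65+ group
```

In this layout, a hazard ratio for `depression_x_older` above 1 plays the same role as the larger mortality effect reported for depressed adults aged 65 and older in the Answer above.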
Instruction: Team members' interaction anxiety and team-training effectiveness: a catastrophic relationship? Abstracts: abstract_id: PUBMED:25790576 Team members' interaction anxiety and team-training effectiveness: a catastrophic relationship? Objective: In the present study, we examined the relationship between team members' interaction anxiety and team-training effectiveness. Background: Training in the context of teams can be focused on either individual or team outcomes. Relatedly, the individual differences of team members can hinder or facilitate the effectiveness of training for both individuals and the team as a collective. Interaction anxiety is an understudied individual difference that has been shown to be negatively related to training effectiveness in individual contexts. However, its effects in team-training contexts are yet to be investigated. Method: A sample of 492 students in 123 four-person teams was trained to operate a complex, computer-based team task. Results: Interaction anxiety was negatively related to team cohesion and team-training effectiveness. A cusp catastrophe model fit the data and showed that teams with more than one high-interaction-anxiety member were negatively affected. Conclusion: Interaction anxiety inhibits the social activities of team training, in turn reducing team-training effectiveness when there is more than one high-interaction-anxiety individual on the training team. Application: These results highlight the importance of examining interaction anxiety as a training team compositional variable that may inhibit behaviors and team processes necessary to capitalize on the positive social activities on which team training depends, since the interaction anxiety composition of training teams may serve as an important boundary condition on the effectiveness of team-training interventions. abstract_id: PUBMED:27126735 Effect of team training on improving MRI study completion rates and no-show rates. Purpose: Magnetic resonance imaging (MRI) is a high-cost imaging modality, and an optimized encounter ideally provides high-quality care, patient satisfaction, and capacity utilization. Our purpose was to assess the effectiveness of team training and its impact on patient show-up and completion rates for their MRI examinations. Materials And Methods: A total of 97,712 patient visits from three tertiary academic medical centers over 1-year intervals were evaluated, totaling 49,733 visits at baseline and 47,979 after training. Each center's MRI team received team skills training, including advanced communication and team training techniques. This training included onsite instruction including case simulation with scenarios requiring appropriate behavioral and communicative interventions. Orientation and training also utilized customized online tools and proctoring. The study completion rate and patient show-up rate during consecutive year-long intervals before and after team training were compared to assess its effectiveness. Two-sided chi-square tests for proportions were applied at a 0.05 significance level. Results: Despite differing no-show rates (5-22.2%) and study incompletion rates (0.7-3.7%) at the three academic centers, the combined patients' data showed significant (P < 0.0001) improvement in the patients' no-show rates (combined decreases from 11.2% to 8.7%) and incompletion rates (combined decreases from 2.3% to 1.4%).
Conclusion: Our preliminary results suggest training of the imaging team can improve the no-show and incompletion rates of the MRI service, positively affecting throughput and utilization. Team training can be readily implemented and may help address the needs of the current cost-conscious and consumer-sensitive healthcare environment. J. MAGN. RESON. IMAGING 2016;44:1040-1047. abstract_id: PUBMED:26261115 Challenges of interprofessional team training: a qualitative analysis of residents' perceptions. Background: Simulation-based interprofessional team training is thought to improve patient care. Participating teams often consist of both experienced providers and trainees, which likely impacts team dynamics, particularly when a resident leads the team. Although similar team composition is found in real-life, debriefing after simulations puts a spotlight on team interactions and in particular on residents in the role of team leader. The goal of the current study was to explore residents' perceptions of simulation-based interprofessional team training. Methods: This was a secondary analysis of a study of residents in the pediatric residency training program at the University of California, San Francisco (United States) leading interprofessional teams in simulated resuscitations, followed by facilitated debriefing. Residents participated in individual, semi-structured, audio-recorded interviews within one month of the simulation. The original study aimed to examine residents' self-assessment of leadership skills, and during analysis we encountered numerous comments regarding the interprofessional nature of the simulation training. We therefore performed a secondary analysis of the interview transcripts. We followed an iterative process to create a coding scheme, and used interprofessional learning and practice as sensitizing concepts to extract relevant themes. Results: 16 residents participated in the study. Residents felt that simulated resuscitations were helpful but anxiety provoking, largely due to interprofessional dynamics. They embraced the interprofessional training opportunity and appreciated hearing other healthcare providers' perspectives, but questioned the value of interprofessional debriefing. They identified the need to maintain positive relationships with colleagues in light of the teams' complex hierarchy as a barrier to candid feedback. Discussion: Pediatric residents in our study appreciated the opportunity to participate in interprofessional team training but were conflicted about the value of feedback and debriefing in this setting. These data indicate that the optimal approach to such interprofessional education activities deserves further study. abstract_id: PUBMED:34506980 Team Sport Participation Protects Against Burnout During Neurosurgery Training: Cross-Sectional Survey Study. Objective: Burnout is experienced by up to two thirds of neurosurgery residents. Team sport participation as an adolescent protects against adverse mental health outcomes in adulthood. The objective of this study was to determine whether high school or collegiate team sport participation is associated with improved psychological well-being during neurosurgery residency. Methods: A cross-sectional survey study of U.S. neurosurgery residents was conducted between June 2020 and February 2021. Outcomes included self-ratings of sadness, anxiety, stress, burnout, optimism, and fulfillment, on 100-point scales, which were averaged into a "Burnout Composite Score" (BCS). 
Respondents were grouped and compared according to their prior self-reported participation in team sports (participants vs. nonparticipants). A 3-way analysis of variance tested the effects of resident level, exercise days, and team sport participation on BCS. Results: Of 229 submitted responses, 228 (99.5%) provided complete data and 185 (81.1%) residents participated in team sports. Days/week of exercise was similar across groups (2.5 ± 1.8 vs. 2.1 ± 1.8, P = 0.20). The team sport group reported lower mean BCS (37.1 vs. 43.6, P = 0.030, Cohen's d = 0.369). There was a significant interaction between prior team sport participation and exercise regimen on BCS (F [3, 211] = 3.39, P = 0.019, η2 = 0.046), such that more exercise days per week were associated with decreased BCS for prior team sport athletes (F [3, 211] = 11.10, P < 0.0005), but not for nonparticipants (F [3, 211] = 0.476, P = 0.699). The positive impact of prior team sport participation was more pronounced for senior residents (-11.5 points, P = 0.016) than junior residents (-4.3 points, P = 0.29). Conclusions: Prior team sport participation was associated with lower BCS among neurosurgery residents, an effect more pronounced during senior residency. Lessons imparted during early team sport experience may have profound impacts on reducing burnout throughout a 7-year neurosurgery residency. abstract_id: PUBMED:32034800 Cognitive appraisals and team performance under stress: A simulation study. Objectives: The present study explored how challenge and threat responses to stress relate to performance, anxiety, confidence, team identity and team characteristics (time spent in training and postgraduate experience) in a medical simulation-based team competition. Methods: The study was conducted during a national simulation-based training event for residents, the SIMCUP Italia 2018. The SIMCUP is a simulation competition in which teams of four compete in simulated medical emergency scenarios. Cross-sectional data were collected prior to the 3 days of the competition. Subjects included 95 participants on 24 teams. Before the competition on each day, participants completed brief self-report measures that assessed demands and resources (which underpin challenge and threat responses to stress), cognitive and somatic anxiety, self-confidence and team identification. Participants also reported time (hours) spent practising as a team and years of postgraduate experience. A team of referees judged each scenario for performance and assigned a score. A linear mixed model using demands and resources was built to model performance. Results: The data showed that both demands and resources have positive effects on performance (31 [11-50.3] [P < .01] and 54 [25-83.3] [P < .01] percentage points increase for unitary increases in demands and resources, respectively); however, this is balanced by a negative interaction between the two (demands * resources interaction coefficient = -10 [-16 to -4.2]). A high level of resources is associated with better performance until demands become very high. Cognitive and somatic anxieties were found to be correlated with demands (Pearson's r = .51 [P < .01] and Pearson's r = .48 [P < .01], respectively). Time spent training was associated with greater perceptions of resources (Pearson's r = .36 [P < .01]). Conclusions: We describe a model of challenge and threat that allows for the estimation of performance according to perceived demands and resources, and the interaction between the two.
Higher levels of resources and lower demands were associated with better performance. abstract_id: PUBMED:1886954 Guidelines for interaction by the therapeutic team in inpatient treatment of patients with chronic neurotic disorders In treatment of in-patients with chronic neurotic disorders we tried both to reduce distress in the therapeutic team (doctors, nurses, other therapists) as usually caused by those patients, and to improve outcome of treatment. Therefore, three to five guidelines on how to deal with the patient were given to the therapeutic team, and illustrated by possible literal statements to the patient. The guidelines aimed at a general interactional approach to the patient and did not determine specific therapeutic interventions. They were set up following principles of Brief Therapy as developed at the Mental Research Institute in Palo Alto. Results of our intervention in treatment of ten patients are reported. The therapeutic team rated the guidelines generally as positive. The therapeutic outcome varied greatly. A comparison with a matched pair control group showed a favourable tendency. Initial ratings by the team of the prescriptions predicted eventual improvement of the patients. abstract_id: PUBMED:655837 Stroke rehabilitation: a family-team education program. A stroke causes considerable anxiety and practical difficulties to the family of the patient. Additional confusion results because the difference between the acute care and the prolonged rehabilitation is poorly understood. For these reasons, a family-team conference was established at the Massachusetts Rehabilitation Hospital. Its purpose was to relieve anxiety and explain the scientific and professional aspects of the team approach to rehabilitation. The family-team program consisted of role descriptions presented by the representatives from the various disciplines involved in the rehabilitation process and a discussion of individual family-patient problems. Results of a three-year study were used to evaluate the success of the conference. Records of family attendance were compared with the number of persons contacted. Questionnaires completed by family members at the conference showed that the anxiety level of individual families had decreased. A better understanding of the team approach was indicated in more than 75% of those participating. More than 70% of the families felt more comfortable in visits to their relatives and in approaching team members with future questions. The family-team program is a practical instrument for expanding stroke rehabilitation and for including the needs and participation of the family. abstract_id: PUBMED:2764044 Multidisciplinary team counseling for fetal anomalies. Prenatal management of fetal anomalies requires a cooperative effort of the family, obstetrician, ultrasonographer, neonatologist, pediatric subspecialists, nursing staff, geneticists, and social workers. Our approach to these patients is to convene all team members and the family shortly after an anomaly is identified to review the diagnosis, prognosis, plans, and alternatives. Team counseling offers a clear definition of the problem, avoids unnecessary confusion and anxiety for the family, and provides a supportive environment during this family crisis. Our experience with 28 families has been favorable and is described herein. abstract_id: PUBMED:36476634 Occurrence and Impacts of Gastrointestinal Symptoms in Team-Sport Athletes: A Preliminary Survey.
Objectives: Although gastrointestinal (GI) symptoms are prevalent in endurance athletes, scant research has examined GI symptoms in team-sport athletes, their impacts, and explanatory factors. This study aimed to assess the prevalence and severity of GI symptoms in team-sport athletes and identify potential risk factors. Design: An observational anonymous survey. Setting: Online. Participants: Hundred forty-three athletes (79 men and 64 women) from team-based sports, with soccer, rugby, and American football athletes comprising approximately 75% of the sample. Assessments Of Risk Factors: Age, gender, body mass index, competition experience, trait anxiety, and resting GI symptoms. Main Outcomes: Gastrointestinal symptoms during training and competition. Results: Overall, past-month GI symptoms during training and competition were mild and relatively infrequent. However, 13.9% and 37.5% of men and women, respectively, reported that GI symptoms had ever impacted their performance. In comparison to men, women reported that nausea, bloating, and abdominal cramping were more likely to have affected performance ( P < 0.05). Women also had higher trait anxiety and higher scores for resting GI symptoms, during-training GI symptoms, and during-competition GI symptoms ( P < 0.001). Resting GI symptoms were the strongest predictor of training and competition GI symptoms (ρ = 0.46-0.67), although trait anxiety was also consistently correlated with competition GI symptoms (ρ = 0.29-0.38). Conclusions: This study suggests that female team-sport athletes experience a higher burden of GI symptoms than males, and that resting symptoms and anxiety predict competition symptoms. Interventions targeting anxiety could theoretically reduce GI symptoms in some team-sport athletes, but this should be confirmed through experimental designs. abstract_id: PUBMED:28419847 The Oral and Maxillofacial Surgery Anesthesia Team Model. Oral and maxillofacial surgeons have been providing safe anesthesia to their patients using the anesthesia team model; this has allowed access to care for patients that have significant anxiety. The AAOMS strives to maintain the excellent safety record of the anesthesia team model by creating simulation programs in anesthesia, regularly updating the office anesthesia evaluation program, convening anesthesia safety conferences and strengthening the standards in our training programs. Through these efforts, our delivery of anesthesia to our patients will remain safe and effective. Answer: The relationship between team members' interaction anxiety and team training effectiveness has been explored in various studies, and the findings suggest that interaction anxiety can indeed have a negative impact on the effectiveness of team training. In one study, it was found that interaction anxiety negatively related to team cohesion and team-training effectiveness. Specifically, a cusp catastrophe model indicated that teams with more than one high-interaction-anxiety member were negatively affected. This suggests that interaction anxiety inhibits the social activities of team training, which in turn reduces team-training effectiveness when there is more than one high-interaction-anxiety individual on the training team (PUBMED:25790576). Another study on team training in the context of MRI study completion rates found that team training, which included advanced communication and team training techniques, led to significant improvements in no-show and incompletion rates. 
This suggests that team training can positively affect throughput and utilization, although the study does not directly address interaction anxiety (PUBMED:27126735). A qualitative analysis of residents' perceptions of interprofessional team training revealed that residents found simulated resuscitations helpful but anxiety-provoking, largely due to interprofessional dynamics. They appreciated the interprofessional training opportunity and valued hearing other healthcare providers' perspectives but questioned the value of interprofessional debriefing. This indicates that while interaction anxiety is present, the overall perception of team training's effectiveness is mixed (PUBMED:26261115). In the context of neurosurgery training, a study found that team sport participation was associated with lower burnout composite scores among residents, suggesting that team sport experience may have a protective effect against burnout and potentially improve team dynamics and training effectiveness (PUBMED:34506980). Lastly, a simulation study on cognitive appraisals and team performance under stress found that higher levels of resources and lower demands were associated with better performance. Time spent training was associated with greater perceptions of resources, which could imply that adequate training can mitigate the negative effects of interaction anxiety on team performance (PUBMED:32034800). In conclusion, while interaction anxiety can have a detrimental effect on team training effectiveness, particularly when multiple team members experience high levels of anxiety, team training interventions that focus on communication, resources, and stress management can potentially improve outcomes and mitigate the negative impact of interaction anxiety.
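Editor's note: the MRI team-training study cited in this Answer (PUBMED:27126735) compares no-show proportions before and after training with two-sided chi-square tests. Below is a minimal sketch of that comparison in Python using scipy. The event counts are reconstructed from the reported combined rates (11.2% vs. 8.7% no-shows across roughly 49,733 and 47,979 visits) and rounded, so they are approximations for illustration rather than the study's raw data.

```python
# Sketch of a two-sided chi-square test for proportions, in the spirit of the
# before/after no-show comparison in PUBMED:27126735. Counts are approximations
# reconstructed from the reported percentages, not the original dataset.
from scipy.stats import chi2_contingency

visits_before, visits_after = 49_733, 47_979
no_show_before = round(0.112 * visits_before)   # ~11.2% of baseline visits
no_show_after = round(0.087 * visits_after)     # ~8.7% of post-training visits

table = [
    [no_show_before, visits_before - no_show_before],
    [no_show_after, visits_after - no_show_after],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
# A p-value well below 0.05 is consistent with the reported significant improvement.
```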
Instruction: Is antibiotic prophylaxis for bacterial endocarditis cost-effective? Abstracts: abstract_id: PUBMED:15951458 Is antibiotic prophylaxis for bacterial endocarditis cost-effective? Background: Antibiotic prophylaxis for bacterial endocarditis is recommended by the American Heart Association (AHA) before undergoing certain dental procedures. Whether such antibiotic prophylaxis is cost-effective is not clear. The authors' objective is to estimate the cost-effectiveness of predental antibiotic prophylaxis in patients with underlying heart disease. Methods: The authors conducted a cost-effectiveness analysis using a Markov model to compare the cost-effectiveness of 7 antibiotic regimens per AHA guidelines and a no-prophylaxis strategy. The study population consisted of a hypothetical cohort of 10 million patients with either a high or moderate risk for developing endocarditis. Results: Prophylaxis for patients with moderate or high risk for endocarditis cost $88,007 per quality-adjusted life-year saved if clarithromycin was used. Prophylaxis with amoxicillin and ampicillin resulted in a net loss of lives. All other regimens were less cost-effective than clarithromycin. For 10 million persons, clarithromycin prophylaxis prevented 119 endocarditis cases and saved 19 lives. Conclusion: Predental antibiotic prophylaxis is cost-effective only for persons with moderate or high risk of developing endocarditis. Contrary to current recommendations, our data demonstrate that amoxicillin and ampicillin are not cost-effective and should not be considered the agents of choice. Clarithromycin should be considered the drug of choice and cephalexin as an alternative drug of choice. The current published guidelines and recommendations should be revised. abstract_id: PUBMED:24176409 Antibiotic prophylaxis in urologic surgery Antibiotic prophylaxis is highly effective in the prevention of postoperative infection. The aim is to inhibit bacterial growth during the operative procedure. To avoid side-effects, such as an increase in resistant bacterial strains and modification of the patient's flora, a few rules are essential, the most important being: 1) prophylaxis should never last more than 24 hours; 2) choose a molecule active against the bacteria generally responsible for infections in that type of operation, but not one used therapeutically; 3) the timing of administration is crucial: it must start before the operation, ideally 30 to 60 minutes before incision. The cost of antibiotic prophylaxis must also be considered, as it accounts for more than 30% of total antibiotic use in hospitals. abstract_id: PUBMED:16416720 Antibiotic prophylaxis for endocarditis: time to reconsider. Some cardiac conditions require antibiotic prophylaxis for some types of dental treatment to reduce the risk of infective endocarditis (IE). All medical and dental practitioners are familiar with this practice but tend to use different regimens in apparently similar circumstances. Generally, the trend has been to prescribe antibiotics if in doubt. This review explores the evidence for antibiotic prophylaxis to prevent IE: does it work and is it safe? The changing nature of IE, the role of bacteraemia of oral origin and the safety of antibiotics are also reviewed. Most developed countries have national guidelines and their points of similarity and difference are discussed.
One can only agree with the authority who describes antibiotic guidelines for endocarditis as being 'like the Dead Sea Scrolls, they are fragmentary, imperfect, capable of various interpretations and (mainly) missing!' Clinical case-controlled studies show that the more widely antibiotics are used, the greater the risk of adverse reactions exceeding the risk of IE. However, the consensus is that antibiotic prophylaxis is mandatory for a small number of high-risk cardiac and high-risk dental procedures. There are a large number of low-risk cardiac and dental procedures in which the risk of adverse reactions to the antibiotics exceeds the risk of IE, where prophylaxis should not be provided. There is an intermediate group of cardiac and dental procedures for which careful individual evaluation should be made to determine whether IE or antibiotics pose the greater risk. These categories are presented. All medical and dental practitioners need to reconsider their approach in light of these current findings. abstract_id: PUBMED:22623459 Antibiotics in dentistry: Bacteremia, antibiotic prophylaxis, and antibiotic misuse. What is known regarding bacteremia? How effective is antibiotic prophylaxis for distant-site infections (late prosthetic joint infections and infective endocarditis)? Antibiotic resistance poses a growing danger to mankind. The misuse of antibiotics is a main cause of antibiotic resistance in bacteria. The author undertook a Medline search and a hand search of the literature regarding bacteremia, antibiotic prophylaxis for late prosthetic joint infections and infective endocarditis, antibiotic misuse, and antibiotic resistance. The findings indicate a clearer understanding of bacteremia emerging in the past 30 years, which has led to recent changes in antibiotic prophylaxis regimens. Dentists should understand how bacteremia affects their at-risk patients, the rationale for antibiotic prophylaxis, and how antibiotic misuse poses a threat to all. abstract_id: PUBMED:15121944 A cost-effectiveness analysis of bacterial endocarditis prophylaxis for febrile children who have cardiac lesions and undergo urinary catheterization in the emergency department. Objective: To prevent bacterial endocarditis (BE) in those at risk, the American Heart Association recommends antibiotics for patients who have a known urinary tract infection and are about to undergo urinary catheterization (UC). In young children who have cardiac lesions and undergo UC for fever without a source, the problem with prophylaxis only in the presence of infected urine is that the presence of urinary tract infection is unknown before testing. This study was conducted to determine the cost-effectiveness of BE prophylaxis before UC in febrile children aged 0-24 months with moderate-risk cardiac lesions. Methods: We evaluated the cost-effectiveness of BE prophylaxis compared with no prophylaxis from the societal perspective. Clinical outcomes were based on BE incidence and quality-adjusted life years (QALYs). Probabilities were derived from the medical literature. Costs were derived from national and local sources in US dollars for the reference year 2000, using a discount rate of 3%. Results: On the basis of the analysis, prophylaxis prevents 7 BE cases per 1 million children treated. When antibiotic-associated deaths were included, the no-prophylaxis strategy was more effective and less costly than the prophylaxis strategy. 
When antibiotic-associated deaths were excluded, amoxicillin cost 10 million dollars per QALY gained and 70 million dollars per case prevented. For vancomycin, it was 13 million dollars per QALY gained and 95 million dollars per case prevented. The results were robust to variations in the prophylactic efficacy of antibiotics, incidence of bacteremia after UC, incidence of BE after bacteremia, and costs associated with BE prophylaxis and treatment. Conclusion: In the emergency department, BE prophylaxis before UC in febrile children who are aged 0 to 24 months and have moderate-risk cardiac lesions is not a cost-effective use of health care resources. abstract_id: PUBMED:38117447 Infective Endocarditis Antibiotic Prophylaxis: Review of the Evidence and Guidelines. Purpose Of Review: The question of antibiotic prophylaxis and its role in prevention of infective endocarditis (IE) remains controversial, with differing recommendations from international societies. The aim of this review was to compare and contrast current recommendations on antibiotic prophylaxis for IE by the American Heart Association (AHA), the European Society of Cardiology (ESC), and the National Institute for Health and Care Excellence (NICE) and highlight the evidence supporting these recommendations. Recent Findings: International guidelines for administration of antibiotic prophylaxis for prevention of IE are largely unchanged since 2009. Studies on the impact of the more restrictive antibiotic prophylaxis recommendations are conflicting, with several studies suggesting lack of adherence to current guidance from the ESC (2015), NICE (2016), and AHA (2021). The question of antibiotic prophylaxis in patients with IE remains controversial, with differing recommendations from international societies. Despite the change in guidelines more than 15 years ago, lack of adherence to current guidelines persists. Due to the lack of high-quality evidence and the conflicting results from observational studies along with the lack of randomized clinical trials, the question of whether to recommend antibiotic prophylaxis or not in certain patient populations remains unanswered and remains largely based on expert consensus opinion. abstract_id: PUBMED:33705562 Infective endocarditis: prevention and antibiotic prophylaxis. The Swiss societies of Infectious Diseases, Pediatric Cardiology and Cardiology and the Pediatric Infectious Disease Group of Switzerland present the current update on infective endocarditis prophylaxis in a joint initiative. The major focus of the revised recommendations is a comprehensive prevention campaign for all patients at risk for infective endocarditis. Antibiotic prophylaxis is recommended only for individuals at high risk. Within this high-risk group there is a ranking order, and the conditions are presented accordingly. Antibiotic prophylaxis is no longer recommended for patients with unrepaired ventricular septal defects and patent ductus arteriosus. Recommendations for antibiotic prophylaxis for the prevention of infective endocarditis are categorized in dental and non-dental interventions. abstract_id: PUBMED:11143584 Prophylaxis for infective endocarditis. Who needs it? How effective is it? Objective: To review guidelines for using antibiotic prophylaxis to prevent infective endocarditis, and to present recent changes and controversies regarding these guidelines. 
Quality Of Evidence: Data are from physiologic and in vitro studies, as well as studies of animal models, and from retrospective analyses of human endocarditis cases. Systematic reviews and guidelines are also examined. As no randomized clinical trials have examined prophylaxis for bacterial endocarditis, many recommendations presented are based on consensus guidelines. Main Message: Antibiotic prophylaxis to prevent bacterial endocarditis should be used in high- and moderate-risk patients with cardiac disease. It should be given before procedures in which bacteremias are likely with organisms that cause endocarditis, such as viridans streptococci. For most procedures, a single dose of amoxicillin (2 g by mouth 1 hour before the procedure) is sufficient to ensure adequate serum levels before and after the procedure. Conclusion: Infective endocarditis continues to have high rates of morbidity and mortality. Antibiotic prophylaxis, therefore, is important to combat this preventable disease. For high- and moderate-risk patients with cardiac disease, the cost-benefit ratio favours prophylaxis. abstract_id: PUBMED:22368576 Antibiotic prophylaxis cards for bacterial endocarditis. Infective endocarditis is a largely preventable cause of morbidity and mortality in individuals with congenital heart disease. Patient and physician awareness is crucial, and in this regard, antibiotic prophylaxis cards are very helpful. I present an antibiotic prophylaxis card which is based on recent recommendations by the American Heart Association. This card is currently in use in Malta for both paediatric and adult patients. abstract_id: PUBMED:35536541 Antibiotic prophylaxis for preventing bacterial endocarditis following dental procedures. Background: Infective endocarditis is a severe infection arising in the lining of the chambers of the heart. It can be caused by fungi, but most often is caused by bacteria. Many dental procedures cause bacteraemia, which could lead to bacterial endocarditis in a small proportion of people. The incidence of bacterial endocarditis is low, but it has a high mortality rate. Guidelines in many countries have recommended that antibiotics be administered to people at high risk of endocarditis prior to invasive dental procedures. However, guidance by the National Institute for Health and Care Excellence (NICE) in England and Wales states that antibiotic prophylaxis against infective endocarditis is not recommended routinely for people undergoing dental procedures. This is an update of a review that we first conducted in 2004 and last updated in 2013. Objectives: Primary objective To determine whether prophylactic antibiotic administration, compared to no antibiotic administration or placebo, before invasive dental procedures in people at risk or at high risk of bacterial endocarditis, influences mortality, serious illness or the incidence of endocarditis. Secondary objectives To determine whether the effect of dental antibiotic prophylaxis differs in people with different cardiac conditions predisposing them to increased risk of endocarditis, and in people undergoing different high risk dental procedures. 
Harms: Had we found no evidence from randomised controlled trials or cohort studies on whether prophylactic antibiotics affected mortality or serious illness, and had we found evidence from these or case-control studies suggesting that prophylaxis with antibiotics reduced the incidence of endocarditis, then we would also have assessed whether the harms of prophylaxis with single antibiotic doses, such as with penicillin (amoxicillin 2 g or 3 g) before invasive dental procedures, compared with no antibiotic or placebo, equalled the benefits in prevention of endocarditis in people at high risk of this disease. Search Methods: An information specialist searched four bibliographic databases up to 10 May 2021 and used additional search methods to identify published, unpublished and ongoing studies. Selection Criteria: Due to the low incidence of bacterial endocarditis, we anticipated that few if any trials would be located. For this reason, we included cohort and case-control studies with suitably matched control or comparison groups. The intervention was antibiotic prophylaxis, compared to no antibiotic prophylaxis or placebo, before a dental procedure in people with an increased risk of bacterial endocarditis. Cohort studies would need to follow at-risk individuals and assess outcomes following any invasive dental procedures, grouping participants according to whether or not they had received prophylaxis. Case-control studies would need to match people who had developed endocarditis after undergoing an invasive dental procedure (and who were known to be at increased risk before undergoing the procedure) with those at similar risk who had not developed endocarditis. Our outcomes of interest were mortality or serious adverse events requiring hospital admission; development of endocarditis following any dental procedure in a defined time period; development of endocarditis due to other non-dental causes; any recorded adverse effects of the antibiotics; and the cost of antibiotic provision compared to that of caring for patients who developed endocarditis. Data Collection And Analysis: Two review authors independently screened search records, selected studies for inclusion, assessed the risk of bias in the included study and extracted data from the included study. As an author team, we judged the certainty of the evidence identified for the main comparison and key outcomes using GRADE criteria. We presented the main results in a summary of findings table. Main Results: Our new search did not find any new studies for inclusion since the last version of the review in 2013. No randomised controlled trials (RCTs), controlled clinical trials (CCTs) or cohort studies were included in the previous versions of the review, but one case-control study met the inclusion criteria. The trial authors collected information on 48 people who had contracted bacterial endocarditis over a specific two-year period and had undergone a medical or dental procedure with an indication for prophylaxis within the past 180 days. These people were matched to a similar group of people who had not contracted bacterial endocarditis. All study participants had undergone an invasive medical or dental procedure. The two groups were compared to establish whether those who had received preventive antibiotics (penicillin) were less likely to have developed endocarditis. The authors found no significant effect of penicillin prophylaxis on the incidence of endocarditis. No data on other outcomes were reported.
The level of certainty we have about the evidence is very low. Authors' Conclusions: There remains no clear evidence about whether antibiotic prophylaxis is effective or ineffective against bacterial endocarditis in at-risk people who are about to undergo an invasive dental procedure. We cannot determine whether the potential harms and costs of antibiotic administration outweigh any beneficial effect. Ethically, practitioners should discuss the potential benefits and harms of antibiotic prophylaxis with their patients before a decision is made about administration. Answer: The cost-effectiveness of antibiotic prophylaxis for bacterial endocarditis is a subject of debate and research. According to a cost-effectiveness analysis using a Markov model, predental antibiotic prophylaxis is cost-effective only for persons with moderate or high risk of developing endocarditis, with clarithromycin being the most cost-effective regimen compared to other antibiotics like amoxicillin and ampicillin, which resulted in a net loss of lives (PUBMED:15951458). However, another study concluded that in the emergency department, bacterial endocarditis (BE) prophylaxis before urinary catheterization in febrile children with moderate-risk cardiac lesions is not a cost-effective use of health care resources (PUBMED:15121944). The effectiveness of antibiotic prophylaxis in preventing postoperative infection in urologic surgery is acknowledged, but the cost must be considered as it accounts for a significant portion of total antibiotic use in hospitals (PUBMED:24176409). Additionally, there is a consensus that antibiotic prophylaxis is mandatory for a small number of high-risk cardiac and dental procedures, but for many low-risk procedures, the risk of adverse reactions to antibiotics exceeds the risk of infective endocarditis (IE), and prophylaxis should not be provided (PUBMED:16416720). International guidelines for the administration of antibiotic prophylaxis for the prevention of IE are largely unchanged since 2009, and studies on the impact of more restrictive antibiotic prophylaxis recommendations are conflicting, with several studies suggesting a lack of adherence to current guidance (PUBMED:38117447). The Swiss societies recommend antibiotic prophylaxis only for individuals at high risk (PUBMED:33705562), while the American Heart Association suggests that antibiotic prophylaxis to prevent bacterial endocarditis should be used in high- and moderate-risk patients with cardiac disease (PUBMED:11143584). Overall, the evidence suggests that while antibiotic prophylaxis can be cost-effective for certain high-risk individuals, its use should be carefully considered, particularly in light of the potential for antibiotic resistance and adverse reactions. The decision to use prophylaxis should be based on individual risk assessment and adherence to current guidelines (PUBMED:35536541).
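Editor's note: the cost-effectiveness abstracts in this record (PUBMED:15951458, PUBMED:15121944) hinge on one calculation — an incremental cost-effectiveness ratio (ICER) obtained from a Markov cohort model of expected discounted costs and quality-adjusted life years (QALYs) under prophylaxis versus no prophylaxis. The sketch below is a deliberately simplified, hypothetical three-state cohort model in Python; every transition probability, cost, utility weight, and the 40-year horizon are invented placeholders chosen only to show the mechanics (per-cycle matrix multiplication, 3% discounting, ICER), not parameters from the cited studies.

```python
# Toy Markov cohort model for antibiotic-prophylaxis cost-effectiveness.
# States: 0 = well, 1 = endocarditis, 2 = dead. All numbers are hypothetical.
import numpy as np

def run_cohort(p_ie: float, extra_cost_per_cycle: float,
               cycles: int = 40, discount: float = 0.03):
    """Return (discounted cost, discounted QALYs) per person over `cycles` years."""
    transition = np.array([
        [1 - p_ie - 0.01, p_ie, 0.01],   # well -> well / endocarditis / other-cause death
        [0.0,             0.70, 0.30],   # endocarditis: 30% annual case fatality (placeholder)
        [0.0,             0.00, 1.00],   # dead is absorbing
    ])
    state = np.array([1.0, 0.0, 0.0])                                # cohort starts well
    cycle_costs = np.array([extra_cost_per_cycle, 50_000.0, 0.0])    # cost per state-year
    utilities = np.array([1.0, 0.6, 0.0])                            # QALY weights

    total_cost = total_qaly = 0.0
    for t in range(cycles):
        weight = 1.0 / (1.0 + discount) ** t
        total_cost += weight * float(state @ cycle_costs)
        total_qaly += weight * float(state @ utilities)
        state = state @ transition
    return total_cost, total_qaly

# Prophylaxis (hypothetically) lowers the annual endocarditis probability but adds drug cost.
cost_no_ppx, qaly_no_ppx = run_cohort(p_ie=0.0004, extra_cost_per_cycle=0.0)
cost_ppx, qaly_ppx = run_cohort(p_ie=0.0003, extra_cost_per_cycle=20.0)

icer = (cost_ppx - cost_no_ppx) / (qaly_ppx - qaly_no_ppx)
print(f"Incremental cost-effectiveness ratio: ${icer:,.0f} per QALY gained")
```

Whether such a ratio looks favorable (as reported for clarithromycin in moderate/high-risk adults) or prohibitive (as reported for prophylaxis before urinary catheterization in low-risk children) depends entirely on the event probabilities, costs, and any antibiotic-associated harms fed into the model.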
Instruction: Are physical education-related state policies and schools' physical education requirement related to children's physical activity and obesity? Abstracts: abstract_id: PUBMED:22568462 Are physical education-related state policies and schools' physical education requirement related to children's physical activity and obesity? Background: This study examines whether the extent of states' physical education (PE) requirement policies and the prevalence of schools requiring PE are associated with children's physical activity (PA) and obesity. Methods: Two cycles (2002-2003 and 2006-2007) of cross-sectional data with individual- and state-level information were analyzed. Information on states' PE requirement was from the 2003 and 2006 Physical Education-Related State Policy Classification System, and the prevalence of schools requiring PE was from the 2002 and 2006 School Health Profiles. The individual-level sample includes 11- to 17-year-old children from the 2003 (N = 25,251) and 2007 (N = 23,728) National Survey of Children's Health. The associations of states' PE requirement and schools' PE requirement with children's PA and obesity were examined, controlling for individual, family, and state covariates. Variation in the associations by individual and family characteristics was also tested. Results: States' PE requirement was not associated with any outcome. In cycle 1, overall, a 10% increase in the percentage of schools requiring PE was associated with a 28% increase in the number of days having vigorous PA per week. In cycle 2, the association was not significant. However, significant variation in the association by gender in cycle 2 suggests an influence of schools' PE requirement on girls' PA only. No association was found between schools' PE requirement and obesity. Conclusions: Gaps exist between state PE-related policies and implementation in schools. However, schools' PE requirement seems to improve children's PA with some gender variation. The association between schools' PE requirement and children's weight is less clear. abstract_id: PUBMED:22516484 Physical education policy compliance and children's physical fitness. Background: Physical education policies have received increased attention as a means for improving physical activity levels, enhancing physical fitness, and contributing to childhood obesity prevention. Although compliance at the school and district levels is likely to be critical for the success of physical education policies, few published studies have focused on this issue. Purpose: This study investigated whether school district-level compliance with California physical education policies was associated with physical fitness among 5th-grade public-school students in California. Methods: Cross-sectional data from FITNESSGRAM(®) 2004-2006, district-level compliance with state physical education requirements for 2004-2006, school- and district-level information, and 2000 U.S. Census data were combined to examine the association between district-level compliance with physical education policies and children's fitness levels. The analysis was completed in 2010. Results: Of the 55 districts with compliance data, 28 (50%) were in compliance with state physical education mandates; these districts represented 21% (216) of schools and 18% (n=16,571) of students in the overall study sample. 
Controlling for other student-, school-, and district-level characteristics, students in policy-compliant districts were more likely than students in noncompliant districts to meet or exceed physical fitness standards (AOR=1.29, 95% CI=1.03, 1.61). Conclusions: Policy mandates for physical education in schools may contribute to improvements in children's fitness levels, but their success is likely to depend on mechanisms to ensure compliance. abstract_id: PUBMED:25269062 Supporting public health priorities: recommendations for physical education and physical activity promotion in schools. Physical activity (PA) provides numerous physiological and psychosocial benefits. However, lifestyle changes, including reduced PA opportunities in multiple settings, have resulted in an escalation of overweight and obesity and related health problems. Poor physical and mental health, including metabolic and cardiovascular problems is seen in progressively younger ages, and the systematic decline in school PA has contributed to this trend. Of note, the crowded school curriculum with an intense focus on academic achievement, lack of school leadership support, funding and resources, plus poor quality teaching are barriers to PA promotion in schools. The school setting and physical educators in particular, must embrace their role in public health by adopting a comprehensive school PA program. We provide an overview of key issues and challenges in the area plus best bets and recommendations for physical education and PA promotion in the school system moving forward. abstract_id: PUBMED:26104603 State Legislation Related to Increasing Physical Activity: 2006-2012. Background: Strategies to improve physical activity prevalence often include policy and environmental changes. State-level policies can be influential in supporting access and opportunities for physical activity in schools and communities. The purpose of this study was to explore the prevalence of state legislation related to physical activity and identify the correlates of enactment of this legislation. Methods: An online legislative database was used to collect bills from 50 states in the U.S. from 2006 to 2012 for 1010 topics related to physical activity. Bills were coded for content and compiled into a database with state-level variables (eg, obesity prevalence). With enactment status as the outcome, bivariate and multivariate analyses were conducted. Results: Of the 1,542 bills related to physical activity introduced, 30% (N = 460) were enacted. Bills on public transportation and trails were more likely to be enacted than those without these topics. Primary sponsorship by the Republican Party, bipartisan sponsorship, and mention of specific funding amounts were also correlates of enactment. Conclusion: Policy surveillance of bills and correlates of enactment are important for understanding patterns in legislative support for physical activity. This information can be used to prioritize advocacy efforts and identify ways for research to better inform policy. abstract_id: PUBMED:17212759 Policies and opportunities for physical activity in middle school environments. Background: This study examined physical activity opportunities and barriers at 36 geographically diverse middle schools participating in the Trial of Activity for Adolescent Girls. Methods: Principals, physical education and health education department heads, and program leaders were interviewed to assess policies and instructional practices that support physical activity. 
Results: Schools provided approximately 110 hours per year in physical education instruction. Approximately 20% of students walked or bicycled to school. Eighty-three percent of schools offered interscholastic sports and 69% offered intramural sports. Most schools offered programs for girls, but on average, only 24 girls (approximately 5%) in the schools attended any programs. Only 25% of schools allowed after-school free play. An overall score created to assess school environmental support for physical activity indicated that, on average, schools met 6.7 of 10 items. Free/reduced lunch program participation versus not (p = .04), perceived priority of physical education instruction over coaching (p = .02), and safety for walking/bicycling to school (p = .02) predicted the environmental support score. Conclusions: Schools have policies and practices that support physical activity, although unfavorable practices exist. Schools must work with community partners and officials to provide environments that optimally support physical activity, especially schools that serve low-income students. abstract_id: PUBMED:28872431 National Plan for Physical Activity: Education Sector. Background: Schools must play a central role in combating the prevalence of overweight and obesity among children and youths. This cannot be achieved without more robust policy and funding programs at both federal and state levels. Methods: Reviews of meta-analyses were used to assess the efficacy of improving PA/MVPA through interventions in school physical education programs. Individual research studies were reviewed to assess the efficacy of improving PA/MVPA in preschool settings, recess, and classroom activity breaks. Legislation at the federal and state levels was reviewed along with surveillance and accountability mechanisms at the state level. Results: Physical education interventions produce improvements in PA/MVPA if protocols relating to use of time are followed. PA/MVPA in recess can be increased through careful planning for attractive activity opportunities on carefully designed playgrounds. Classroom activity breaks provide important PA/MVPA daily and improve student on-task behavior. Conclusions: Federal legislation is needed to provide guidelines and financial support for states to improve the quantity and quality of PA in school programs. States need to develop clear expectations for the quantity and quality of PA programs in schools, surveillance systems to monitor district compliance in meeting those expectations, and an accountability system aimed at ensuring that state expectations are being met, with assistance for districts that do not meet expectations. abstract_id: PUBMED:32717868 Assessment of Attitudes Toward Physical Education by the Implementation of an Extracurricular Program for Obese Children. The World Health Organization (WHO) identifies the importance of implementing physical activity programs such as physical education (PE) classes in schools. This study identifies the attitudes of obese children toward PE, before and after participation in a vigorous-intensity physical exercise program without the participation of normal-weight peers, using a questionnaire on Attitudes toward Physical Education (CAEF). Ninety-eight children between 8 and 11 years of age were randomized into an Experimental Group (GE) (n = 48) and a Control Group (CG) (n = 47). They were assessed using a questionnaire on Attitudes toward Physical Education (CAEF). All the study participants exhibited a BMI Z-score ≥ 2.
Before the intervention, the only difference between boys and girls was "empathy to teacher and physical education subject" (p = 0.001, Cohen's d = 0.72, r = 0.34). The interaction between gender and training was only present in empathy for the teacher, with a medium effect size (η2 = 0.055). The implementation of PE for two hours per week has only limited effects on the attitudes of obese children, although gender did interact with training in the adjustment of empathy for teachers and the PE class. abstract_id: PUBMED:19699440 School physical activity environment related to student obesity and activity: a national study of schools and students. Purpose: To explore whether characteristics of the U.S. secondary school physical activity environment are associated with student body mass index (BMI) and physical activity. Methods: This report uses data from two studies: Monitoring the Future (MTF; an annual nationally representative survey of 8th-, 10th-, and 12th-grade public and private school students) and Youth, Education, and Society (a survey of administrators in schools that have completed their 2-year participation in the MTF study). School policies and programs related to various health issues, including physical education (PE) and sports activity, were examined for relationships with student self-reported height, weight, being active in sports, exercising vigorously, and participating in school athletics. Results: The results show that in 2004-2007, the percentage of students who attended schools that required PE in their grade differed sharply by grade level: 88% of 8th graders, 48% of 10th graders, and 20% of 12th graders. There were few statistically significant associations between school PE requirements and student BMI. The average percentage of students who participated in interscholastic or varsity sports was associated at the bivariate level with a lower percentage of students being overweight in all three grades. Other measures of PE and sports activity showed varying associations with BMI and physical activity measures. Conclusions: Relationships between the school physical activity environment and student BMI and physical activity were not uniformly strong. We conclude that, as currently practiced in schools, existing variations in physical activity policies may not be sufficient to produce discernible school-wide differences; thus, there is a need for more vigorous PE programming than is typically provided. abstract_id: PUBMED:20120126 National plan for physical activity: education sector. Background: Schools must play a central role in combating the prevalence of overweight and obesity among children and youths. This cannot be achieved without more robust policy and funding programs at both federal and state levels. Methods: Reviews of meta-analyses were used to assess the efficacy of improving PA/MVPA through interventions in school physical education programs. Individual research studies were reviewed to assess the efficacy of improving PA/MVPA in preschool settings, recess, and classroom activity breaks. Legislation at the federal and state levels was reviewed along with surveillance and accountability mechanisms at the state level. Results: Physical education interventions produce improvements in PA/MVPA if protocols relating to use of time are followed. PA/MVPA in recess can be increased through careful planning for attractive activity opportunities on carefully designed playgrounds.
Classroom activity breaks provide important PA/MVPA daily and improve student on-task behavior. Conclusions: Federal legislation is needed to provide guidelines and financial support for states to improve the quantity and quality of PA in school programs. States need to develop clear expectations for quantity and quality of PA programs in schools, surveillance systems to monitor district compliance in meeting those expectations, and an accountability system aimed at ensuring that state expectations are being met with assistance for districts that do not meet expectations. abstract_id: PUBMED:38468640 Accelerometry-assessed daily physical activity and compliance with recommendations in Spanish children: importance of physical education classes and vigorous intensity. Background: Physical activity (PA) is associated with numerous health benefits. Vigorous PA (VPA) may have a greater impact on public health than lower-intensity PA. The incorporation of a specific recommendation on VPA could complement and improve existing recommendations for average daily moderate-vigorous PA (MVPA). Physical education classes could have a positive impact on children's adherence to average daily physical activity recommendations. The aim was to investigate the association between MVPA and VPA in children, as well as adherence to recommendations, and obesity and the presence of physical education classes. Methods: A cross-sectional study of physical activity was conducted in a sample of 8 and 9-year-old children in Andalusia (Spain). GENEActiv accelerometers were used, placed on the non-dominant wrist for at least eight consecutive days (24-h protocol). School days with and without physical education class, and weekend days were defined. ROC curves were used to calculate the threshold associated with obesity for average daily MVPA and VPA for recommendations. Results: A total of 360 schoolchildren were included in the analyses (184 girls). An average of 7.7 (SD 1.4) valid days per participant were evaluated, with 19.9 (SD 10.5) and 11.4 (SD 5.1) minutes of VPA performed by boys and girls respectively. 25.8% of the participants were classified with central obesity. The optimal threshold determined with ROC analysis was 12.5 and 9.5 minutes of average daily VPA for boys and girls, respectively (RecVPA), and 75 minutes of average daily MVPA for both sexes (RecMVPA). The RecVPA showed stronger association with obesity. On school days with physical education class, compared to days without this class, children showed increased VPA and MVPA engagement and better compliance with recommendations, with smaller differences in adherence according to sex or obesity. Conclusions: On days with physical education class, more physical activity was accumulated at all intensities and greater adherence to the recommendations than on days without this class. VPA had a stronger correlation with the absence of obesity than lower-intensity activity. It was also observed that boys were physically more active and had higher adherence to the recommendations than girls. Answer: Physical education (PE) related state policies and schools' physical education requirements do appear to have some relationship with children's physical activity levels, but the association with obesity is less clear. 
A study examining the extent of states' PE requirement policies and the prevalence of schools requiring PE found that while state PE requirements were not associated with children's physical activity or obesity, schools' PE requirements seemed to improve children's physical activity, particularly among girls in one of the study cycles. However, no association was found between schools' PE requirement and obesity (PUBMED:22568462). Another study focused on California found that school district-level compliance with state physical education policies was associated with higher physical fitness levels among 5th-grade students. Students in policy-compliant districts were more likely to meet or exceed physical fitness standards compared to students in noncompliant districts (PUBMED:22516484). Research also indicates that robust policy and funding programs at both federal and state levels are necessary to improve the quantity and quality of physical activity in school programs. Physical education interventions can produce improvements in physical activity if protocols relating to the use of time are followed (PUBMED:20120126). Furthermore, a study on the importance of physical education classes and vigorous intensity physical activity in Spanish children found that on school days with physical education class, children showed increased engagement in both vigorous physical activity and moderate-vigorous physical activity, and better compliance with recommendations compared to days without PE class. The study also suggested that vigorous physical activity may have a stronger correlation with the absence of obesity than lower-intensity activity (PUBMED:38468640). In summary, while PE-related state policies and school requirements can positively influence children's physical activity levels, particularly when schools comply with these policies, the direct impact on obesity is not as evident. The quality of PE programs and the inclusion of vigorous physical activities may be important factors in enhancing the effectiveness of these policies in addressing obesity among children.
Instruction: Contact isolation in surgical patients: a barrier to care? Abstracts: abstract_id: PUBMED:12947316 Contact isolation in surgical patients: a barrier to care? Background: Contact isolation is commonly used to prevent transmission of resistant organisms. We hypothesized that contact isolation negatively impacts the amount of direct patient care. Methods: For 2 hours per day over a 5-week period, a single observer recorded provider/patient contact in adjacent isolated and nonisolated patient rooms on both the surgical intensive care unit (ICU) and surgical wards of a university hospital. Number of visits, contact time, and compliance with isolation were recorded, as was illness severity as assessed by APACHE II score. Results: Isolated patients were visited fewer times than nonisolated patients (5.3 vs 10.9 visits/h, P <.0001) and had less contact time overall (29 +/- 5 vs 37 +/- 3 min/h, P =.008), in the ICU (41 +/- 10 vs 47 +/- 5 min/h, P =.03), and on the floor (17 +/- 3 vs 28 +/- 4 min/h, P =.039), in spite of higher mean APACHE II scores in the isolated (10.1 +/- 1.0 vs 7.6 +/- 0.8, P =.05). Among floor patients with APACHE II scores greater than 10, patients in the isolated group had nearly 40% less contact time per hour than patients in the nonisolated group (19 +/- 4 vs 34 +/- 7 min/h, P =.05). Conclusions: Because of the significantly lower contact time observed, particularly among the most severely ill of floor patients, we propose a reexamination of the risk-benefit ratio of this infection control method. abstract_id: PUBMED:21811572 The impact of Contact Isolation on the quality of inpatient hospital care. Background: Contact Isolation is a common hospital infection prevention method that may improve infectious outcomes but may also hinder healthcare delivery. Methods: To evaluate the impact of Contact Isolation on compliance with individual and composite process of care quality measures, we formed four retrospective diagnosis-based cohorts from a 662-bed tertiary-care medical center. Each cohort contained patients evaluated for one of four Centers for Medicare and Medicaid Services (CMS) Hospital Compare process measures including Acute Myocardial Infarction (AMI), Congestive Heart Failure (CHF), Pneumonia (PNA) and Surgical Care Improvement Project (SCIP) from January 1, 2007 through May 30, 2009. Results: The 6716-admission cohort included 1259 with AMI, 834 with CHF, 1377 with PNA and 3246 in SCIP. Contact Isolation was associated with not meeting 4 of 23 individual hospital measures (4 of 10 measures were not met for care provided while patients are typically isolated). Contact Isolation was independently associated with lower compliance with the composite pneumonia process-of-care measure (OR 0.3, 95% CI 0.1-0.7). AMI, CHF and SCIP composite measures were not impacted by Contact Isolation. Conclusions: Contact Isolation was associated with lower adherence to some pneumonia quality of care process measures of care on inpatient wards but did not impact CHF, AMI or SCIP measures. abstract_id: PUBMED:35761758 Comparing the impact of two contact isolation modes for hospitalised patients with Clostridioides difficile infection on the quality of care. Background: Clostridioides difficile causes healthcare-associated infections. Environmental spore acquisition is a major mode of transmission. Patient cohorting to prevent cross-transmission in healthcare-institutions is a reasonable component of an enhanced infection control strategy. 
Objective: The purpose of the study was to assess the effect of two different contact isolation modes on the quality of care of hospitalised patients with Clostridioides difficile infection (CDI). Methods: A retrospective cohort-study of patients with CDI hospitalised under one of two contact isolation modes: contact isolation in a multi-patient room without a dedicated nursing team vs. contact isolation in a permanent cohort isolation unit with a dedicated nursing team. Patients' files were reviewed for demographics, clinical characteristics, risk-assessment scores, clinical quality measures including the number of blood tests collected per day, the number of radiological tests applied per day and the time at which a radiological test was conducted, as process measurements, along with the length of stay and mortality, as outcome measures. The STROBE checklist for reporting observational studies was followed. Results: One hundred and seventy-eight patients with CDI were included; 100 in a permanent cohort isolation unit and 78 under contact isolation in a multi-patient room. No difference was found in all clinical quality process measures and in all outcome measures. Multivariable logistic regression showed that nursing home residence was associated with in-hospital mortality (OR, 2.51; CI, 1.29-4.97; p = .007), whereas the mode of hospitalisation was not. Conclusions: The different contact isolation modes of hospitalisation did not compromise the quality of care of patients with CDI. Relevance To Clinical Practice: Cohorting of patients with CDI is used to prevent cross-transmission, though it raises a major concern regarding quality of care. In this study we show there was no compromise in patient care, therefore it is a reasonable component of an enhanced infection control strategy in a hospital setting. abstract_id: PUBMED:11768817 Nosocomial pneumonia is not prevented by protective contact isolation in the surgical intensive care unit. Nosocomial pneumonia (NP) is the leading cause of death from hospital-acquired infection in intubated surgical intensive care unit (SICU) patients. To determine whether protective contact isolation would lower the incidence of NP in intubated patients we performed a prospective, randomized, and controlled study in two SICUs in a tertiary medical center. Over a period of 15 months two identical ten-bed SICUs alternated for 3-month periods between protective contact isolation (isolation group) and standard "universal precautions" (control group). In the isolation group all personnel and visitors donned disposable gowns and nonsterile gloves before entering an intubated patient's room; handwashing was required before entry and on leaving the room. In the control group caregivers utilized only "standard precautions" including handwashing and nonsterile gloves for intubated patients. Respiratory cultures were obtained 48 hours after SICU admission and every 48 hours thereafter until extubation, transfer to floor care, or death. Airway colonization (AC) occurred in 72.7 per cent of isolated patients and 69.0 per cent of control patients (P = 0.61). The incidence of NP was significantly higher in the isolation group (36.4%) compared with the control group (19.5%) (P = 0.02). There was no statistically significant difference between groups in days from SICU admission to AC, days to NP, and mortality. 
We conclude that protective contact isolation with gowns, gloves, and handwashing is not superior to gloves and handwashing alone in the prevention of AC and NP in SICU patients and may in fact be detrimental. abstract_id: PUBMED:28666781 Evaluation of sedation-related medication errors in patients on contact isolation in the intensive care unit. Background: Patients in intensive care units (ICUs) may be placed on contact isolation for meticillin-resistant Staphylococcus aureus (MRSA) colonization to prevent transmission. Prior studies suggest that isolated patients may receive substandard care compared with non-isolated patients. An optimal level of sedation is required to facilitate mechanical ventilation (MV) and to minimize adverse outcomes. Aim: To determine if patients on MV and isolated for MRSA colonization are at increased risk of oversedation compared with non-isolated patients. Methods: Retrospective chart review of adult patients on MV who received an MRSA nasal polymerase chain reaction assay and sedation within 24 and 48 h of ICU admission, respectively. Endpoints included rate of inappropriate sedation, length of ICU stay, length of time on MV, and incidence of ventilator-associated complications. Findings: In total, 226 patients were included (114 MRSA positive, 112 MRSA negative). Baseline demographics were similar between the groups, with the exception of ICU admission diagnosis. Fifty-six (55%) isolated patients experienced inappropriate sedation compared with 49 (50%) non-isolated patients (P=0.482). Isolated patients spent longer in the ICU (10.4 vs 6.8 days, P=0.0006), longer on MV (8.98 vs 4.81 days, P<0.001), and required tracheostomies more frequently [37 (32%) vs 14 (13%), P=0.0003] than non-isolated patients. Conclusions: Isolated patients were not at increased risk of oversedation compared with non-isolated patients. There was an association between the use of contact isolation for MRSA nares colonization and prolonged ICU stay and prolonged MV. abstract_id: PUBMED:24485377 Knowledge and attitudes of visitors to patients in contact isolation. In this study, conducted at a tertiary care center, we surveyed visitors to patients in contact isolation to assess their knowledge and attitudes about contact isolation. Although response rates were low, we found that visitors had an overall positive perception and understanding of contact isolation. We think this is likely attributable to the communication and education provided by health care providers to the visitors. abstract_id: PUBMED:8610665 Effectiveness of contact isolation during a hospital outbreak of methicillin-resistant Staphylococcus aureus. Contact isolation has been recommended by the Centers for Disease Control and Prevention for the prevention of nosocomial transmission of methicillin-resistant Staphylococcus aureus (MRSA), but there are few data which prospectively quantitate the effectiveness of contact isolation for this purpose. During an outbreak of MRSA in a neonatal intensive care unit between July 18, 1991 and January 30, 1992, weekly surveillance cultures were performed on all patients. Sixteen of 331 admissions became colonized with MRSA, and 3 (19%) developed infections: bacteremia, conjunctivitis, and dialysis catheter site infection. The isolates from all 16 patients were submitted to plasmid profile analysis and restriction enzyme analysis of whole cell DNA. 
All of the patients had identical chromosomal patterns and plasmid profiles, which differed from control isolates from other wards, indicating that the outbreak resulted from spread of a unique strain. None of 144 personnel who were cultured after recent contact with newly colonized patients during the outbreak were found to carry MRSA, which suggests that patients were the reservoir for transmission rather than caregivers. The most probable source for each individual transmission was determined based on proximity in time and space and shared exposure to caregivers. The rate of transmission of MRSA from patients on contact isolation was significantly lower (0.009 transmissions per day on isolation) than the rate for patients not on isolation (0.140 transmissions per day unisolated, relative risk = 15.6, 95% confidence interval 5.3-45.6, p < 0.0001). The authors conclude that the risk of nosocomial transmission of MRSA was reduced 16-fold by contact isolation during the outbreak in this neonatal intensive care unit. These data confirm the results of previous studies which have suggested that contact isolation was effective in controlling the epidemic spread of methicillin-resistant Staphylococcus aureus. abstract_id: PUBMED:26897698 Assessing patient and caregiver understanding of and satisfaction with the use of contact isolation. Background: Contact isolation is a method used for limiting the spread of antimicrobial-resistant organisms when caring for patients. This policy has been linked to several adverse outcomes and less patient satisfaction. We assessed patient and caregiver understanding and satisfaction with the use of contact isolation. Methods: A prospective survey of >500 patients in contact isolation at our institution was performed during 2014. Participants responded to a series of statements relating to contact isolation, using a 5-point Likert scale. Responses were assessed for overall positivity or negativity and further compared according to floor type or designation. Results: Of the patients, 48.7% responded to the survey; 70 caregivers also responded. Patient and caregiver responses were similar and were positive overall. Most respondents felt safer because of the use of contact isolation and because it prevented infections. A smaller majority of respondents also thought the policy was adequately explained to them and adhered to by staff. Conclusions: In the largest collection of respondents surveyed to date about contact isolation and its impact on them, the policy was viewed positively, both by patients and caregivers. There is still room for improvement in the area of patient education regarding the use of contact isolation. abstract_id: PUBMED:11886673 Effectiveness of contact isolation in the control of multiresistant bacteria in an intensive care service Aim: To describe the frequency, characteristics and progression of critically ill patients admitted to the ICU, for whom isolation is indicated due to detection of multiresistant pathogenic bacteria, and to study the effectiveness of precautionary measures to avoid dissemination of these microorganisms. Patients And Methods: Prospective, observational, cohort study performed by a specially created working group of four nurses and an ICU specialist. The study included 55 patients in whom contact isolation was indicated (isolation rate, 15.2 per 100 patients), collected over a 16-month period. 
Results: The multiresistant bacteria responsible for isolation of the patients were: Pseudomonas aeruginosa (17 cases), Staphylococcus aureus (17 cases), Stenotrophomonas maltophilia (15 cases), Acinetobacter baumannii (4 cases) and extended-spectrum beta-lactamase (ESBL)-producing Enterobacteria (2 cases). Vancomycin-resistant Enterococcus spp. was not identified in any case. The mean duration of ICU isolation was 17.6 ± 5.1 days (range 1-75). Multiresistant bacteria were classified as intra-ICU nosocomial in 39 cases (70.9%), extra-ICU nosocomial in 10 cases (18.2%) and community-acquired in 6 (10.9%). During the study period, no epidemic outbreak due to any of the controlled bacteria was detected. The multiresistant bacteria presented in the form of colonization in 41 cases (74.5%). The reasons for discontinuing isolation were death of the patient in 18 cases, transferal to a hospital ward (discharge from the ICU) in 19 cases, and eradication of the bacteria in 18 cases. Of the 55 patients with multiresistant bacteria, 35 (63.6%) died during hospitalization, and 23 of these (41.8%) during their stay in the ICU. Conclusions: The implementation of a working team for early detection of multiresistant pathogenic bacteria resulted in application of contact isolation in 15.2% of patients admitted. Surveillance to fulfill isolation precautions in a medical-surgical ICU achieved an absence of epidemic outbreaks due to these bacteria during the study period. abstract_id: PUBMED:34347907 Geographic contact drives increased reproductive isolation in two cryptic Empidonax flycatchers. Geographic contact between sister lineages often occurs near the final stages of speciation, but its role in speciation's completion remains debated. Reproductive isolation may be essentially complete prior to secondary contact. Alternatively, costly interactions between partially reproductively isolated species - such as maladaptive hybridization or competition for resources - may select for divergence, increasing reproductive isolation and driving speciation toward completion. Here, we use coalescent demographic modelling and whole-genome data sets to show that a period of contact and elevated hybridization between sympatric eastern North American populations of two cryptic bird species preceded a major increase in reproductive isolation between these populations within the last 10,000 years. In contrast, substantial introgression continues to the present in a western contact zone where geographic overlap is much narrower and probably of more recent origin. In the sympatric eastern region where reproductive isolation has increased, it is not accompanied by character displacement in key morphometric traits, plumage coloration, or ecological traits. While the precise trait and underlying mechanism driving increased reproductive isolation remains unknown, we discuss several possibilities and outline avenues for future research. Overall, our results highlight how demographic models can reveal the geographic context in which reproductive isolation was completed, and demonstrate how contact can accelerate the final stages of speciation. Answer: The evidence from the provided abstracts suggests that contact isolation can indeed be a barrier to care for surgical patients. A study recorded that isolated patients were visited fewer times and had less contact time overall compared to nonisolated patients, despite having higher illness severity scores (PUBMED:12947316).
This indicates that contact isolation may negatively impact the amount of direct patient care received by isolated patients. Additionally, contact isolation was associated with lower adherence to some pneumonia quality of care process measures on inpatient wards, although it did not impact measures for Congestive Heart Failure (CHF), Acute Myocardial Infarction (AMI), or Surgical Care Improvement Project (SCIP) (PUBMED:21811572). This suggests that the impact of contact isolation on the quality of care may vary depending on the condition being treated. In the context of Clostridioides difficile infection, a study found no difference in clinical quality process measures and outcome measures between patients isolated in a multi-patient room without a dedicated nursing team and those in a permanent cohort isolation unit with a dedicated nursing team (PUBMED:35761758). This indicates that the mode of contact isolation may not compromise the quality of care for patients with this specific infection. However, another study found that protective contact isolation did not prevent nosocomial pneumonia and may have been detrimental, as the incidence of nosocomial pneumonia was higher in the isolation group compared to the control group (PUBMED:11768817). This raises concerns about the effectiveness of contact isolation in preventing certain infections. Furthermore, patients on contact isolation for MRSA colonization in the ICU did not have an increased risk of oversedation but did have longer ICU stays and time on mechanical ventilation (PUBMED:28666781). This suggests that while sedation management may not be affected, other aspects of care, such as length of stay, could be impacted by contact isolation. Overall, while contact isolation is an important infection control strategy, these studies highlight the need for careful consideration of its potential negative effects on patient care and the importance of ensuring that isolation practices do not inadvertently reduce the quality of care for surgical patients.
Instruction: Optimizing the approach to anemia in the preterm infant: is there a role for erythropoietin therapy? Abstracts: abstract_id: PUBMED:23025780 Erythropoiesis and the approach to anemia in premature infants. Objective: To review causes of anemia in preterm infants and to suggest potential preventive measures. Methods: Data for this review is obtained from review of the literature. Results: An approach to investigating and treating causes of neonatal anemia is outlined. Conclusions: Clinical practices can significantly impact anemia in premature infants. Delayed cord clamping, decreasing phlebotomy loss and optimizing nutritional support are practices that may decrease the severity of anemia, thereby decreasing the need for transfusions or erythropoietin treatment. abstract_id: PUBMED:29366419 Efficacy of Supplementation with Iron Sulfate Compared to Iron Bisglycinate Chelate in Preterm Infants. Background: Strategies to prevent anaemia in preterm infants include drawing fewer blood samples, the use of recombinant human erythropoietin and iron supplementation. Although iron sulfate is the most commonly used pharmaceutical formulation for iron supplementation, there are few studies comparing different iron salts in infants. Objective: This is a study of retrospective data comparison of two groups of preterm infants receiving erythropoietin to evaluate the efficacy of iron bisglycinate chelate to iron sulfate. Subjects And Methods: Three-hundred infants of gestational age ≤32 weeks were enrolled: 225 were supplemented with iron sulfate (3 mg/kg/day) and 75 were supplemented with iron bisglycinate chelate (0.75 mg/kg/day). The effect on erythropoiesis was assessed with a general linear model that estimates the response variables (values for Haemoglobin, Haematocrit, absolute values and percentage Reticulocytes, Reticulocyte Haemoglobin content) based on treatment, time, birth weight, and gestational age. Results: Supplementation with iron bisglycinate chelate at a dose of 0.75 mg/kg/day demonstrated an efficacy comparable to iron sulfate at a dose of 3 mg/kg/day in both populations of preterm infants. The two cohorts had similar erythropoietic response, without significant differences. Conclusions: The higher bioavailability of iron bisglycinate chelate resulted in a lower load of elemental iron, a quarter of the dose, and achieved equivalent efficacy compared to iron sulfate. Iron bisglycinate chelate may appear to be an alternative to iron sulfate in the prevention and treatment of preterm newborn anaemia. abstract_id: PUBMED:33076939 Effect of early prophylactic low-dose recombinant human erythropoietin on retinopathy of prematurity in very preterm infants. Background: Very preterm infants are at risk of developing retinopathy of prematurity (ROP). Recombinant human erythropoietin (rhEPO) is routinely used to prevent anemia in preterm infants; however, the effect of rhEPO on ROP development is still controversial. The purpose of this study was to evaluate the effect of early prophylactic low-dose rhEPO administration on ROP development in very preterm infants. Methods: A total of 1898 preterm infants born before 32 weeks of gestation were included. Preterm infants received rhEPO (n = 950; 500 U/kg, rhEPO group) or saline (n = 948, control group) intravenously within 72 h of birth and then once every other day for 2 weeks. Results: The total incidence of ROP was not significantly different between the two groups (10.2% vs. 13.2%, p = 0.055). 
Further analysis showed that the rhEPO group had lower rates of type 2 ROP than the control group (2.2% vs. 4.1%, RR 0.98; 95% CI 0.96-1.00; p = 0.021). Subgroup analysis found that rhEPO treatment significantly decreased the incidence of type 2 ROP in infant boys (1.8% vs. 4.3%, p = 0.021) and in those with a gestational age of 28-29 6/7 weeks (1.1% vs. 4.9%, p = 0.002) and birth weight of 1000-1499 g (1.2% vs. 4.2%, p = 0.002). There was a small increasing tendency for the incidence of ROP in infants with a gestational age of < 28 weeks after rhEPO treatment. Conclusions: Repeated low-dose rhEPO administration has no significant influence on the development of ROP; however, it may be effective for type 2 ROP in infant boys or in infants with gestational age > 28 weeks and birth weight > 1500 g. Trial registration: The data of this study were retrieved from two clinical studies registered at ClinicalTrials.gov (NCT02036073) on January 14, 2014, https://clinicaltrials.gov/ct2/show/NCT02036073; and (NCT03919500) on April 18, 2019, https://clinicaltrials.gov/ct2/show/NCT03919500. abstract_id: PUBMED:9280091 Optimizing the approach to anemia in the preterm infant: is there a role for erythropoietin therapy? Objective: To define a subset of very low birth weight (VLBW) infants who might benefit from recombinant human erythropoietin (r-HuEPO) treatment. Study Design: We reviewed the records for all VLBW (birth weight (BW) ≤1500 gm) infants who were admitted to our nursery within the first 3 days of life between January 1991 and December 1994 and discharged alive. Results: These infants received an average of 2.02 transfusions, far fewer than the 7 to 11 previously reported for VLBW infants. Infants with a BW of 1251 to 1500 gm received very few transfusions. More than three quarters of transfused infants received a transfusion in the first 2 weeks of life, before r-HuEPO would be expected to be effective. Assigning units to individual infants and holding the units for 14 days, a practice adopted in our blood bank in 1993, resulted in a 44% decrease in donor exposures in infants receiving more than one transfusion. Holding assigned units for 30 days, a practice our blood bank has now adopted, should result in 56% of all transfused infants having a single donor exposure and 89% having one or two donor exposures. Cost-benefit analysis only supports routine use of r-HuEPO in infants weighing less than 750 gm. Conclusion: VLBW infants receive far fewer transfusions than the number previously reported. Assigning units to individual patients and holding those units for 30 days, together with efforts to minimize the need for transfusions, make routine use of r-HuEPO unnecessary. abstract_id: PUBMED:22391632 Multidose optimization simulation of erythropoietin treatment in preterm infants. Introduction: Preterm infants commonly develop anemia requiring red blood cell transfusions (RBCTx). Although an alternative therapy is recombinant human erythropoietin (Epo), it is not widely employed. To provide a rigorous scientific basis supporting the latter approach, a model-based simulation analysis of endogenous erythropoiesis was developed. Results: The pharmacodynamic/pharmacokinetic (PK/PD) model identified an optimal Epo dosing algorithm in preterm infants that demonstrated maximal efficacy when Epo was dosed frequently during the early weeks of life (when phlebotomy loss is greatest).
Model-based simulations employing optimized Epo dosing predicted that 13 of the 27 (46%) infants would avoid RBCTx ("good responders"). Importantly, simulation results identified five subject-specific covariate factors predictive of good Epo response. Discussion: This simulation study provides a basis for possibly eliminating RBCTx in infants who can be selected for optimized Epo therapy. Methods: Epo PD hemoglobin production parameters were determined in 27 preterm infants studied intensively during the first 28 d of life. Model-derived Epo PD parameters were combined with PK parameters derived from the literature to simulate an optimized intravenous Epo bolus dosing schedule. The goal of this simulated optimized schedule was to eliminate RBCTx, as prescribed per current guidelines, in as many preterm infants as possible. abstract_id: PUBMED:23485870 Population pharmacodynamic analysis of erythropoiesis in preterm infants for determining the anemia treatment potential of erythropoietin. A population pharmacokinetics/pharmacodynamic (PK/PD) model was developed to describe changes in erythropoiesis as a function of plasma erythropoietin (EPO) concentration over the first 30 days of life in preterm infants who developed severe anemia requiring red blood cell (RBC) transfusion. Several covariates were tested as possible factors influencing the responsiveness to EPO. Discarded blood samples in 27 ventilated preterm infants born at 24-29 wk of gestation were used to construct plasma EPO, hemoglobin (Hb), and RBC concentration-time profiles. The amount of Hb removed for laboratory testing and that transfused throughout the study period were recorded. A population PK/PD model accounting for the dynamic Hb changes experienced by these infants was simultaneously fitted to plasma EPO, Hb, and RBC concentrations. A covariate analysis suggested that the erythropoietic efficacy of EPO is increased for preterm infants at later gestational ages. The PD analysis showed a sevenfold difference in maximum Hb production rate dependent on gestational age and indicated that preterm infants, when stimulated by EPO, have the capacity to produce additional Hb that may result in a decrease in RBC transfusions. The present model has utility in clinical trial simulations investigating the treatment potential of erythropoietic stimulating agents in the treatment of anemia of prematurity. abstract_id: PUBMED:26502725 A Mass Balance-Based Semiparametric Approach to Evaluate Neonatal Erythropoiesis. Postnatal hemoglobin (Hb) production in anemic preterm infants is determined by several factors including the endogenous erythropoietin levels, allogeneic RBC transfusions administered to treat anemia, and developmental age. As a result, their postnatal Hb production rate can vary considerably. This work introduces a novel Hb mass balance-based semiparametric approach that utilizes infant blood concentrations of Hb from the first 30 postnatal days to estimate the amount of Hb produced and the erythropoiesis rate in newborn infants. The proposed method has the advantage of not relying on specific structural pharmacodynamic model assumptions to describe the Hb production, but instead utilizes simple mass balance principles and nonparametric regression analysis. The developed method was applied to the Hb data from 79 critically ill anemic very low birth weight preterm infants to evaluate the dynamic changes in erythropoiesis during the first month of life and to determine the inter-subject variability in Hb production. 
The estimated mean (±SD) cumulative amount of Hb produced by the infants over the first month of life was 6.6 ± 3.4 g (mean body weight, 0.768 kg), and the mean estimated body weight-scaled Hb production rate over the same period was 0.23 ± 0.12 g/day/kg. A significant positive correlation was observed between infant gestational age and the mean body weight-scaled Hb production rate over the first month of life (P < 0.05). We conclude that the proposed mathematical approach and its implementation provide a flexible framework to evaluate postnatal erythropoiesis in newborn infants. abstract_id: PUBMED:15214254 Issues in transfusing preterm infants in the NICU. Decisions regarding whether or not to transfuse preterm infants in the neonatal intensive care unit are often difficult. Although numbers of red blood cell transfusions have decreased in recent years, transfusions are still a common occurrence. For each infant, the advantages of transfusion must be balanced against the consequences and potential risks. This article examines physiologic and iatrogenic causes of anemia in preterm infants, clinical consequences of anemia, the use of transfusions and their risks, and the role of erythropoietin therapy. abstract_id: PUBMED:15693870 Recombinant human erythropoietin therapy in low-birthweight preterm infants: a prospective controlled study. Background: This study aimed to determine the effectiveness of recombinant human erythropoietin therapy in preventing premature anemia in low-birthweight preterm infants. Methods: A total of 292 premature infants born before 33 gestational weeks with a birthweight below 1500 g were enrolled into the study. In addition to their conventional supportive therapy (medications), recombinant human erythropoietin 200 U/kg twice a week, subcutaneously, was given to 142 randomly selected premature infants for 6 weeks. The blood count variables and need for transfusions were compared with those of the remaining 150 premature infants during 6 months of follow-up. Results: Serum erythropoietin levels were 11.3 +/- 6.1 mU/mL and 38.3 +/- 19.1 mU/mL in the erythropoietin group before and at the fourth week of the study, respectively (P < 0.001). Reticulocyte counts of the group treated with erythropoietin were 146 x 10^6 +/- 28 x 10^6/mL and 122 x 10^6 +/- 27 x 10^6/mL at the fourth and seventh week of the study, respectively, and these values were significantly higher when compared with the control group (P < 0.001 and P < 0.001). Over the same period, hematocrit values were also found to be higher in the treatment group than the control group (P < 0.001). Serum ferritin levels were lower in the treatment group compared with the control group at the fourth week of the study. No side-effects related to erythropoietin usage were encountered. The need for packed cell transfusions was 47% in the group treated with erythropoietin and 62.6% in the control group. A statistically significant difference was found for transfusion needs between the control and treatment groups (P < 0.001). Conclusion: Recombinant erythropoietin is effective therapy for maintaining stable hematocrit levels in low-birthweight preterm infants and reduces the need for blood transfusions. abstract_id: PUBMED:12596608 Caring for the renal failure patient: optimizing iron therapy. The effectiveness of anemia management in patients with end stage renal disease (ESRD) has increased over the past 4 years.
However, approximately 26% of treated patients still do not meet the minimum hemoglobin (Hgb) value of 11 g/dl that is recommended by the K/DOQI Clinical Practice Guidelines (National Kidney Foundation [NKF], 2001). One of the main obstacles to good patient outcome may be iron deficiency, which is common in both the predialysis and dialysis period. Since iron is needed for Hgb synthesis, iron depletion exacerbates anemia and reduces the response to recombinant erythropoietin (rEPO) therapy. Health care providers can significantly improve patient outcome by addressing iron deficiency more rigorously. A good starting point is the establishment of an iron deficiency management protocol that includes early evaluation of iron status and aggressive iron therapy. Iron therapy, in turn, can be optimized by administering safe and effective iron supplements and by implementing maintenance iron regimens to prevent the recurrence of iron deficiency. By making these simple improvements to their treatment approach, clinicians can enhance the effectiveness of anemia management in patients with ESRD. Answer: Erythropoietin therapy has been considered a potential approach to managing anemia in preterm infants. The role of recombinant human erythropoietin (rhEPO) in this context has been explored in various studies. One study reviewed the records of very low birth weight (VLBW) infants and found that these infants received fewer transfusions than previously reported, suggesting that routine use of rhEPO might be unnecessary, especially when efforts are made to minimize the need for transfusions and when blood units are assigned to individual patients for extended periods. The study concluded that cost-benefit analysis only supports the routine use of rhEPO in infants weighing less than 750 gm (PUBMED:9280091). Another study evaluated the effect of early prophylactic low-dose rhEPO administration on retinopathy of prematurity (ROP) in very preterm infants. The study found that repeated low-dose rhEPO administration had no significant influence on the development of ROP overall, but it may be effective for type 2 ROP in infant boys or in infants with a gestational age > 28 weeks and birth weight > 1500 g (PUBMED:33076939). A population pharmacokinetic/pharmacodynamic (PK/PD) model suggested that the erythropoietic efficacy of EPO is increased for preterm infants at later gestational ages and indicated that preterm infants have the capacity to produce additional hemoglobin when stimulated by EPO, potentially resulting in a decrease in red blood cell transfusions (PUBMED:23485870). Furthermore, a study on the efficacy of iron supplementation in preterm infants found that iron bisglycinate chelate may be an alternative to iron sulfate in the prevention and treatment of preterm newborn anemia, with a higher bioavailability allowing for a lower dose to achieve equivalent efficacy (PUBMED:29366419). In summary, while there is evidence supporting the use of erythropoietin therapy in certain subsets of preterm infants, particularly those with very low birth weight, the decision to use rhEPO should be made on a case-by-case basis, considering the potential benefits, risks, and the cost-effectiveness of the treatment. Practices such as minimizing blood loss, optimizing nutritional support, and using alternative iron supplementation forms can also play significant roles in managing anemia in preterm infants (PUBMED:23025780).
Instruction: Is there scope for providing oral cancer health advice in dental access centres? Abstracts: abstract_id: PUBMED:18833209 Is there scope for providing oral cancer health advice in dental access centres? Objective: There are more than 3,500 new cases of oral cancer each year in the UK. The purpose of this study was to establish the proportion of patients attending a dental access centre that are at risk of developing oral cancer because of lifestyle habits, and in turn determine whether access centres are a suitable location for the provision of advice on smoking cessation and alcohol consumption. Methods: Data were collected prospectively about the smoking and drinking habits of patients attending a dental access centre in Nottingham. Three hundred and fifty-nine patients attended the dental access centre during the survey period. Patients were categorised into one of four groups ranging from low risk to high risk, according to their smoking history and alcohol intake. Results: More than 50% of patients attending the dental access centre were smokers, with almost 40% of the patients in high or very high tobacco and alcohol use groups. The majority of attendees were under 45 years of age. Conclusions: Significant numbers of patients attending the dental access centre have lifestyle habits that make them vulnerable to oral cancer. Dental access centres could usefully provide opportunistic health messages to patients about risk factors in the development of oral cancer. abstract_id: PUBMED:29284949 Smoking Cessation Advice: Knowledge, Attitude, and Practice among Clinical Dental Students. Background: Smoking is the single most important public health challenge facing the National Health Service. The detrimental effects of tobacco smoking on general health are well documented. Smoking is a primary risk factor for oral cancer and many oral diseases. Dental professionals can play an important role in preventing adverse health effects by promoting smoking cessation. Objective: To assess the knowledge, attitude, and practice among clinical dental students in giving smoking cessation advice and to explore the barriers to this activity. Materials And Methods: A total of 262 clinical dental trainees from two dental colleges (College of Dental Sciences and Bapuji Dental College) in Davangere city were included in the survey. A self-administered questionnaire was used to assess the knowledge, attitude, and practice toward tobacco cessation advice. Results: Among the 262 participants in the study, around 51% said they knew about Nicotine Replacement Therapy, and among them, only 4.6% were aware of the options available in the market. When asked about the 5A's of tobacco cessation, only 35.5% were aware of them. Similarly, when asked about the 5R's of tobacco cessation, 48.5% were unaware of them. Conclusions: The respondents did not have sufficient knowledge regarding tobacco cessation advice. With patients' disinterest and lack of time cited as the main barriers to providing tobacco cessation advice, it is highly recommended that chapters on tobacco, its effects, and cessation of the habit be incorporated into the undergraduate dental curriculum, with simultaneous application in clinical practice. abstract_id: PUBMED:37951164 Exploring health advice by dental professionals in the USA: A secondary data analysis of NHANES (2015-2018). Objectives: To examine socioeconomic and ethnic variations in the provision of health advice by dental professionals.
Methods: Data were from the National Health and Nutrition Examination Survey (NHANES) (2015-2018). Socioeconomic position (poverty-income ratio and education) and ethnicity were the main exposures. The outcome variable of interest was whether participants received health advice regarding the "benefits of quitting cigarettes," "benefits of monitoring blood sugar levels," and "importance of oral cancer screening". Logistic regression analysis was used to examine the relationship between socioeconomic factors/ethnicity and health advice after adjusting for covariates. Results: The analysis included a total of 5524 people aged eighteen and above who had complete data. Black and Hispanic individuals had higher odds of receiving advice on smoking (OR = 1.49, 95% CI: 1.04-2.12 and OR = 1.48, 95% CI: 1.05-2.07, respectively) and glucose monitoring (OR = 3.00, 95% CI: 2.03-4.43 and OR = 3.14, 95% CI: 2.04-4.82, respectively), but no significant difference for cancer screening advice. Higher poverty-income ratios (PIR) were associated with lower odds of receiving smoking advice (OR = 0.91, 95% CI: 0.84-0.98), but no significant associations were observed for glucose monitoring or cancer screening advice. The study's findings reveal a social gradient in the provision of cancer advice, with individuals having higher education levels, particularly university education (OR = 1.69, 95% CI: 1.24-2.31), showing significantly higher odds of receiving cancer screening advice. Conclusion: The study highlights significant variations in health advice provision in dental settings, with education level, ethnicity, and smoking status playing prominent roles, emphasizing the need for targeted interventions to promote equity and cultural competence in delivering health advice in dental settings. Practice Implication: The results emphasize the importance of strong policies and ongoing education for dental professionals to ensure optimal treatment and prevention. abstract_id: PUBMED:21436799 Patient awareness of oral cancer health advice in a dental access centre: a mixed methods study. Objective: In the United Kingdom in 2006, 5,325 persons were diagnosed with oral cancer; and in 2007 it caused around 1,850 deaths. The purpose of this study was to assess the patient awareness, in a dental access centre, of a poster and leaflet campaign providing information about smoking and excess alcohol consumption as risk factors in the development of oral cancer, and to explore dental patients' beliefs and perceptions about these risk factors. Methods: Posters and leaflets providing information about risk factors for oral cancer were displayed in the patient waiting areas of a dental access centre. Data were collected prospectively in relation to the smoking and drinking habits of patients attending the centre. This information was used to categorise patients into one of four groups ranging from low to high consumption. During triage, patients were asked if they had read any of the information about oral cancer that was on display, and patients in the high risk groups were asked to participate in a semi-structured interview that would explore their knowledge about risk factors and their views on the delivery of healthcare messages in relation to oral cancer. Results: Data on risk status and exposure to the poster and leaflet campaign were collected for 1,161 patients attending during the study period. More than 50% of these patients were smokers, with 36% in the high or very high tobacco and alcohol use groups.
Approximately 40% of patients within each consumption group had read any of the information available. Nine patients agreed to be interviewed and overall knowledge about risk factors for oral cancer, even after reading the information was poor. Conclusion: Dental access centres attract a significant number of patients with lifestyle habits that make them vulnerable to oral cancer, and as such are well placed to deliver oral health messages to this high risk group. However, the delivery of information through a simple poster and leaflet campaign is likely to have limited impact. abstract_id: PUBMED:35330372 Oral Health among Elderly, Impact on Life Quality, Access of Elderly Patients to Oral Health Services and Methods to Improve Oral Health: A Narrative Review. Dental health is often neglected among the elderly because of the numerous comorbidities in this population, such as cardiovascular diseases. However, dental health influences general health and quality of life by impacting both the general health and the psychological state of the individual. The present review highlights the main dental comorbidities in the elderly population, their impact on the quality of life, the barriers towards access to dental care in the elderly and methods to improve their dental health. Information related to dental care and its importance must be provided both to older individuals and their caregivers in order to detect dental pathology and treat it adequately. Ensuring dental health involves the whole society of elders, caregivers, dental care providers, the public sector, health policymakers, and the private sector. abstract_id: PUBMED:31490570 Patients with head and neck cancers' oral health knowledge, oral health-related quality of life, oral health status, and adherence to advice on discharge to primary dental care: A prospective observational study. Aims: This prospective observational study set out to assess patients' oral health knowledge, oral health related quality of life (OHRQoL) and status immediately posttherapy, and adherence with oral health behaviors posttreatment with radiotherapy for head and neck cancer (HaNC). Methods: A prospective observational study post HaNC therapy. At time one (T1), prior to discharge to primary dental care, oral health knowledge, OHRQoL, and status were assessed and dental health advice was delivered. At time two (T2), four weeks postdischarge, patients' adherence with dental health advice was reassessed. Results: Thirty people postradiotherapy were recruited (mean age 58.9, SD ± 8.4). Twenty-three (77%) were men. Thirteen (45%) respondents reported they were not informed about the side effects of radiotherapy. On discharge to primary care, 20 (67%) of the respondents rated their oral health as fair or worse, while 47% were displeased with dental appearance. Respondents reported a mean of 8.6 impacts (SD ± 5.3) on OHRQoL. Five (17%) and two (7%) patients had an established or a new osteoradionecrosis, respectively, and 16 patients (53%) had greater than two new carious teeth. One month postdischarge (T2), adherence with dental advice was high, 93% were registered with a primary care dentist and all brushed their teeth daily. Conclusion: Within the limitations of this study in a single-center, adherence with oral health advice was high. However, participants felt they received little preparatory information about side effects and impacts of radiotherapy on oral and dental health, and how to maintain oral health after radiotherapy. 
abstract_id: PUBMED:24074015 Addressing oral health disparities, inequity in access and workforce issues in a developing country. The health sector challenges in India like those in other low and middle income countries are formidable. India has almost one-third of the world's dental schools. However, provisions of oral health-care services are few in rural parts of India where the majority of the Indian population resides. Disparities exist between the oral health status in urban and rural areas. The present unequal system of mainly private practice directed towards a minority of the population and based on reparative services needs to be modified. National oral health policy needs to be implemented as a priority, with an emphasis on strengthening dental care services under public health facilities. A fast-changing demographic profile and its implications needs to be considered while planning for the future oral health-care workforce. Current oral health status in developing countries, including India, is a result of government public health policies, not lack of dentists. The aim of the article is to discuss pertinent issues relating to oral health disparities, equity in health-care access, dental workforce planning and quality concerns pertaining to the present-day dental education and practices in India, which have implications for other developing countries. abstract_id: PUBMED:15132812 Cancer-related oral health care services and resources: a survey of oral and dental care in Canadian cancer centres. Purpose: Prevention and management of oral complications of cancer and cancer therapy will improve oral function and quality of life, and reduce morbidity and the cost of care. Oral assessment, and oral and dental care have been strongly recommended before cancer therapy and should be continued during and after cancer therapy. The purpose of this survey was to assess the resources available for oral care in Canadian cancer centres. Methods: Provincial cancer centres were assessed by questionnaire to determine the resources available for oral care in these facilities. Results: Wide variability in oral and dental care of patients with cancer across Canada and a lack of documented standards of care were reported. Very few cancer centres had institutionally supported dental staff to support the oral care of patients with cancer, and few had dental treatment capability on site. The majority of centres managed oral care needs in the community with the patient's prior dentist. Conclusions: We recommend that national guidelines be developed for medically necessary oral and dental care for patients with cancer. abstract_id: PUBMED:1799086 Getting dental services to the rural 85 percent. For the past 30 years the organization of government dental services in Papua New Guinea has been modelled on the School Dental Service in New Zealand. Although the two most important mouth diseases occur in adults, children have been given priority. The dental work has been done by a workforce of less than 200 dental officers and dental therapists. Dental services are invisible to the mass of Papua New Guineans. Such organization of dental services has been disastrous for the oral health of the rural population. The need is urgent to integrate dental services within the mainstream health services, doing as much as possible at the aid posts, and referring on to health centres only what cannot be done at the local level. 
Both the present cadre of 2250 orderlies in aid posts plus the planned new 4750 community health workers by the year 2000 need to be trained in basic dental skills. The 2400 aid posts need to be equipped with a kit of basic dental materials and instruments, costing about K250 each. Training in basic dental care can be done in six weeks, and should take place within the province, under conditions as close as possible to real life. The benefits of providing a basic dental service at aid posts are obvious--the population coverage will be much higher, the range of diseases that are preventable or controllable will increase, and many of the deaths that now occur from cancer of the mouth will be prevented or delayed. abstract_id: PUBMED:15646586 The role of dental hygienists in oral health prevention. Most dental diseases are preventable. This indicates that the main concept of dentistry could be changed to a situation in which the dental hygienist becomes the principal oral care professional. The competencies of dental hygienists focus on disease prevention and oral health promotion; thus referral to a dentist would only become necessary in the event of a failure in the preventive program where disease cannot be controlled. Future oral health care personnel need to be better educated to encourage people to implement healthy lifestyles rather than to treat teeth. In addition, the connection between oral health, general health and health-related quality of life will necessitate a multidisciplinary approach to prevention and oral health promotion. To focus strictly on oral health would too narrowly define the role of the dental hygienist in comprehensive prevention and health promotion. There is no precise boundary between the oral cavity and the rest of the body. Dietary advice to prevent dental caries and smoking cessation counseling to prevent periodontal disease and oral cancer also promote general health. Consequently the focus on prevention and health promotion makes the dental hygienist a very important person in the dental team of the future. Answer: Yes, there is scope for providing oral cancer health advice in dental access centres. The study conducted in a dental access centre in Nottingham found that more than 50% of patients attending were smokers, with almost 40% of patients in high or very high tobacco and alcohol use groups, which are significant risk factors for oral cancer (PUBMED:18833209). This indicates that dental access centres serve a population that could benefit from oral cancer health advice. Furthermore, the study concluded that dental access centres could usefully provide opportunistic health messages to patients about risk factors in the development of oral cancer. Additionally, the study on clinical dental students' knowledge, attitude, and practice regarding smoking cessation advice highlighted the need for dental professionals to be more involved in promoting smoking cessation, as smoking is a primary risk factor for oral cancer (PUBMED:29284949). This suggests that dental professionals, including those working in dental access centres, could play an important role in preventing adverse health effects by promoting smoking cessation. Moreover, the secondary data analysis of NHANES (2015-2018) revealed that dental professionals in the USA do provide health advice regarding the benefits of quitting cigarettes, which is relevant to oral cancer prevention (PUBMED:37951164). 
This further supports the idea that dental access centres could be an appropriate setting for delivering oral cancer health advice. However, it is important to note that simply displaying posters and leaflets may have limited impact, as a study in a dental access centre showed that even after reading the provided information, patients' overall knowledge about risk factors for oral cancer was poor (PUBMED:21436799). This suggests that more interactive and engaging methods of delivering health advice may be necessary to effectively educate patients about oral cancer risks. In conclusion, dental access centres have the potential to be a suitable location for the provision of advice on smoking cessation and alcohol consumption, which are risk factors for oral cancer. However, the approach to delivering this advice needs to be carefully considered to ensure it is effective.
Instruction: Exercise-induced reduction in systemic vascular resistance: a covert killer and an unrecognised resuscitation challenge? Abstracts: abstract_id: PUBMED:19631477 Exercise-induced reduction in systemic vascular resistance: a covert killer and an unrecognised resuscitation challenge? Background: Systemic vascular resistance falls in exercise as a consequence of metabolically-linked vasodilatation in active skeletal muscles. This exercise-induced vasodilatation is closely linked with reduced muscle tissue oxygen tension and is characterised by a reduced response to adrenergic vasoconstrictor mechanisms, which is often referred to as functional sympatholysis. Systemic arterial blood pressure in exercise is maintained at normal or, more commonly, at elevated levels by an increase in cardiac output and increased sympathetic vasomotor tone. Recovery of normal resting skeletal muscle tissue oxygen tension and skeletal muscle vascular tone after exercise depends on the post-exercise recovery process. This process requires ongoing elevated skeletal muscle perfusion and can therefore be predicted to be impaired in shock and cardiopulmonary resuscitation scenarios. Comprehensive consideration of this exercise physiology and its extrapolation into shock, cardiac arrest and resuscitation scenarios supports the proposal that exercise-induced sympatholytic vasodilatation in skeletal muscle may be of considerable unrecognised significance for resuscitation medicine. Main Hypothesis: Reduced systemic vascular resistance due to pre-existing exercise-induced sympatholytic vasodilatation in skeletal muscle can significantly exacerbate systemic arterial hypotension in acute shock states and resuscitation scenarios. Sub-hypotheses: 1. Onset of syncope, clinical shock states and pulseless electrical activity can occur at significantly higher cardiac output levels in subjects who were engaged in immediate pre-morbid exercise as compared to resting subjects. 2. The efficacy of external chest compression in generating coronary and cerebral perfusion in cardiopulmonary resuscitation can be significantly impaired when cardiac arrest has occurred during exercise. 3. The efficacy of adrenergic vasopressor agents in resuscitation scenarios can be significantly impaired in subjects who were engaged in immediate pre-morbid exercise. Current Evidence: The limited available evidence is compatible with the hypothesis being true but does not provide direct confirmation; there is no evidence available directly supporting or refuting the hypothesis. Implications: Significant potential clinical implications are outlined relating to the management of cardiopulmonary and trauma resuscitation for patients who were involved in immediate pre-morbid exercise, particularly, but not exclusively, at higher exercise intensities. There are also significant potential prognostic implications. Conclusion: Reduction in systemic vascular resistance due to exercise-induced sympatholytic vasodilatation in skeletal muscle may largely explain the reported poor success rate for cardiopulmonary resuscitation with prompt defibrillation for sudden cardiac arrest in young, previously healthy athletes. Investigation of this unexplored area of pathophysiology poses major difficulties but could lead to significant improvements in the outcomes of resuscitation for patients who were involved in immediate pre-morbid exercise. abstract_id: PUBMED:33001175 Systemic vascular distensibility relates to exercise capacity in connective tissue disease.
Objective: Exercise intolerance is a common clinical manifestation of CTD. Frequently, CTD patients have associated cardio-pulmonary disease, including pulmonary hypertension or heart failure that impairs aerobic exercise capacity (pVO2). The contribution of the systemic micro-vasculature to reduced exercise capacity in CTD patients without cardiopulmonary disease has not been fully described. In this study, we sought to examine the role of systemic vascular distensibility α in reducing exercise capacity (i.e. pVO2) in CTD patients. Methods: Systemic and pulmonary vascular distensibility α (%/mmHg) was determined from multipoint systemic pressure-flow plots during invasive cardiopulmonary exercise testing with pulmonary and radial arterial catheters in place in 42 CTD patients without cardiopulmonary disease and compared with 24 age- and gender-matched normal controls. Results: During exercise, systemic vascular distensibility α was reduced in CTD patients compared with controls (0.20 ± 0.12%/mmHg vs 0.30 ± 0.13%/mmHg, P = 0.01). The reduced systemic vascular distensibility α was associated with impaired stroke volume augmentation. On multivariate analysis, systemic vascular distensibility α was associated with decreased exercise capacity (pVO2) and decreased systemic oxygen extraction. Conclusion: Systemic vascular distensibility α is associated with impaired systemic oxygen extraction and decreased aerobic capacity in patients with CTD without cardiopulmonary disease. abstract_id: PUBMED:30009276 Meal-exercise challenge and physical activity reduction impact on immunity and inflammation (MERIIT trial). Background: The effect of a pre-exercise meal as a countermeasure to exercise-induced immunodepression is poorly known. Also, sedentary behavior is associated with increased cardiometabolic risk, but studies on immune changes are lacking. Therefore, we aimed to assess: 1) the impact of a pre-exercise Mediterranean meal (MdM) compared with a fast-food type meal (FFM) on exercise-induced immunological changes and 2) the impact of an induced acute period of sedentary behavior on neuro-immune-endocrine status. Methods/Design: This is a two-step clinical trial including: (a) a randomized crossover clinical trial comparing the effect of a high-fat/low-nutrient-dense meal, FFM, with an isoenergetic, similar high-nutrient-dense meal, MdM, on the immune response to an exercise challenge (EC), and (b) a pilot trial assessing the neuro-immune-endocrine changes induced by acutely halving the usual physical activity level. Results: A total of 46 participants (26 females), median age 25 years, were included. Of those, 39 completed the protocol, including overweight, physically active and inactive participants, and participants with asthma. There were no differences in the EC between interventions. Dietary factors and physical activity were closely monitored during interventions and kept similar. During physical inactivity induction, 31% reached the target of 50% reduction in mean step number and 77% reached a 30% reduction. Conclusion: The use of a pre-exercise meal to modulate immune response and the understanding of the immunological impact of physical inactivity might help to establish future recommendations on how to practice exercise in a safer way and to recognize the potential impact of inactivity. abstract_id: PUBMED:27849481 Brief Report: Systemic Vascular Access and Resuscitation via Corpus Cavernosum.
Background: Pre-hospital systemic vascular access with early resuscitation in the hypovolemic trauma patient can be problematic and is attempted through venous cut-downs, peripheral IV lines, and/or intraosseous routes. This brief report examines an alternative for males via the corpus cavernosum (CC). Methods: A systematic literature review using certain inclusion criteria including, but not limited to, corpus cavernosum access and resuscitation was conducted and a summary table created. Findings: The six articles that met criteria revealed quick and easy CC access with rapid flow rates and resuscitation times in both humans and animal models using either fluids or blood products. Only one article revealed a complication, which was a shaft hematoma that resolved spontaneously over a period of a few days. Discussion/Impact/Recommendations: Systemic vascular access and resuscitation via the CC could be considered a safe and effective alternative if more traditional techniques fail in an appropriate pre-hospital hypovolemic male casualty (i.e., no genital or pelvic trauma that could interfere with the technique). If further studies are conducted and the technique more robustly validated, it could be considered as a possible addition to pre-hospital treatment protocols such as TCCC. Possible publication bias could have been a limitation of this study. abstract_id: PUBMED:30254047 Efficient Use of Simple Exercise-Induced Bronchoconstriction Challenge Testing in Pediatric Exercise-Induced Dyspnea. Background: A simple exercise test to evaluate for exercise-induced bronchoconstriction (EIB) is routinely ordered in pediatric patients with exercise-induced dyspnea. However, the utility of this test in establishing the cause of exercise-induced dyspnea is not thoroughly examined in the pediatric population. We sought to assess the efficiency of a simple EIB challenge test in finding the cause of exercise-induced dyspnea in pediatric patients referred to our tertiary center in the last 5 y. Methods: We performed a retrospective chart review for all of these exercise tests done at Akron Children's Hospital from March 2011 to March 2016. Patients with chronic conditions (eg, cystic fibrosis, cardiac abnormality) were excluded. Demographics, clinical diagnosis of asthma, a presumptive diagnosis of exercise-induced asthma or EIB by the referring provider, symptoms with and without exercise, albuterol use, spirometry, and simple EIB challenge test results were collected. The chi-square test of independence was utilized in the examination of potential dependent relationships between categorical variables. A P value < .05 was considered to be statistically significant. Results: Out of 164 enrolled subjects (57 males; age 6-20 y), only 19% showed evidence of EIB. There were no significant associations between EIB status (ie, EIB-positive or EIB-negative) based on exercise testing and gender, typical symptoms of EIB, diagnosis of exercise-induced asthma or EIB, and albuterol use (P > .05). However, a subject without asthma was 2.8 times more likely to have a negative exercise test for EIB (odds ratio 2.8, 95% CI 1.3-6.5); in addition, approximately 85% of tests in subjects without asthma were negative. Conclusion: In a majority of subjects without asthma, simple EIB challenge testing failed to uncover the cause of exercise-induced dyspnea and thus was inefficient. In these subjects, cardiopulmonary exercise testing may be more useful and cost-effective to explore other causes of dyspnea including EIB.
abstract_id: PUBMED:28939136 Exercise-induced bronchoconstriction in children with asthma: An observational cohort study. Background/purpose: The diagnosis of exercise-induced bronchoconstriction (EIB) was established by changes in lung function after exercise challenge. The prevalence of EIB and factors related to EIB were not fully described in children with asthma. The aim of this study was to investigate the prevalence and predictors of EIB in children with asthma. Methods: A total of 149 children with physician-diagnosed asthma above 5 years of age underwent standardized treadmill exercise challenge for EIB and methacholine challenge for airway hyper-responsiveness from October 2015 to December 2016. Results: EIB was present in 52.5% of children with asthma. Compared with children without EIB, there were more patients with atopic dermatitis among children with EIB (p = 0.038). Allergy to Dermatophagoides pteronyssinus and Dermatophagoides farinae was also found more often in children with EIB (p = 0.045 and 0.048, respectively). The maximal decrease in forced expiratory volume in 1 s (FEV1) was highest in patients who were most sensitive to methacholine provocation (provocation concentration causing 20% fall in FEV1 [PC20] ≤ 1 mg/mL). Patients who were more sensitive to methacholine challenge (with lower PC20 levels) developed EIB with a greater decline in FEV1 after exercise challenge (p = 0.038). Among patients with EIB, the development of airflow limitation in patients with methacholine-induced airway hyper-responsiveness was more abrupt and severe compared with patients without airway hyper-responsiveness (p = 0.045 and 0.033, respectively). Conclusion: EIB was present in 52.5% of children with asthma. The more severe the methacholine-induced hyper-responsiveness, the higher the prevalence and severity of EIB. abstract_id: PUBMED:7787319 Thrombomodulin in exercise-induced asthma. Thrombomodulin (TM) is a membrane protein present in the vascular endothelium. It has also been found in human plasma, within which, however, its pathological functions have not been clearly described. In this study, the plasma TM concentrations in 19 asthmatic patients were determined by sandwich enzyme immunoassay using two monoclonal antibodies for human TM. The concentration of plasma TM in exercise-induced asthma (EIA)-positive asthmatic patients was significantly increased by exercise challenge. In addition, for these patients a positive correlation was found between the severity of EIA and the degree of change in plasma TM induced by exercise challenge. These findings suggest that the increase in influx of TM into the plasma in EIA-positive asthmatics may be due to generalized pulmonary endothelial damage following exercise challenge. abstract_id: PUBMED:25104236 Influence of pulmonary vascular reserve on exercise-induced pulmonary hypertension in patients with systemic sclerosis. Background: Exercise-induced pulmonary hypertension (PH) is considered an early preclinical functional phase of resting PH in systemic sclerosis (SSc). In this study, we investigated the prevalence of exercise-induced PH in patients with SSc and evaluated the influence of pulmonary vascular reserve on exercise-induced PH. Methods: This prospective study included 568 SSc patients. The patients with interstitial lung disease and those with left ventricular dysfunction were excluded (n = 50); finally, 518 patients underwent simple exercise echocardiography using a Master's two-step.
Systolic pulmonary artery pressure (SPAP), the ratio of early diastolic transmitral flow velocity to early diastolic mitral annular velocity (E/e') and pulmonary vascular resistance (PVR) were measured before and after exercise. ΔPVR (the difference between resting and post-exercise values) was used for the assessment of pulmonary vascular reserve. All patients were stratified into the no exercise-induced PH (SPAP <50 mmHg) or exercise-induced PH (SPAP ≥50 mmHg, n = 133) group. Results: Of the study patients, 27% were identified as having exercise-induced PH. ΔPVR was higher in the exercise-induced PH group than in the no exercise-induced PH group (0.2 ± 0.3 vs. 0.4 ± 0.4 WU, P < 0.0001). A weak correlation was found between postexercise SPAP and postexercise E/e' (r = 0.31, P < 0.0001), whereas a strong correlation was found between postexercise SPAP and postexercise PVR (r = 0.62, P < 0.0001). The analyzed data demonstrated that ΔPVR was independently associated with exercise-induced PH (odds ratio, 3.435; 95% CI, 1.013-11.650, P = 0.033). Conclusions: The present study demonstrated that exercise-induced PH was common in patients with SSc. Exercise-induced PH might be closely associated with the factors affecting reduced pulmonary vascular reserve in patients with SSc. abstract_id: PUBMED:28392861 Reported Exercise-Related Respiratory Symptoms and Exercise-Induced Bronchoconstriction in Asthmatic Children. Background: Unlimited physical activity is one of the key issues of asthma control and management. We investigated how reliable reported exercise-related respiratory symptoms (ERRS) are in predicting exercise-induced bronchoconstriction (EIB) in asthmatic children. Methods: In this prospective study, 179 asthmatic children aged 7-15 years were asked specific questions on respiratory symptoms related to exercise and allocated into two groups according to whether they complained of symptoms. Group I (n = 134) consisted of children answering "yes" to one or more of the questions and group II (n = 45) consisted of children answering "no" to all of the questions. Results: Sixty-four of 179 children showed a positive exercise challenge test (ECT). There was no difference in the frequency of a positive test between children in group I (n = 48) and group II (n = 12) (P = 0.47). The sensitivity of a positive report of ERRS to predict a positive ECT was only 37%, with a specificity of 72%. Conclusion: According to current guidelines, the report or lack of ERRS has direct consequences on treatment decisions. However, the history of ERRS did not predict EIB, and one-third of asthmatic children without complaints of ERRS developed EIB during the ECT. This raises the question of the need for objective measures of bronchial hyperresponsiveness (BHR) in pediatric asthma management. abstract_id: PUBMED:23830131 Assessment of exercise-induced bronchoconstriction in adolescents and young children. Recent research shows important differences in exercise-induced bronchoconstriction (EIB) between children and adults, suggesting a different pathophysiology of EIB in children. Although exercise can trigger classic symptoms of asthma, in children symptoms can be subtle and nonspecific; parents, children, and clinicians often do not recognize EIB. With an age-adjusted protocol, an exercise challenge test can be performed in children as young as 3 years of age. However, an alternative challenge test is sometimes necessary to assess potential for EIB in children.
This review summarizes age-related features of EIB and recommendations for assessing EIB in young children and adolescents. Answer: The hypothesis that exercise-induced reduction in systemic vascular resistance could be a covert killer and an unrecognized resuscitation challenge is supported by the understanding that systemic vascular resistance falls during exercise due to metabolically-linked vasodilation in active skeletal muscles. This vasodilation is characterized by a reduced response to adrenergic vasoconstrictor mechanisms, a phenomenon often referred to as functional sympatholysis (PUBMED:19631477). The maintenance of systemic arterial blood pressure during exercise is typically achieved through increased cardiac output and elevated sympathetic vasomotor tone. However, during the post-exercise recovery process, the restoration of normal resting skeletal muscle tissue oxygen tension and vascular tone depends on continued elevated skeletal muscle perfusion, which can be impaired in shock and cardiopulmonary resuscitation scenarios (PUBMED:19631477). The main hypothesis suggests that pre-existing exercise-induced sympatholytic vasodilation in skeletal muscle can significantly exacerbate systemic arterial hypotension in acute shock states and resuscitation scenarios. This could lead to the onset of syncope, clinical shock states, and pulseless electrical activity occurring at higher cardiac output levels in subjects who were engaged in immediate pre-morbid exercise compared to resting subjects. Additionally, the efficacy of external chest compression in generating coronary and cerebral perfusion during cardiopulmonary resuscitation could be significantly impaired when cardiac arrest occurs during exercise. Furthermore, the efficacy of adrenergic vasopressor agents in resuscitation scenarios could be significantly impaired in subjects who were engaged in immediate pre-morbid exercise (PUBMED:19631477). The limited available evidence is compatible with the hypothesis but does not provide direct confirmation. There are significant potential clinical implications related to the management of cardiopulmonary and trauma resuscitation for patients who were involved in immediate pre-morbid exercise, particularly at higher exercise intensities. There are also significant potential prognostic implications. The reduction in systemic vascular resistance due to exercise-induced sympatholytic vasodilation in skeletal muscle may largely explain the reported poor success rate for cardiopulmonary resuscitation with prompt defibrillation for sudden cardiac arrest in young previously healthy athletes. Investigating this unexplored area of pathophysiology poses major difficulties but could lead to significant improvements in the outcomes of resuscitation for patients who were involved in immediate pre-morbid exercise (PUBMED:19631477).
Instruction: Is human chorionic gonadotropin useful for identifying and treating nonpalpable testis? Abstracts: abstract_id: PUBMED:199275 Effects of human chorionic gonadotropin and N6,O2'-dibutyryladenosine 3',5'-monophosphate on phosphofructokinase activity in isolated rat testis. Phosphofructokinase activity in rat testis is elevated by treatment in vitro with human chorionic gonadotropin or N6,O2'-dibutyryladenosine 3',5'-monophosphate. Puromycin or actinomycin D suppresses the effect of the gonadotropin but does not affect the enzyme increase induced by the cyclic nucleotide. The possible causes for the divergent action of the two stimulatory agents are discussed. abstract_id: PUBMED:4341694 Biological activity of human chorionic gonadotropin released from testis binding-sites. The effect of testicular binding of human chorionic gonadotropin upon the biological activities of the hormone was examined by comparison of the binding and activation properties of (125)I-labeled gonadotropin before and after binding to rat testis in vitro. Biologically active (125)I-gonadotropin taken up by rat testis was dissociated from testis binding-sites at low pH and evaluated for its ability to bind again to testis, adenylate cyclase activation, and stimulation of steroidogenesis during subsequent incubation with fresh testis. Binding to tissue receptor-sites for 4 hr did not impair the biological properties of gonadotropin, though hormone remaining in the incubation medium had reduced affinity for tissue binding-sites during subsequent incubation with rat testes. In comparison to the original preparation, (125)I-labeled gonadotropin previously eluted from specific binding-sites of rat testis showed significantly increased binding activity and stimulation of cyclic AMP and testosterone release during further incubation with rat testes in vitro. The enhancement of biological activity of the eluted hormone is attributable to affinity purification of the original hormone preparation by selective uptake at receptor-sites. These results demonstrate that gonadotropin is not inactivated or degraded during combination with gonadotropin receptors of rat testis. abstract_id: PUBMED:3601278 Receptor binding properties and biologic action of deglycosylated human chorionic gonadotropin in human ovary and testis. The receptor binding properties and biologic actions of chemically deglycosylated-asialo human choriogonadotropin (AHF-hCG) were studied in human ovary and testis. In corpus luteum and testis homogenates, the relative binding affinity of AHF-hCG was two- to fourfold higher in the ovary and five- to tenfold higher in the testis than that of native hCG. When assayed for luteinizing hormone (LH)-like activity in granulosa-luteal cells from in vitro fertilization patients and in testicular minces from patients undergoing orchiectomy for prostatic cancer, AHF-hCG did not stimulate cyclic adenosine monophosphate production. When added with hCG to granulosa-luteal cells or to testicular minces, AHF-hCG inhibited hCG-stimulated cyclic adenosine monophosphate production. These results indicate that the enhanced affinity to the LH receptor caused by removal of the sugar moieties from hCG is associated with a total inability to activate granulosa-luteal and Leydig cell adenylate cyclase, and that AHF-hCG is, in the human gonad, an hCG antagonist. abstract_id: PUBMED:6695875 Human chorionic gonadotropin testing. Changing testicular response in infants with microphallus and gonadotropin deficiency.
The evaluation of microphallus often includes human chorionic gonadotropin (HCG) stimulation to evaluate testicular androgen production. The response in testosterone secretion after five days of stimulation with chorionic gonadotropin is abnormal in patients with either primary testicular defects or gonadotropin deficiency. The testes of gonadotropin-deficient children do respond if the chorionic gonadotropin therapy is continued for six weeks. The current study reports a normal response to the short course (five days) of chorionic gonadotropin therapy in two 2-month-old patients with XY chromosomes with microphallus whose clinical course and repeated testing at age 3 or 4 years indicated gonadotropin deficiency. The normal response in the neonatal period is probably the result of maternal chorionic gonadotropin priming of the fetal testes in utero. These results indicate that HCG testing is an unreliable method for identifying gonadotropin deficiency in the neonatal period. abstract_id: PUBMED:2732296 Testosterone synthesis and adenylate cyclase activity in the early human fetal testis appear to be independent of human chorionic gonadotropin control. Androgen secretion by the fetal testis is essential for male phenotypic differentiation. In the human fetus testosterone formation is initiated soon after the differentiation of the testis (approximately 8 weeks of gestation), and the maximal testosterone content in fetal testes is achieved between 10 and 15 weeks of fetal life. The testosterone content of the fetal testis declines at the beginning of the third trimester and remains low until after birth. In an effort to understand the regulation of the onset of testosterone formation in the human fetal testis we measured adenylate cyclase activity in response to hCG stimulation in homogenates of fetal testes obtained from first and second trimester human abortuses. Basal adenylate cyclase activity was 50 pmol/mg protein.min at 10 weeks gestation, the peak activity was 137 pmol/mg protein.min at 12 weeks gestation, and activity declined thereafter to 8 pmol/mg protein.min by 16 weeks gestation. NaF-stimulated (0.6 mmol/L) and forskolin-stimulated (50 mumol/L) activities were 4- to 8-fold greater than basal adenylate cyclase activities. The maximal forskolin-stimulated activity occurred at 11 weeks (803 pmol/mg protein.min), and it fell to 35 pmol/mg protein.min by 17 weeks gestation. In contrast, hCG-stimulated (1 mumol/L) adenylate cyclase activity was only slightly greater than basal rates at all ages examined. In addition, hCG did not stimulate baseline testosterone formation in minces of testes obtained between 12 and 18 weeks of gestation. These findings suggest that the onset of testosterone formation in human fetal testes may be independent of gonadotropin control. abstract_id: PUBMED:3220457 Reduction of testicular human chorionic gonadotropin receptors by human chorionic gonadotropin in vivo and in vitro. Changes in rat and human testicular human chorionic gonadotropin (hCG) binding sites induced by hCG were estimated in vivo and in vitro. After a single administration of hCG, the specific 125I-hCG bindings were significantly reduced for 7 and 5 days in rat and human testes, respectively. Thereafter, 125I-hCG bindings had recovered to pretreatment values by the 14th day after the administration. Occupied hCG bindings accounted for about half of the reduced bindings on the day after administration of hCG. 
After this time, however, the occupancy did not contribute so much to the reduction of the bindings. In experiments in vitro using the organ culture technique, an exposure to hCG for 24 h induced a dose-related significant loss of the specific 125I-hCG bindings for 7 and 5 days in rat and human testes, respectively. Thereafter, the loss was gradually recovered. These patterns of changes in 125I-hCG bindings in vitro were similar to those in vivo. These findings suggest that the reduction in hCG binding sites by hCG is due to not only occupancy but also downregulation of the binding sites and that the testicular organ culture method used in the present study is useful to study hormonal regulation of testicular function, especially in human testes. abstract_id: PUBMED:2060708 Human chorionic gonadotropin induces all stages of spermatogenesis in vitro in the male Japanese eel (Anguilla japonica). In the cultivated male Japanese eel, spermatogonia are the only germ cells present in the testis. Using a newly developed organ culture system, we obtained evidence that human chorionic gonadotropin (HCG) can induce the entire process of spermatogenesis, in vitro, from spermatogonia to spermatozoa within 24 days. The HCG-induced spermatogenesis in vitro was accompanied by a marked activation of Sertoli cells and Leydig cells, occurring prior to the beginning of spermatogonial proliferation. These results indicate that gonadotropin triggers spermatogenesis in the Japanese eel and further suggest that this effect of gonadotropin is mediated through the actions of testicular somatic cells. abstract_id: PUBMED:6735151 Onset of the response to chorionic gonadotropin in the chick embryo testis. The stage of development of the chick embryo testis when it begins to respond to gonadotropin stimulation was investigated. The testosterone secretion in vitro, measured by radioimmunoassay, was employed to evaluate the response to hCG in testis from 8 to 16 days of incubation. At 8 to 10 days of the chick embryo development, the testis secreted testosterone, but no increment in the steroid production has been observed after hCG treatment. On the contrary, at 12, 14, and 16 days a clear increase in testosterone secretion has been demonstrated when hCG was added to the culture medium. The absence of hCG response before 12 days of incubation agrees with the hypothesis of an early independence period between testis and adenohypophysis during embryonic development. abstract_id: PUBMED:6543041 Effect of acute chloroquine treatment on prostaglandin- and gonadotropin-stimulated testosterone secretion of rat testis. The effect of chloroquine injection to rats on in vitro-testosterone secretion stimulated by human chorionic gonadotropin (hCG) and prostaglandin E1 (PGE1) was studied. Rats were injected for 5 days with chloroquine phosphate. Testosterone secretion was stimulated by hCG or PGE1 for 3 h in the removed testis and was measured by radioimmunoassay. Chloroquine treatment in vivo does not seem to have any demonstrable effect on testosterone secretion stimulated by hCG or PGE 1 in prepubertal as well as in the postpubertal rats. However, in pubertal rats chloroquine treatment inhibited testosterone secretion in hCG-stimulated testis. abstract_id: PUBMED:4008604 Leydig cell desensitization by human chorionic gonadotropin does not occur in the human fetal testis. 
In vivo data concerning human fetal testicular testosterone production as well as in vitro findings in fetal and neonatal rats suggest that fetal Leydig cells may be capable of responding to gonadotropins and secreting testosterone at high levels for prolonged periods, in contrast to adult testes which reportedly become desensitized after high dose gonadotropin administration. To evaluate fetal testicular testosterone production during long term, high dose gonadotropic stimulation, we cultured human, rhesus monkey, and rabbit fetal testes in organ and cell cultures. After 24 h of culture with different concentrations of hCG (0-100 ng/ml, physiological fetal concentrations during human gestation), the fetal testes were still able to respond to a second hCG stimulus (no desensitization). The 24-h incubation with hCG (0-100 ng/ml) also increased the capacity of the cultures to secrete testosterone during a second incubation in a dose-dependent manner even in the absence of hCG (steroidogenic enzyme induction). Furthermore, hCG increased thymidine incorporation into DNA by the human fetal testis. The results of this study substantiate the role of hCG in the regulation of fetal Leydig cells. They suggest that long term effects via nuclear mechanisms (RNA and DNA synthesis) may be important aspects of this regulation, and that fetal Leydig cells are able to respond to sustained concentrations of gonadotropin without being desensitized. Answer: The provided abstracts do not directly address the use of human chorionic gonadotropin (hCG) for identifying and treating nonpalpable testis. However, they do provide insights into the effects of hCG on testicular function and its potential implications for related conditions. For instance, hCG has been shown to elevate phosphofructokinase activity in isolated rat testis, which could suggest a role in stimulating testicular metabolism or function (PUBMED:199275). Additionally, hCG has been found to maintain its biological activity after binding to testicular receptors, which includes the stimulation of cyclic AMP and testosterone release in rat testes (PUBMED:4341694). This could imply that hCG might stimulate testicular activity, potentially aiding in the treatment of conditions like nonpalpable testis by promoting testicular descent or growth. Moreover, hCG has been demonstrated to induce all stages of spermatogenesis in vitro in the male Japanese eel, indicating its role in triggering testicular development and function (PUBMED:2060708). This could be relevant to the treatment of nonpalpable testis if hCG can similarly stimulate testicular development in humans. However, it is important to note that the response to hCG can vary depending on the developmental stage and the presence of gonadotropin receptors. For example, in the early human fetal testis, testosterone synthesis and adenylate cyclase activity appear to be independent of hCG control (PUBMED:2732296), and in the chick embryo testis, the response to hCG only begins at a certain stage of development (PUBMED:6735151). In the context of nonpalpable testis, which is often a congenital condition, the effectiveness of hCG treatment would depend on the presence and responsiveness of hCG receptors in the affected testes. While the abstracts suggest that hCG can stimulate testicular function, they do not provide conclusive evidence for its use in identifying or treating nonpalpable testis. 
Clinical studies specifically investigating the use of hCG for nonpalpable testis would be required to determine its utility in this context.
Instruction: Is the assessment of lipid profile performed up to seventy two hours from hospital admission due to acute coronary syndrome still valid? Abstracts: abstract_id: PUBMED:15531936 Is the assessment of lipid profile performed up to seventy two hours from hospital admission due to acute coronary syndrome still valid? Background: Serum concentration of lipids and lipoproteins changes during the course of acute coronary syndrome (ACS). Total cholesterol and LDL-cholesterol levels decrease. Also HDL-cholesterol and triglyceride levels are not stable. Therefore, it has been suggested that reliable assessment of lipid profile can be performed either during the first 24 hours or after 6 weeks or even 3 months from the onset of ACS. Aim: To examine whether lipid parameters remain stable during the first 72 hours of hospitalisation due to ACS. Methods: The study group consisted of 84 patients, aged 30-88 years, hospitalised due to ACS, of whom 90% underwent primary percutaneous coronary intervention (PCI). In all patients lipid profile was assessed twice. The first measurement was performed as soon as possible, within 24 hours from hospital admission. The second measurement was obtained within 24 hours from the first sample (group I, n=42), between 24 and 48 hours (group II, n=28), or between 48 and 72 hours (group III, n=15) from the first measurement. Lipid parameters obtained from the first measurement were compared with those obtained during the second assessment. Results: There were no significant differences between lipid parameters measured during the first and second measurement within all three investigated groups nor were there any significant inter-group differences. Conclusions: Lipid profile assessed within 72 hours from the hospital admission due to ACS is as valid as the measurement performed within the first 24 hours. abstract_id: PUBMED:22218413 Emergency service admission time and in-hospital mortality in acute coronary syndrome. Background: The relationship between admission time to an emergency service and in-hospital outcomes in acute coronary syndrome (ACS) is controversial. Admission during off-hours would be associated with worse prognosis. Objective: To assess the influence of admission time on prolonged hospitalization and mortality for ACS patients, regarding regular hours (7AM-7PM) and off-hours (7PM-7AM). Methods: The study assessed prospectively 1,104 consecutive ACS patients. In-hospital mortality and length of hospital stay ≥ 5 days were the outcomes analyzed. Results: Admission during regular hours was greater as compared with that during off-hours (63% vs. 37%; p < 0.001). Unstable angina was more prevalent during regular hours (43% vs. 32%; p < 0.001), while non-ST-segment elevation myocardial infarction (NSTEMI) was during off-hours (33% vs. 43%; p = 0.001). Differences in neither mortality nor length of hospital stay were observed in the time periods studied. Predictive factors for length of hospital stay ≥ 5 days were as follows: age [OR 1.042 (95%CI: 1.025 - 1.058), p < 0.001]; ejection fraction (EF) [OR 0.977 (95%CI: 0.966 - 0.988), p < 0.001]; NSTEMI [OR 1.699 (95%CI: 1.221 - 2.366), p = 0.001]; and smoking [OR 1.723 (95%CI: 1.113 - 2.668), p = 0.014]. Predictive factors for in-hospital mortality were as follows: age [OR 1.090 (95%CI: 1.047 - 1.134), p < 0.001]; EF [OR 0.936 (95%CI: 0.909 - 0.964), p < 0.001]; and surgical treatment [OR 3.781 (95%CI: 1.374 - 10.409), p = 0.01]. 
Conclusion: Prolonged length of hospital stay and in-hospital mortality in ACS patients do not depend on admission time. abstract_id: PUBMED:30540646 Higher in-hospital mortality during weekend admission for acute coronary syndrome: a large-scale cross-sectional Italian study. Aims: An increased mortality risk during weekend hospital admission has been consistently observed. In the present study, we evaluated whether the current improvement in management of acute coronary syndromes (ACS) has reduced this phenomenon. Methods And Results: We extracted data from the Italian National Healthcare System Databank of 80 391 ACS admissions in the region of Lombardia between 2010 and 2014. ICD-9 codes were used to assess the diagnosis. We performed a multiple logistic regression analysis to compare the mortality rates between weekend and weekday admissions. Mean age of the study population was 67.6 years; 30.1% of patients were women. ST segment elevation myocardial infarction (STEMI) accounted for 42.2% of admissions. The total in-hospital mortality was 3.05% and was positively predicted by weekend admission [odds ratio (OR) 1.13, P = 0.006], age and female sex. The weekend effect on mortality was only significant for STEMI (OR 1.11, P = 0.04) in comparison to non-STEMI (NSTEMI) or unstable angina. The trend of the risk of death was found to be negatively correlated with age: the risk of death was significantly higher in all age clusters younger than 75 (OR 1.22, P < 0.01) and even greater in the very young subgroup under 45 years of age (OR 2.09, P = 0.03). Conclusion: Our data indicate that increased mortality risk is still present during weekend admissions. This phenomenon is particularly evident in younger patients and in individuals admitted for STEMI. abstract_id: PUBMED:36836002 The Profile and All-Cause In-Hospital Mortality Dynamics of ST-Segment Elevation Myocardial Infarction Patients during the Two Years of the COVID-19 Pandemic. During the coronavirus disease 2019 (COVID-19) pandemic, some studies showed differences in the profile of subjects presenting with acute coronary syndromes as well as in overall mortality due to the delay of presentation and other complications. The purpose of this study was to compare the profile and outcomes, with emphasis on all-cause in-hospital mortality, of ST-elevation myocardial infarction (STEMI) subjects presenting to the emergency department during the pandemic period compared with a control group from the previous year, 2019. The study enrolled 2011 STEMI cases, which were divided into two groups: pre-pandemic (2019-2020) and pandemic period (2020-2022). Hospital admissions for a STEMI diagnosis sharply decreased during the COVID-19 period by 30.26% during the first year and 25.4% in the second year. This trend was paralleled by a significant increase in all-cause in-hospital mortality: 11.5% in the pandemic period versus 8.1% in the previous year. There was a significant association between SARS-CoV-2 positivity and all-cause in-hospital mortality, but no correlation was found between COVID-19 diagnosis and the type of revascularization. However, the profile of subjects presenting with STEMI did not change over time during the pandemic; their demographic and comorbid characteristics remained similar. abstract_id: PUBMED:24978875 Out-of-hours exercise treadmill testing reduces length of hospital stay for chest pain admissions.
Aims: The objective was to examine the impact of out-of-hours exercise treadmill tests (ETTs) on length of hospital stay (LOS) for patients admitted to a chest pain assessment unit with symptoms suggestive of acute coronary syndrome. Methods: Prospective observational study with 30-day follow-up of low-to-intermediate-risk chest pain patients undergoing out-of-hours ETT. Eligible patients had a nonischemic ECG, normal 6-12-h ST-segment monitoring, a negative 12-h troponin T assay, and no contraindications to exercise. Observed LOS was compared to expected LOS in the absence of out-of-hours ETT, using the Wilcoxon rank-sum test. Estimated bed day savings and major adverse events at 30 days after discharge were examined. Results: Four hundred and twenty-two patients with a mean age of 52 years (SD 13 years, 25-83 years) were evaluated. Fifty-two per cent (n = 221) were men; 66% (n = 279) had one or fewer cardiovascular risk factors; and 79% (n = 334) of the patients presented on a Friday or Saturday. ETT was performed on a weekend day in 86% (n = 363) of the patients, facilitating same-day discharges in 71% (n = 300). The median LOS (interquartile range) was 1 day (1, 2 days) for patients assessed with out-of-hours ETT. The expected median LOS (IQR) was 3 days (2, 4 days) (P < 0.05) in the absence of out-of-hours ETT. Each out-of-hours ETT was estimated to save a mean (SD, range) of 1.6 (0.6, 1-4) bed days. Thirty-day mortality and readmission rates were 0 and 0.2% (1 of 422), respectively. Conclusion: The availability of out-of-hours ETT facilitates safe early discharge and reduced LOS for low-to-moderate-risk patients admitted with symptoms of acute coronary syndrome. abstract_id: PUBMED:26535040 Serial measurement of lipid profile and inflammatory markers in patients with acute myocardial infarction. Serum concentration of lipids and lipoproteins changes during the course of acute coronary syndrome as a consequence of the inflammatory response. The objective of this study was to evaluate the effect of acute myocardial infarction (AMI) on the levels of lipid profile and inflammatory markers. We investigated 400 patients with AMI who were admitted within 24 h of onset of symptoms. Serum levels of total cholesterol (TC), triglyceride (TG), low density lipoprotein (LDL) and high density lipoprotein (HDL) were determined by standard enzymatic methods along with high-sensitivity C-reactive protein (hs-CRP) (latex-enhanced immunoturbidimetric assay) and cytokines, interleukin (IL)-6 and IL-10 (quantitative "sandwich" enzyme-linked immunosorbent assay). The results indicate a trend of reduced TC, LDL, and HDL, and elevated TG levels, along with pro- and anti-inflammatory markers (p < 0.001), between the day 1 and day 2 serum samples of AMI patients. However, correction of the serum levels was observed at day 7. Our results demonstrate significant variations in the mean lipid levels and inflammatory markers between days 1, 2 and 7 after AMI. Therefore, it is recommended that serum lipids be assessed within 24 hours after infarction. Early treatment of hyperlipidemia provides potential benefits. Exact knowledge regarding baseline serum lipids and lipoprotein levels as well as their varying characteristics can provide a rational basis for clinical decisions about lipid-lowering therapy. abstract_id: PUBMED:27183926 Impact of delay in admission on the outcome of critically ill patients presenting to the emergency department of a tertiary care hospital from low income country.
Objective: To assess the impact of admission delay on the outcome of critically ill patients. Methods: The retrospective chart review was done at Aga Khan University Hospital, Karachi, and comprised adult patients visiting the Emergency Department during 2010. Outcome measures assessed were total hospital length of stay, total cost of the visit and in-hospital mortality. Patients admitted within 6 hours of presentation at the Emergency Department were defined as non-delayed. Data were analysed using SPSS 19. Results: Of the 49,532 patients reporting at the Emergency Department during the study period, 17,968 (36.3%) were admitted. Of them, 2,356 (13%) were admitted to special or intensive care units; 1,595 (67.7%) of this sub-group stayed in the Emergency Department for >6 hours before being shifted to intensive care. The study focussed on 325 (0.65%) of the total patients: 164 (50.5%) in the non-delayed group and 161 (49.5%) in the delayed group. The admitting diagnoses of myocardial infarction (p=0.00) and acute coronary syndrome (p=0.01) were significantly more common in the non-delayed group, whereas other diagnoses such as cerebrovascular attacks (p=0.03) were significantly more common in the delayed group. There was no significant difference in the hospital length of stay between the two groups (p>0.05). The Emergency Department cost was significantly increased in the delayed group (p<0.05), but there was no difference in the overall hospital cost between the groups (p>0.05). Conclusions: There was no significant difference between the delayed and non-delayed groups, but long Emergency Department stays are distressing for both physicians and patients. abstract_id: PUBMED:30646122 Performance of Electronic Prediction Rules for Prevalent Delirium at Hospital Admission. Importance: Delirium at admission is associated with increased hospital morbidity and mortality, but it may be missed in up to 70% of cases. Use of a predictive algorithm in an electronic medical record (EMR) system could provide critical information to target assessment of those with delirium at admission. Objectives: To develop and assess a prediction rule for delirium using 2 populations of veterans and compare this rule with previously confirmed rules. Design, Setting, And Participants: In a diagnostic study, randomly selected EMRs of hospitalized veterans from the Veterans Affairs (VA) External Peer Review Program at 118 VA medical centers with inpatient facilities were reviewed for delirium risk factors associated with the National Institute for Health and Clinical Excellence (NICE) delirium rule in a derivation cohort (October 1, 2012, to September 30, 2013) and a confirmation cohort (October 1, 2013, to March 31, 2014). Delirium within 24 hours of admission was identified using key word terms. A total of 39 377 veterans 65 years or older who were admitted to a VA medical center for congestive heart failure, acute coronary syndrome, community-acquired pneumonia, and chronic obstructive pulmonary disease were included in the study. Exposure: The EMR calculated delirium risk. Main Outcomes And Measures: Delirium at admission as identified by trained nurse reviewers was the main outcome measure. Random forest methods were used to identify accurate risk factors for prevalent delirium. A prediction rule for prevalent delirium was developed, and its diagnostic accuracy was tested in the confirmation cohort. This consolidated NICE rule was compared with previously confirmed scoring algorithms (electronic NICE and Pendlebury NICE).
Results: A total of 27 625 patients were included in the derivation cohort (28 118 [92.2%] male; mean [SD] age, 75.95 [8.61] years) and 11 752 in the confirmation cohort (11 536 [98.2%] male; mean [SD] age, 75.43 [8.55] years). Delirium at admission was identified in 2343 patients (8.5%) in the derivation cohort and 882 patients (7.0%) in the confirmation cohort. Modeling techniques identified cognitive impairment, infection, sodium level, and age of 80 years or older as the dominant risk factors. The consolidated NICE rule (area under the receiver operating characteristic [AUROC] curve, 0.91; 95% CI, 0.91-0.92; P < .001) had significantly higher discriminatory function than the eNICE rule (AUROC curve, 0.81; 95% CI, 0.80-0.82; P < .001) or Pendlebury NICE rule (AUROC curve, 0.87; 95% CI, 0.86-0.88; P < .001). These findings were confirmed in the confirmation cohort. Conclusions And Relevance: This analysis identified preexisting cognitive impairment, infection, sodium level, and age of 80 years or older as delirium screening targets. Use of this algorithm in an EMR system could direct clinical assessment efforts to patients with delirium at admission. abstract_id: PUBMED:15104981 Differences in the management and prognosis of patients with non-ST segment elevation acute coronary syndrome according to the department of initial admission Objectives: To assess the influence of the department of initial admission on the hospital management and 3-month prognosis of patients with non-ST elevation acute coronary syndromes. Patients And Method: The data for the 4115 patients admitted to 18 hospitals in the PEPA study were compared according to the department of initial admission. Results: Twenty-six percent of the patients were admitted to the coronary care unit, 53% to the cardiology department, 9% to the internal medicine department, and 12% were discharged from the emergency ward. The baseline risk profile was high in patients admitted to the coronary care unit and decreased progressively in patients admitted to the cardiology, internal medicine and emergency departments (P<.00001). The intensity of medical management was progressively lower in these departments, but not in parallel to their different baseline lower risk profile. Beta blockers were administered to 50%, 45%, 27% and 21% of the patients, respectively; an exercise test was performed in 34%, 44%, 35% and 12%; coronary angiography in 46%, 34%, 19% and 0%; and coronary revascularization in 22%, 12%, 9% and 0% (P<.00001). The 3-month incidence of mortality or myocardial infarction was 12.2%, 6.4%, 8.7% and 3.8%, respectively (P<.00001), differences that became nonsignificant after adjustment for risk profile on admission. Conclusions: Patients with non-ST elevation acute coronary syndrome admitted to the coronary care unit or cardiology department have a profile of higher risk on admission than patients admitted to the internal medicine department. Also, these patients more frequently receive pharmacological treatments and diagnostic and therapeutic procedures of proven efficacy but not in a manner that parallels their different risk profile on admission. However, these differences in the intensity of in-hospital management do not seem to lead to differences in the 3-month prognosis. abstract_id: PUBMED:24526561 Does admission anaemia still predict mortality six years after myocardial infarction? Background: Anaemia is present in 12-30% of patients with acute coronary syndromes (ACS). 
Many studies have shown that admission anaemia is an independent predictor of in-hospital or short-term mortality in patients with ACS. However, there is limited data on the long-term prognostic importance of anaemia in this group of patients. Aim: To establish the relation between haemoglobin concentration on admission and six-year all-cause mortality in patients with ST-segment elevation myocardial infarction (STEMI) treated invasively. Methods: We retrospectively studied 551 patients with the diagnosis of STEMI referred to the catheterisation laboratory of our hospital and treated with successful primary percutaneous coronary intervention. Patients were divided into two groups according to admission haemoglobin concentration (< 13 g/dL in males and < 12 g/dL in females). Results: A total of 551 patients with STEMI (164 female, 30%) were included in the analysis; mean age was 63 ± 12 years. Anaemia on admission was present in 11% (n = 61) of the patients. Of the entire cohort, renal failure was present in 25% (n = 138), and diabetes in 16% (n = 88). Admission haemoglobin concentration was significantly associated with age (r = -0.2663, p < 0.05), blood pressure (systolic blood pressure [SBP]: r = 0.1940, diastolic blood pressure [DBP]: r = 0.2023, p < 0.05), glucose concentration (r = -0.1218, p < 0.05), white blood cell count (r = 0.1230, p < 0.05), cholesterol concentration (r = 0.1253, p < 0.05), estimated glomerular filtration rate (eGFR; r = 0.1819, p < 0.05), Killip-Kimball class (r = -0.1387, p < 0.05) and TIMI risk score for STEMI (r = -0.2647, p < 0.05). During follow-up, 27% (n = 130) of the patients died. The mortality rate was significantly higher in the patients with admission anaemia (47% vs. 24%, p = 0.0002). The patients with anaemia were older (p = 0.0007), had lower blood pressure (SBP: p = 0.007; DBP: p = 0.01), higher heart rate (p = 0.03), higher glycaemia concentration (p = 0.003), higher C-reactive protein concentration (p = 0.0007) and lower white blood cell count (p = 0.03). Patients with anaemia more frequently had renal failure (eGFR < 60 mL/min/1.73 m²) (p = 0.02) and a significantly higher TIMI risk score for STEMI (p = 0.01). In multivariate analysis, all-cause mortality was associated with: anaemia on admission (OR = 2.29; 95% CI 1.20-4.36; p = 0.011), low ejection fraction (OR = 2.97; 95% CI 1.78-4.96; p < 0.001) and age (OR = 1.65 [per 10 years]; 95% CI 1.34-2.03; p < 0.001). Anaemia on admission remained an independent predictor of six-year mortality. Conclusions: Admission anaemia significantly influences all-cause mortality in patients with STEMI treated invasively in a six-year follow-up and may be used for risk stratification in this population. Answer: Yes, the assessment of the lipid profile performed up to seventy-two hours from hospital admission due to acute coronary syndrome (ACS) is still valid. A study that included 84 patients hospitalized due to ACS, of whom 90% underwent primary percutaneous coronary intervention (PCI), found no significant differences between lipid parameters measured during the first and second measurement within all three investigated groups. The first measurement was performed within 24 hours from hospital admission, and the second measurement was obtained within 24 hours from the first sample, between 24 and 48 hours, or between 48 and 72 hours from the first measurement.
The results indicated that the lipid profile assessed within 72 hours of hospital admission due to ACS is as valid as the measurement performed within the first 24 hours (PUBMED:15531936).
Instruction: Oral absorption of PEG-coated versus uncoated gold nanospheres: does agglomeration matter? Abstracts: abstract_id: PUBMED:25884802 Oral absorption of PEG-coated versus uncoated gold nanospheres: does agglomeration matter? Background: Particle size is thought to be a critical factor affecting the bioavailability of nanoparticles following oral exposure. Nearly all studies of nanoparticle bioavailability focus on characterization of the primary particle size of the material as supplied or as dosed, and not on agglomeration behavior within the gastrointestinal tract, which is presumably most relevant for absorption. Methods: In the study reported here, snapshots of agglomeration behavior of gold nanospheres were evaluated in vivo throughout the gastrointestinal tract using transmission electron microscopy. Agglomeration state within the gastrointestinal tract was then used to help explain differences in gastrointestinal particle absorption, as indicated by tissue levels of gold detected using inductively coupled plasma mass spectrometry. Mice were dosed (10 mg/kg) with either 23 nm PEG-coated or uncoated gold nanospheres. Results: Transmission electron microscopy demonstrates that PEG-coated gold nanoparticles can be observed as primary, un-agglomerated particles throughout the gastrointestinal tract and feces of dosed animals. In contrast, uncoated gold nanoparticles were observed to form agglomerates of several hundred nanometers in all tissues and feces. Inductively coupled plasma mass spectrometry shows significantly higher levels of gold in tissues from animals dosed with PEG-coated versus uncoated 23 nm gold nanoparticles. Retention of particles after a single oral gavage was also very high, with all tissues of animals dosed with PEG-coated particles having detectable levels of gold at 30 days following exposure. Conclusions: Qualitative observation of these particles in vivo shows that dispersed PEG-coated particles are able to reach the absorptive tissues of the intestine while agglomerated uncoated particles are sequestered in the lumen of these tissues. However, the large differences observed for in vivo agglomeration behavior were not reflected in oral absorption, as indicated by gold tissue levels. Additional factors, such as surface chemistry, may have played a more important role than in vivo particle size and should be investigated further. abstract_id: PUBMED:30697047 A comparison of poly-ethylene-glycol-coated and uncoated gold nanoparticle-mediated hepatotoxicity and oxidative stress in Sprague Dawley rats. Background: Gold nanoparticles (GNPs) and their functional derivatives are of great interest because of their many biomedical applications. GNPs are increasingly being incorporated into new diagnostic and therapeutic approaches in medicine. Consequently, there has been a strong push to fully understand their interactions with blood components. The agglomeration of cells reflects the interaction of nanoparticles with blood components. Methods: The main aim of this study was to compare the effects of poly-ethylene-glycol (PEG)-oated and uncoated GNPs on the generation of reactive oxygen species (ROS); on the actions of distinct hepatotoxicity biomarkers such as alanine (ALT) and aspartate (AST) aminotransferases, and alkaline phosphatase (ALP); and on the histology of liver tissues in the rat model. Four distinct doses of PEG-coated and uncoated GNPs (12.5, 25, 50, and 100 µg/kg body weight) were used. 
Each group consisted of three rats receiving an oral administration of PEG-coated and uncoated GNPs for 5 days with one dose per 24 hours. The control group consisted of three rats that received deionized water. Twenty-four hours after the last treatment, samples were collected following standard procedures. Results: PEG-coated and uncoated GNPs enhanced the generation of ROS and the activity of serum aminotransferases (ALT/AST) and ALPs relative to the negative control. A liver histology assessment of GNP-exposed rats revealed statistically significant responses in the variation of the morphologies of tissues relative to those of the negative control. Nonetheless, uncoated GNPs demonstrated enhanced hepatotoxic outcomes relative to those of PEG-coated GNPs. The results demonstrated that both GNPs may be able to promote hepatotoxicity in Sprague Dawley rats through mechanisms of oxidative stress. However, uncoated GNPs have more harmful effects than PEG-coated GNPs relative to the negative control. Conclusion: Taken together, the results of this study indicate that PEG-coated GNPs may be safer to use in nanomedicinal applications than uncoated GNPs. However, more studies must be performed to confirm the outcomes of PEGylation. abstract_id: PUBMED:28222611 Influence of PEG coating on the oral bioavailability of gold nanoparticles in rats. Metallic nanoparticles can be produced in a variety of shapes, sizes, and surface chemistries, making them promising potential tools for drug delivery. Most studies to date have evaluated uptake of metallic nanoparticles from the GI tract with methods that are at best semi-quantitative. This study used the classical method of comparing blood concentration area under the curve (AUC) following intravenous and oral doses to determine the oral bioavailability of 1, 2 and 5 kDa PEG-coated 5 nm gold nanoparticles (AuNPs). Male rats were given a single intravenous dose (0.8 mg/kg) or oral (gavage) dose (8 mg/kg) of a PEG-coated AuNP, and the concentration of gold was measured in blood over time and in tissues (liver, spleen and kidney) at sacrifice. Blood concentrations following oral administration were inversely related to PEG size, and the AUC in blood was significantly greater for the 1 kDa PEG-coated AuNPs than particles coated with 2 or 5 kDa PEG. However, bioavailabilities of all of the particles were very low (< 0.1%). Concentrations in liver, spleen and kidney were similar after the intravenous doses, but kidney showed the highest concentrations after an oral dose. In addition to providing information on the bioavailability of AuNPs coated with PEG in the 1-5 kDa range, this study demonstrates the utility of applying the blood AUC approach to assess the quantitative oral bioavailability of metallic nanoparticles. abstract_id: PUBMED:27389183 Enhancement of light absorption by blood to Nd:YAG laser using PEG-modified gold nanorods. Background And Objective: On the basis of the principle of selective photothermolysis, laser therapy has been the most effective treatment strategy for Port-wine stains (PWSs) caused by the expansion of dermal capillaries. Neodymium:Yttrium Aluminum Garnet (Nd:YAG) laser at 1064 nm wavelength has great potential for deeply buried PWS, although its application is limited because of its weak absorption by blood. The purpose of this study is to investigate the effect of PEG-modified gold nanorods (NRs) on the blood absorption enhancement for Nd:YAG laser.
Materials And Methods: PEG-modified gold nanorods (NRs) were synthesized via the seeded growth method. Then, the effect of PEG-modified gold NRs on blood light absorbance was investigated through adding different concentrations of PEG-modified gold NRs to 1 ml of blood at room temperature. Finally, the optical properties of whole mice blood with or without PEG-modified gold NRs under slow heating were investigated. Results: The average length and width of PEG-modified gold NRs are 79.5 ± 10.5 and 13.5 ± 0.9 nm, respectively, with an aspect ratio of 5.89, and a strong absorption peak exists at ∼1050 nm in the near-infrared range. A linear correlation between the blood absorbance at 1064 nm and the amount of PEG-modified gold NRs was obtained. The absorbance at 1064 nm increased 17.6, 33.0, 48.3, and 65.4 times when 0.4, 0.8, 1.2, and 1.6 mg of PEG-modified gold NRs was added to 1 ml of blood at room temperature, respectively. After adding 0.8 mg of PEG-modified gold NRs to 1 ml of blood, blood absorbance at 1064 nm at different temperatures increased by an average of 24.0 times. After intravenously injecting PEG-modified gold NRs (0.87 mg/ml) into Sprague-Dawley mice, the blood absorbance at 1064 nm increased from 0.014 to 0.5. Conclusion: Our findings suggest that PEG-modified gold NRs injection is an efficient way to enhance light absorption by blood to Nd:YAG laser. abstract_id: PUBMED:25089510 Preparation and characterization of PEG-coated silica nanoparticles for oral insulin delivery. The present study reports the production and characterization of PEG-coated silica nanoparticles (SiNP-PEG) containing insulin for oral administration. High (PEG 20,000) and low (PEG 6000) PEG molecular weights were used in the preparations. SiNP were produced by sol-gel technology followed by PEG adsorption and characterized for in vitro release by Franz diffusion cells. In vitro permeation profile was assessed using everted rat intestine. HPLC method has been validated for the determination of insulin released and permeated. Insulin secondary structure was assessed by circular dichroism (CD). Uncoated SiNP allowed slower insulin release in comparison to SiNP-PEG. The coating with high molecular weight PEG did not significantly (p > 0.05) alter insulin release. The slow insulin release is attributed to the affinity of insulin for silanol groups at silica surface. Drug release followed second order kinetics for uncoated and SiNP-PEG at pH 2.0. On the other hand, at pH 6.8, the best fitting was first-order for SiNP-PEG, except for SiNP which showed a Boltzmann behavior. Comparing the values of half-lives, SiNP-PEG 20,000 showed a faster diffusion followed by Si-PEG 6000 and SiNP. CD studies showed no conformational changes occurring after protein release from the nanoparticles under gastrointestinal simulated conditions. abstract_id: PUBMED:35929297 Triptorelin-functionalized PEG-coated biosynthesized gold nanoparticles: Effects of receptor-ligand interactions on adhesion to triple negative breast cancer cells. This paper presents the results of an experimental and computational study of the adhesion of triptorelin-conjugated PEG-coated biosynthesized gold nanoparticles (GNP-PEG-TRP) to triple-negative breast cancer (TNBC) cells. The adhesion is studied at the nanoscale using a combination of atomic force microscopy (AFM) experiments and molecular dynamics (MD) simulations.
The AFM measurements showed that the triptorelin-functionalized gold nanoparticles (GNP-TRP and GNP-PEG-TRP) have higher adhesion to triple-negative breast cancer cells (TNBC) than non-tumorigenic breast cells. The increased adhesion of GNP-TRP and GNP-PEG-TRP to TNBC is also attributed to the overexpression of LHRH receptors on the surfaces of both TNBC. Finally, the molecular dynamics model reveals insights into the effects of receptor density, molecular configuration, and receptor-ligand docking characteristics on the interactions of triptorelin-functionalized PEG-coated gold nanoparticles with TNBC. A three to nine-fold increase in the adhesion is predicted between triptorelin-functionalized PEG-coated gold nanoparticles and TNBC cells. The implications of the results are then discussed for the specific targeting of TNBC. abstract_id: PUBMED:33201 Relative systemic availability of sulfapyridine from commercial enteric-coated and uncoated sulfasalazine tablets. The absorption of sulfapyridine after a single 2.0-Gm oral dose of sulfasalazine, the drug of choice in the treatment of inflammatory bowel disease, as commercial enteric-coated and uncoated tablets was evaluated in four healthy male adults. The peak plasma concentration of sulfapyridine after the enteric-coated tablets occurred at 20 hours on the average (compared to 14 hours for the uncoated tablets) and was only 50% of that attained from the uncoated tablets (P less than 0.05). The low relative extent of systemic availability of sulfapyridine from the enteric-coated tablets (65.5 +/- 6.3 per cent, mean +/- S.E.) compared to uncoated tablets may be due to absorption rate-dependent presystemic metabolism, since the relative extent of sulfapyridine absorption was 92.7 +/- 6.2 per cent compared to uncoated tablets. These findings suggest that enteric-coated and uncoated tablets of sulfasalazine are not bioequivalent. It remains to be determined whether the clinical efficacy of sulfasalazine from enteric-coated tablets is affected. abstract_id: PUBMED:29098838 Oral absorption of asiatic acid nanoparticles modified with PEG. A solvent diffusion method was used to prepare pegylated asiatic acid (AA) loaded nanostructured lipid carriers (p-AA-NLC), and the ligated intestinal circulation model was established to observe the absorption and distribution in the small intestine. The concentration of AA in bile after oral administration of p-AA-NLC was detected by HPLC in healthy SD rats to indirectly evaluate the oral absorption promoting effect of PEG-modified nanoparticles. The results showed that the penetration of p-AA-NLC was enhanced significantly and the transport capacity was increased greatly in the small intestine after PEG modification. As compared with the normal nanoparticles (AA-NLC), the Cmax of the drug excretion was increased by 76%, the time to reach the peak (tmax) was decreased and the elimination half-life t1/2 was doubled in the rats after oral administration of p-AA-NLC, and the AUC0→t was 1.5 times that of the AA-NLC group, indicating that the oral bioavailability of AA-NLC was significantly improved by hydrophilic modification of PEG. abstract_id: PUBMED:30766426 Influence of chitosan coating on the oral bioavailability of gold nanoparticles in rats. Gold nanoparticles are one of the most extensively investigated metallic nanoparticles for several applications. They are less toxic than other metallic nanolattices.
The exceptional electrical and thermal conductivity of gold makes it possible for gold to be used in non-invasive radiofrequency irradiation therapy that produces sufficient heat to kill tumor cells. Nanoparticles are generally administered intravenously instead of orally due to negligible oral absorption and cellular uptake. This study evaluated the oral bioavailability of gold nanoparticles coated with chitosan (C-AuNPs), a natural mucoadhesive polymer. We employed the traditional method of evaluating bioavailability that involves estimation of maximum concentrations and area under the curve of 3 nm chitosan coated gold nanoparticles (C-AuNPs) in the rat plasma following intravenous and oral administrations (0.8 and 8 mg/kg body weight, respectively). The oral bioavailability of C-AuNPs was found to be 2.46% (approximately 25-fold higher than polyethylene glycol (PEG)-coated gold nanoparticles, reported earlier). These findings suggest that chitosan coating could be better than PEG coating for the enhancement of oral bioavailability of nanoparticles. abstract_id: PUBMED:33524523 Preparation and evaluation of PEG-coated zein nanoparticles for oral drug delivery purposes. The aim was to produce PEG-coated nanoparticles (NP-PEG), with mucus-permeating properties, for oral drug delivery purposes by using simple procedures and regulatory-approved compounds in order to facilitate a potential clinical development. For this purpose, zein nanoparticles were prepared by desolvation and, then, coated by incubation with PEG 35,000. The resulting nanocarriers displayed a mean size of about 200 nm and a negative zeta potential. The presence of PEG on the surface of nanoparticles was evidenced by electron microscopy and confirmed by FTIR analysis. Likely, the hydrophobic surface of zein nanoparticles (NP) was significantly reduced by their coating with PEG. This increase of the hydrophilicity of PEG-coated nanoparticles was associated with an important increase of their mobility in pig intestinal mucus. In laboratory animals, NP-PEG (fluorescently labelled with Lumogen® Red 305) displayed a different behavior when compared with bare nanoparticles. After oral administration, NP appeared to be trapped in the mucus mesh, whereas NP-PEG were capable of crossing the protective mucus layer and reaching the epithelium. Finally, PEG-coated zein nanoparticles, prepared by a simple and reproducible method without employing reactive reagents, may be adequate carriers for promoting the oral bioavailability of biomacromolecules and other biologically active compounds with low permeability properties. Answer: The oral absorption of PEG-coated versus uncoated gold nanospheres has been studied to determine the impact of agglomeration on bioavailability. In one study, PEG-coated gold nanoparticles were found to remain un-agglomerated throughout the gastrointestinal tract, while uncoated gold nanoparticles formed agglomerates of several hundred nanometers. Despite this difference in agglomeration behavior, the study concluded that the large differences observed for in vivo agglomeration behavior were not reflected in oral absorption, as indicated by gold tissue levels. It suggested that surface chemistry might play a more important role than in vivo particle size in determining oral absorption (PUBMED:25884802). Another study compared the hepatotoxicity and oxidative stress induced by PEG-coated and uncoated gold nanoparticles in rats.
Both types of nanoparticles enhanced the generation of reactive oxygen species and the activity of serum aminotransferases, indicating potential hepatotoxicity. However, uncoated gold nanoparticles demonstrated more harmful effects than PEG-coated nanoparticles, suggesting that PEG-coated gold nanoparticles may be safer for use in nanomedicinal applications (PUBMED:30697047). Furthermore, a study assessing the oral bioavailability of PEG-coated gold nanoparticles in rats found that bioavailability was very low (< 0.1%) for all particles, regardless of the PEG size used for coating. This study also demonstrated the utility of applying the blood AUC approach to assess the quantitative oral bioavailability of metallic nanoparticles (PUBMED:28222611). In conclusion, while agglomeration may be a factor in the behavior of nanoparticles within the gastrointestinal tract, it does not appear to be the sole determinant of oral absorption. Surface chemistry, such as PEG-coating, seems to play a more significant role in influencing the bioavailability and safety profile of gold nanoparticles following oral exposure.
Instruction: Impact of chronic dialysis on serum PSA, free PSA, and free/total PSA ratio: is prostate cancer detection compromised in patients receiving long-term dialysis? Abstracts: abstract_id: PUBMED:36757372 Immuno-analytical characteristics of PSA and derived biomarkers (total PSA, free PSA, p2PSA). Prostate-specific antigen (PSA) is the recommended tumor marker for individual screening and follow-up of prostate cancer. This paper reviews the main structural and physiological data about prostate specific antigen isoforms: total PSA, free PSA, [-2]proPSA (also named p2PSA). It describes the pre-, per- and post-analytical conditions for these different parameters. It presents the interpretation of results and derived calculated indices (free/total PSA ratio, Prostate Health Index or PHI) for the management of prostate cancer (initial diagnosis and follow-up). abstract_id: PUBMED:10367847 Impact of chronic dialysis on serum PSA, free PSA, and free/total PSA ratio: is prostate cancer detection compromised in patients receiving long-term dialysis? Objectives: The increased incidence of malignancy (ie, prostate cancer) in patients with end-stage renal failure is well known. However, little is known of the impact of hemodialysis and various membrane types on total and free prostate-specific antigen (PSA). We prospectively studied the impact of high- and low-flux dialysis membranes and kidney function on total PSA (tPSA), free PSA (fPSA), and free/total PSA ratio (f/t PSA). Methods: A total of 149 men were included. tPSA, fPSA, and f/t PSA were measured before and immediately after dialysis with high-flux (n = 101) and low-flux (n = 48) membranes in the serum and in the dialysis ultrafiltrate. A multivariate analysis of the impact of kidney function and age on the rate of change of all parameters was performed. Results: Overall, a significant decrease of fPSA (from 0.49 +/- 0.3 to 0.35 +/- 0.3 ng/mL, P < 0.0001) and f/t PSA (from 45 +/- 19% to 38 +/- 13%, P < 0.0001) and a nonsignificant decrease in serum tPSA were observed. However, fPSA (from 0.51 +/- 0.5 to 0.27 +/- 0.3 ng/mL, P < 0.0001) and f/t PSA (from 47 +/- 19% to 31 +/- 18%, P < 0.0001) decreased significantly in high-flux membranes only. The ultrafiltrate contained 100% fPSA in high-flux membranes and no fPSA in low-flux membranes. Age, serum creatinine, blood urea nitrogen, and dialysis evaluation parameters (Kt/V) had no impact on or correlation with changes in tPSA and fPSA. Conclusions: tPSA molecules do not pass high- and low-flux membranes; fPSA passes high-flux membranes only. The nonsignificant decrease of tPSA is due to adsorption to both dialysis membranes. Although tPSA can safely be used to screen patients on dialysis, independently from the dialysis procedure and membrane, fPSA and f/t PSA are only reliable with low-flux membranes. Finally, we can state that the fPSA is most probably cleared through the kidneys by glomerular filtration. abstract_id: PUBMED:30543191 Relevance of total PSA and free PSA prescriptions. The total PSA testing has been frequently criticized for its effectiveness in the early detection of prostate cancer. These criticisms, widely echoed in and out of the medical community, have led to a decrease in prescriptions. Recommendations against the use of PSA for screening have been issued in France and in most countries. In the US, the significant decline in total PSA use since 2004 has led to an increase in advanced cancer diagnoses, which has led the authorities to back down.
The US preventive services task force (USPSTF) now recommends that men between the ages of 55 and 69 make an individual decision about prostate cancer screening with their clinician. The purpose of this work is to summarize good practices for the use of total and free PSA in the light of these recent data. abstract_id: PUBMED:35777806 Technical Note: Newly Reformulated Total and Free PSA Immunoassay on Cobas e411 Analyzer Is Virtually Free from Biotin Interference. Objective: Total and free prostate specific antigens (PSA) have been used as diagnostic markers for monitoring progress of therapy in patients with prostate cancer as well as for screening purposes. The Roche total and free PSA immunoassay utilizes a biotinylated antibody in its assay design. As a result, both assays are affected by elevated serum biotin levels. Recently, Roche reformulated these assays to reduce biotin interference. We evaluated biotin interference in these products. Materials And Methods: We prepared three serum pools, with one pool containing a high amount of total PSA. Then aliquots of each serum pool were further supplemented with various concentrations of biotin (100-1500 ng/mL), followed by measuring both total and free PSA using the Roche total and free PSA immunoassay and the Cobas e411 analyzer. Results: We observed no significant interference of biotin in both total and free PSA assays up to a biotin concentration of 1200 ng/mL. Conclusion: We concluded that the newly reformulated total and free PSA immunoassays are virtually free from biotin interference. abstract_id: PUBMED:26558056 The value of percentage free prostate specific antigen (PSA) in the detection of prostate cancer among patients with intermediate levels of total PSA (4.0-10.0 ng/mL) in Nigeria. Objectives: To assess the value of percentage free prostate-specific antigen (%fPSA) in the detection of prostate cancer among Nigerian men with an intermediate total PSA level (4-10 ng/mL), and to show if the optimum threshold for biopsy is different from Caucasian values when the widely used (six-core, digitally directed) prostate biopsy protocol in Nigerians is applied. Patients And Methods: The study included 105 patients aged >50 years, with a palpably benign prostate gland and intermediate levels of total PSA (4-10 ng/mL). These patients had a free PSA assay and a transrectal digitally directed six-core biopsy of the prostate. The %fPSA was calculated and the optimum threshold value for detecting prostate cancer was determined. Results: The mean (SD) age of the patients was 64.4 (6.6) years and their mean (SD) total PSA level was 6.6 (1.7) ng/mL. Of these men 14 (13.3%) had cancer of the prostate detected by the prostate biopsy. The %fPSA level related directly to sensitivity values but inversely to the specificity and the positive predictive values. The best threshold of %fPSA for detecting cancer in these men was <40%, with a sensitivity of 100%, specificity of 93.4% and a positive predictive value of 70% (P < 0.05). Conclusions: In evaluating Nigerian patients with a palpably benign prostate gland and within the intermediate total PSA range, when digitally directed biopsy protocol is adopted, a %fPSA threshold of <40% will detect a significant percentage of those with prostate cancer, with a minimal number of unnecessary biopsies. This value differs from that reported in western studies in which transrectal ultrasonography-directed biopsy was used.
abstract_id: PUBMED:33114134 The Association between Prostate-Specific Antigen Velocity (PSAV), Value and Acceleration, and of the Free PSA/Total PSA Index or Ratio, with Prostate Conditions. Introduction: Prostate-specific antigen velocity (PSAV) is used to monitor men with clinical suspicion of prostate cancer (PCa), with a normal cut-off point of 0.3-0.5 ng/mL/year. The aim of the study is to establish the predictive capacity of PSAV (value and acceleration) and of the free PSA/total PSA index or ratio. Method: Prospective multicentre observational study in 2035 men of over 47 years of age. Inclusion Criteria: men who wished to be informed on the health of their prostate. Exclusion Criteria: men with a previously diagnosed prostate condition. Groups: GA: (n = 518): men with serum PSA equal to or greater than 2.01 ng/mL. GB: (n = 775): men with serum PSA greater than or equal to 0.78 ng/mL and less than 2.01 ng/mL. GC: (n = 742): men with serum PSA less than 0.78 ng/mL. Variables: prostate-specific antigen (PSA); age; body mass index (BMI); PSA velocity (PSAV) (ng/mL per year); free PSA/total PSA index (iPSA); PSAV acceleration (increasing: positive, or decreasing: negative); prostate diagnosis (benign prostatic hyperplasia (BPH), prostatic intraepithelial neoplasia (PIN), or infectious and non-infectious prostatitis and prostatic adenocarcinoma (PCa)); de novo diagnoses of urinary tract diseases or conditions; concomitant treatments, diseases and conditions; final diagnosis of prostate health. Results: Mean age 62.35 years (SD 8.12), median 61 (47-94); age was lowest in GC. Mean BMI was 27.89 kg/m2 (SD 3.96), median 27.58 (18.56-57.13); no differences between groups. Mean PSAV was 0.69, SD 2.16, median 0.13 (0.001-34.46); PSAV was lowest in GC. Mean iPSA was 27.39 u/L (SD 14.25), median 24.29 (3.7-115); iPSA was lowest in GA. PSAV had more positive acceleration in GA and more negative acceleration in GC. There were 1600 (78.62%) cases of normal prostate or BPH, 322 (15.82%) cases of PIN or non-infectious prostatitis, and 113 (5.55%) cases of PCa. There were more cases of BPH in GC and more cases of PIN or prostatitis and cancer in GA (p = 0.00001). De novo diagnoses: 15 cases of urinary incontinence (UI), 16 discomfort/pain in LUT, 112 cases of voiding disorders, 12 urethral strictures, 19 hematuria, 51 cystitis, 3 pyelonephritis, 4 pelvic inflammatory disease; no differences were found between groups. In the multivariate analysis, PSAV and the direction of PSAV acceleration (positive or negative) were the variables which were correlated most strongly with prostate health. iPSA was associated with the presence of prostatitis, PCa, and BPH. Men in GA had more prostatitis, PCa, treatment with alpha blockers, and history of previous smoking. GB had more cases of BPH and more positive acceleration of PSAV. GC had more normal prostates, more BPH, more use of ranitidine, and more PSAV with negative acceleration. Conclusions: PSAV, direction of PSAV acceleration, and iPSA in PSA cut-off points of 0.78 ng/mL and 2.01 ng/mL in a priori healthy men over 47 predict the probability of benign or malignant pathology of the prostate. abstract_id: PUBMED:26157519 Prostate-Specific Antigen (PSA) and Prostate Volume: Better Predictor of Prostate Cancer for Bosnian and Herzegovina Men. Background: The serum prostate specific antigen for the early detection and screening of prostate cancer is very commonly used among physicians as the best screening tool for prostate cancer.
The result of prostate specific antigen testing determines whether or not a prostate biopsy should be performed. The lack of specificity is a limitation of PSA as a tumor marker: increased PSA concentrations are found not only in patients with prostate cancer but also in patients with benign prostatic disease. The object of this study was to improve the specificity and sensitivity of prostatic cancer detection. We evaluated total PSA levels, free PSA levels and the prostate volume in asymptomatic patients who came for a routine check without a medical history of prostate cancer. Methods: We received the medical records of 90 patients aged 50-60 years. Total and free PSA in serum were measured with the Architect i2000 SR analyzer. Prostate volume was determined by transrectal ultrasound. Results: The ratio of total and free PSA levels to prostate volume was significantly different (p < 0.001) between all three groups. It was observed that increased prostate volume correlates with increased levels of total and free PSA in serum. Conclusion: Early studies have demonstrated the advantage of measuring prostate volume with total and free PSA levels in serum as a useful tool for early diagnosis of prostate cancer. Data from this study on 90 patients with total PSA in the range of 0.22-7.0 ng/ml confirmed the well-known correlation. All three parameters (total PSA, free PSA and prostate volume) showed significant correlation and are a useful tool in the prediction of prostate cancer for Bosnia and Herzegovina men. abstract_id: PUBMED:23105449 Stability of total and free prostate specific antigen in serum samples at different storage conditions. The present preliminary study was performed to find out the stability of total prostate specific antigen (PSA) and free prostate specific antigen (FPSA) in the serum of healthy males as well as in patients with benign and malignant disorders of the prostate at various freezing and nonfreezing temperatures and for different durations of time. The results of our study indicated long-term stability of both the analytes in frozen serum. Serum total and free PSA were stable only for three to four days in regular refrigerators in unfrozen states. Clotted blood kept at room temperature (25°C-30°C) did not cause a change in the concentrations of both analytes for twenty-four hours. abstract_id: PUBMED:26443748 Stability and accuracy of total and free PSA values in samples stored at room temperature. Introduction: In 2010, an estimated 476,076 total PSA tests were performed in Ireland, at a cost of €3.6 million with the majority ordered by general practitioners. We aimed to replicate storage conditions at room temperature and see if prolonged storage affected total and free PSA values. Methods: Blood samples were taken from 20 male patients in four VACUETTE® Serum Separator tubes (Greiner-Bio-One, Austria) and stored at room temperature (22 °C) for different time intervals (4, 8, 24, 48 h) before being centrifuged and analyzed. Total PSA (tPSA) and free PSA (fPSA) values were determined using the Tosoh AIA 1800 assay (Tokyo, Japan). Results: Mean tPSA values were measured at 4, 8, 24 and 48 h with values of 7.9, 8.1, 7.8 and 8.0 μg/L, respectively. Values ranged from -1.26 to +2.53% compared to the initial 4 h interval reading, indicating tPSA remained consistent at room temperature. The tPSA showed no significance between groups (ANOVA, p = 0.283). Mean fPSA values at 4, 8, 24 and 48 h were 2.05, 2.04, 1.83, 1.82 μg/L, respectively.
At 24 and 48 h there were 10.73% and 11.22% reductions, respectively, in fPSA compared to the 4-h time interval, indicating that prolonged storage resulted in reduced fPSA values. After 24 h, there was an 8.8% reduction in the free/total PSA %. The fPSA showed significant differences between groups (ANOVA, p = 0.024). Conclusions: Our recommendation is that samples that have been stored for prolonged amounts of time (greater than 24 h) should not be used for free PSA testing. abstract_id: PUBMED:34798395 Measurement of total and free prostate specific antigen (PSA) in human serum samples using an ultra-microanalytical system. Prostate specific antigen (PSA) is a serine protease used for the screening of prostate cancer. The total portion of PSA (tPSA) can be found in its free form (fPSA), or bound to other proteins forming a stable complex. A heterogeneous sandwich-type UltraMicro Enzyme-Linked ImmunoSorbent Assay (UMELISA) has been developed for the measurement of tPSA and fPSA in human serum samples. Strips coated with a high affinity monoclonal antibody (MAb) directed against PSA are used as solid phase, to ensure the specificity of the assay. Biotinylated MAbs specific for tPSA and fPSA ensured sensitivity, given the high affinity binding to streptavidin. The assay was completed in 1.5 h, with a measuring range of 0.019-20 µg/L (tPSA) and 0.009-20 µg/L (fPSA). The intra- and inter-assay CVs were lower than 9%. Recovery percentages were 96-105%. High correlations were found between the values of the UMELISA PSA standards and the International Reference Standards 96/670 (R2 = 0.9996) and 96/688 (R2 = 0.9989). The assay did not recognize any of the interfering molecules tested. Regression analysis of serum samples showed a good correlation with Roche Elecsys total PSA (n = 631, R2 = 0.986, ρc = 0.992), BioMérieux VIDAS TPSA (n = 631, R2 = 0.989, ρc = 0.993) and Roche Elecsys free PSA (n = 164, R2 = 0.973, ρc = 0.979), all with a relative difference below 15%, and a p < 0.001. A retrospective study of the use of UMELISA PSA in Cuba was carried out. The analytical performance characteristics of UMELISA PSA support its use for the quantification of tPSA and fPSA in human serum samples in a single kit, making it an affordable diagnostic assay available to the Cuban Public Health System and developing countries. Between 2014 and 2020, more than 3 million Cuban patients have benefited from the test free of charge. Answer: The impact of chronic dialysis on serum PSA, free PSA, and free/total PSA ratio has been studied to determine if prostate cancer detection is compromised in patients receiving long-term dialysis. The study found that there was a significant decrease in free PSA (fPSA) and the free/total PSA ratio (f/t PSA) after dialysis with high-flux membranes, but not with low-flux membranes. Specifically, fPSA and f/t PSA decreased significantly when high-flux membranes were used, with the ultrafiltrate containing 100% fPSA in high-flux membranes and no fPSA in low-flux membranes. Total PSA (tPSA) molecules did not pass through either high- or low-flux membranes, and the nonsignificant decrease in serum tPSA was attributed to adsorption to both types of dialysis membranes. The study concluded that while tPSA can be used to screen patients on dialysis regardless of the dialysis procedure and membrane type, fPSA and f/t PSA are only reliable with low-flux membranes.
This suggests that prostate cancer detection using fPSA and f/t PSA may be compromised in patients receiving long-term dialysis with high-flux membranes (PUBMED:10367847).
Instruction: Do psychosocial work environment factors measured with scales from the Copenhagen Psychosocial Questionnaire predict register-based sickness absence of 3 weeks or more in Denmark? Abstracts: abstract_id: PUBMED:21172770 Do psychosocial work environment factors measured with scales from the Copenhagen Psychosocial Questionnaire predict register-based sickness absence of 3 weeks or more in Denmark? Aims: To analyse the predictive validity of 18 psychosocial work environment scales from the Copenhagen Psychosocial Questionnaire version II (COPSOQ II) with regard to risk of sickness absence. Methods: The study population consisted of 3188 wage earners (52% women) from a representative sample of Danish residents. Participants received the long version of the COPSOQ II in autumn and winter 2004-2005, including 18 psychosocial work environment scales from the domains "Demands at work", "Work organization and job contents", and "Interpersonal relations and leadership". The study endpoint was register-based sickness absence of 3 weeks or more in the 1-year period following completion of the COPSOQ II. Associations between COPSOQ scales at baseline and sickness absence at follow-up were analysed with Cox proportional hazards models, adjusted for age, gender, prevalence of a health problem at baseline, and occupational grade. Results: Sickness absence during follow-up was predicted by a one standard deviation increase on the scales of cognitive demands (hazard ratio (HR) 1.17, 95% confidence interval (CI) 1.00-1.37), emotional demands (HR 1.28, 95% CI 1.10-1.50), and role conflicts (HR 1.32, 95% CI 1.15-1.52). After applying adjustment for multiple testing, the effect of emotional demands and of role conflict remained statistically significant, but not the effect of cognitive demands. Conclusions: Selected psychosocial work environment factors from the COPSOQ predict register-based sickness absence in the Danish workforce. abstract_id: PUBMED:37563743 Prospective associations of psychosocial work factors with sickness absence spells and duration: Results from the French national working conditions survey. Background: Some psychosocial work factors are associated with sickness absence; however, little information is available on the associations of various psychosocial work factors and multiple exposures with sickness absence spells and duration, and gender differences. Methods: Data were from the French working conditions survey conducted on a nationally representative sample of the working population. The study sample included 17,437 employees (7292 men, 10,145 women) followed from 2013 to 2016 and/or from 2016 to 2019. Occupational exposures (20 psychosocial work factors, 4 working time/hours factors, 4 physical work exposures) were measured at the beginning of each follow-up period. Hurdle and multinomial models were used to study the associations with the number of days and spells of sickness absence. Results: Most of the psychosocial work factors predicted the risk of at least 1 day of sickness absence. Stronger associations were found among women than men for some factors. Psychosocial work factors were more likely to predict the number of spells than the number of days of sickness absence. Some physical work exposures predicted sickness absence spells and days, whereas shift work in women predicted the risk of at least 1 day of sickness absence.
Dose-response associations were found between multiple psychosocial work exposures and sickness absence spells, and between multiple physical exposures and sickness absence spells and days. Conclusion: Comprehensive prevention policies oriented toward the whole psychosocial and physical work environment should be useful to reduce sickness absence among men and women. abstract_id: PUBMED:36242547 The Predictive Validity of the Danish Psychosocial Work Environment Questionnaire With Regard to Onset of Depressive Disorders and Long-Term Sickness Absence. Objectives: To investigate the predictive validity of 32 measures of the Danish Psychosocial Work Environment Questionnaire (DPQ) against two criteria variables: onset of depressive disorders and long-term sickness absence (LTSA). Methods: The DPQ was sent to 8958 employed individuals in 14 job groups of which 4340 responded (response rate: 48.4%). Depressive disorders were measured by self-report with a 6-month follow-up. LTSA was measured with a 1-year follow-up in a national register. We analyzed onset of depressive disorders at follow-up using logistic regression models, adjusted for age, sex, and job group, while excluding respondents with depressive disorders at baseline. We analyzed onset of LTSA with Cox regression models, adjusted for age, sex, and job group, while excluding respondents with previous LTSA. Results: The general pattern of the results followed our hypotheses as high job demands, poorly organized working conditions, poor relations to colleagues and superiors, and negative reactions to the work situation predicted onset of depressive disorders at follow-up and onset of LTSA during follow-up. Analyzing onset of depressive disorders and onset of LTSA, we found risk estimates that deviated from unity in most of the investigated associations. Overall, we found higher risk estimates when analyzing onset of depressive disorders compared with onset of LTSA. Conclusions: The analyses provide support for the predictive validity of most DPQ-measures. Results suggest that the DPQ constitutes a useful tool for identifying risk factors for depression and LTSA in the psychosocial work environment. abstract_id: PUBMED:26077204 Effect of Psychosocial Work Environment on Sickness Absence Among Patients Treated for Ischemic Heart Disease. Introduction: During the last decades mortality has declined in patients with coronary heart disease due to improvements in treatments and changes in life style, resulting in more people living with chronic heart disease. This implies that focus on rehabilitation and re-integration to the work-force becomes increasingly important. Previous studies among healthy workers suggest that the psychosocial working environment is associated with sickness absence. Whether the psychosocial working environment plays a role for patients with existing cardiovascular disease on return to work and sickness absence is less studied. Methods: A cohort of patients under 67 years and treated with percutaneous coronary intervention (PCI) was established in 2006. Three months after the procedure the patients (n = 625) answered a questionnaire about their psychosocial working environment. Patients were followed in registers for the following year. We examined the association between psychosocial working environment and sickness absence at 3 months, 1 year and new sick-listings during the first year with logistic regression. 
Results: A total of 528 patients had returned to work 3 months after the PCI, while 97 were still sick-listed. After 1 year, one was dead, 465 were working and 85 were receiving health-related benefits, while 74 had left the workforce permanently. A total of 106 patients were sick-listed during the whole first year or had left the workforce permanently. After the initial return to work, 90 experienced a new sickness absence during the first year while the remaining 429 did not. High work pace, low commitment to the workplace, low recognition (rewards) and low job control were associated with sickness absence at 3 months, but not after 1 year. Low job control as well as job strain (combination of high demands and low control) was associated with new sick-listings. Conclusion: The psychosocial working environment was associated with sickness absence 3 months after the PCI, but not 1 year after. abstract_id: PUBMED:27087828 Standardized assessment of psychosocial factors and their influence on medically confirmed health outcomes in workers: a systematic review. Previous studies of psychosocial work factors have indicated their importance for workers' health. However, to what extent health problems can be attributed to the nature of the work environment or other psychosocial factors is not clear. No previous systematic review has used inclusion criteria based on specific medical evaluation of work-related health outcomes and the use of validated instruments for the assessment of the psychosocial (work) environment. The aim of this systematic review is to summarize the evidence assessing the relationship between the psychosocial work environment and workers' health based on studies that used standardized and validated instruments to assess the psychosocial work environment and that focused on medically confirmed health outcomes. A systematic review of the literature was carried out by searching the databases PubMed, B-ON, Science Direct, Psycarticles, Psychology and Behavioral Sciences Collection and the search engine (Google Scholar) using appropriate words for studies published from 2004 to 2014. This review follows the recommendations of the Statement for Reporting Systematic Reviews (PRISMA). Studies were included in the review if data on psychosocial validated assessment method(s) for the study population and specific medical evaluation of health-related work outcome(s) were presented. In total, the search strategy yielded 10,623 references, of which 10 studies (seven prospective cohort and three cross-sectional) met the inclusion criteria. Most studies (7/10) observed an adverse effect of poor psychosocial work factors on workers' health: 3 on sickness absence, 4 on cardiovascular diseases. The other 3 studies reported detrimental effects on sleep and on disease-associated biomarkers. A more consistent effect was observed in studies of higher methodological quality that used a prospective design jointly with the use of validated instruments for the assessment of the psychosocial (work) environment and clinical evaluation. More prospective studies are needed to assess the evidence of work-related psychosocial factors on workers' health. abstract_id: PUBMED:25022281 Psychosocial work conditions associated with sickness absence among hospital employees. Background: Meaningfulness of the job, collaboration among colleagues, trustworthiness of the closest superior and bullying have previously been shown to be major covariates of intention to quit the job.
Aims: To test if these elements of the psychosocial work environment are also the most essential covariates of sickness absence. Methods: A cross-sectional questionnaire study of hospital employees which sought information on elements of the psychosocial work environment, general health status, life style, age, gender and profession. Data on sickness absence were obtained from the employer's salary database. Results: A total of 1809 hospital employees took part with a response rate of 65%. The mean age was 43 (range: 20-69) and 75% were female. Totally, 363 study participants (20%) had at least 14 days sickness absence (defined as high absence) during the preceding year. Associations between high sickness absence and 29 psychosocial work elements were analysed, adjusting for relevant confounders. Following multiple logistic regression analysis, three elements had an independent statistically significant association with high sickness absence: no exposure to bullying (odds ratio (95% confidence interval) 0.50 (0.33-0.77)), high meaningfulness of the job (0.71 (0.52-0.97)) and high trustworthiness of the closest superior (0.70 (0.54-0.92)). Conclusions: Elements of the psychosocial work environment which have previously been shown to have a significant independent association with intention to quit the job were also the most essential covariates of high sickness absence. abstract_id: PUBMED:32907881 Associations between physical and psychosocial work environment factors and sickness absence incidence depend on the lengths of the sickness absence episodes: a prospective study of 27 678 Danish employees. Objectives: This study examined if the association between work environment factors and sickness absence (SA) depended on the inclusion or exclusion of short-term SA episodes. Methods: We linked the 'Work Environment and Health in Denmark' survey with the 'Danish Register of Work Absences' (n=27 678). Using covariate adjusted Cox regression, we examined the associations between work environment factors and SA by changing the cut-off points for the length of the SA episodes, for example, episodes ≥1 day, ≥6 days and ≥21 days. We examined three physical work environment factors: 'Back bend or twisted', 'Lifting or carrying', 'Wet hands' and three psychosocial work environment factors: 'Poor influence', 'Role conflicts' and 'Bullying'. Results: 'Back bend or twisted' and 'Lifting or carrying' had small significant HRs for SA episodes ≥1 day and large and highly significant HRs for SA episodes ≥6 days and ≥21 days. 'Wet hands' had small significant HRs for SA episodes ≥1 day for both sexes and large and highly significant HR for ≥6 days for women. HRs of all three psychosocial factors were highly significant for SA episodes ≥1 day and ≥6 days for both sexes, and 'Poor influence' and 'Role conflicts' were significant for SA episodes ≥21 days for women. Conclusions: The physical work factors had higher associations with SA when SA episodes of 1-5 days were excluded and focus was on SA episodes ≥6 days. The psychosocial work factors were strongly associated with SA both with and without SA episodes of 1-5 days included in the analyses. abstract_id: PUBMED:30486752 Can a poor psychosocial work environment and insufficient organizational resources explain the higher risk of ill-health and sickness absence in human service occupations? Evidence from a Swedish national cohort. 
Aim: The aim of this study was to investigate differences in burnout, self-rated health (SRH) and sickness absence between human service occupations (HSOs) and other occupations, and whether they can be attributed to differences in psychosocial work environment and organizational resources. Methods: Data were derived from the Swedish Longitudinal Occupational Survey of Health, an approximately representative sample of the Swedish working population ( n = 4408). Employment in HSOs, psychosocial work environment and organizational resources in 2012 predicted relative risks of sickness absence, burnout and suboptimal SRH in 2014 using modified Poisson regressions. The psychosocial work factors' and organizational resource variables' relative importance were estimated by adding them to the models one by one, and with population attributable fractions (PAFs). Results: Employment in HSOs was associated with a higher risk of sickness absence and the risk was explained by psychosocial and organizational factors, particularly high emotional demands, low work-time control and exposure to workplace violence. Employment in HSOs was not associated with burnout after sociodemographic factors were adjusted for, and furthermore not with SRH. A lower risk of suboptimal SRH was found in HSOs than in other occupations with equivalent psychosocial work environment and organizational resources. PAFs indicated that psychosocial work environment and organizational resource improvements could lead to morbidity reductions for all outcomes; emotional demands were more important in HSOs. Conclusions: HSOs had higher risks of sickness absence and burnout than other occupations. The most important work factors to address were high emotional demands, low work-time control, and exposure to workplace violence. abstract_id: PUBMED:29343243 Retrospectively assessed psychosocial working conditions as predictors of prospectively assessed sickness absence and disability pension among older workers. Background: The aim was to explore the association between retrospectively assessed psychosocial working conditions during working life and prospectively assessed risk of sickness absence and disability pension among older workers. Methods: The prospective risk of register-based long-term sickness absence (LTSA) and disability pension was estimated from exposure to 12 different psychosocial work characteristics during working life among 5076 older workers from the CAMB cohort (Copenhagen Aging and Midlife Biobank). Analyses were censored for competing events and adjusted for age, gender, physical work environment, lifestyle, education, and prior LTSA. Results: LTSA was predicted by high levels of cognitive demands (HR 1.31 (95% CI 1.10-1.56)), high levels of emotional demands (HR 1.26 (95% CI 1.07-1.48)), low levels of influence at work (HR 1.30 (95% CI 1.03-1.64)), and high levels of role conflicts (HR 1.34 (95% CI 1.09-1.65)). Disability pension was predicted by low levels of influence at work (HR 2.73 (95% CI 1.49-5.00)) and low levels of recognition from management (HR 2.04 (95% CI 1.14-3.67)). Conclusions: This exploratory study found that retrospectively assessed high cognitive demands, high and medium emotional demands, low influence at work, low recognition from management, medium role clarity, and high role conflicts predicted LTSA and/or disability pension. abstract_id: PUBMED:24639009 Psychosocial factors at work and sickness absence: results from the French national SUMER survey. 
Objective: This study aims at exploring the associations between psychosocial work factors and sickness absence. Methods: The sample from the French National Survey SUMER 2010 included 46,962 employees. Sickness absence spells and days within the last year were studied as two outcomes. Psychosocial work factors included psychological demands, decision latitude, social support, reward, working time, and workplace violence variables. Covariates were age, occupation, economic activity, and other occupational exposures. Results: For both genders, low latitude, low reward, shift work, bullying, and verbal abuse were risk factors of absence spells while long working hours were a protective factor. High demands, low support, and physical violence were risk factors for women. Low support and bullying for both genders, high demands for women, and low reward, long working hours, and physical violence for men increased absence duration. Conclusions: This study highlights the importance of psychosocial work factors as risk factors of sickness absence. Answer: Yes, psychosocial work environment factors measured with scales from the Copenhagen Psychosocial Questionnaire (COPSOQ) do predict register-based sickness absence of 3 weeks or more in Denmark. A study analyzing the predictive validity of 18 psychosocial work environment scales from the COPSOQ II found that sickness absence during follow-up was predicted by a one standard deviation increase on the scales of cognitive demands, emotional demands, and role conflicts. After adjusting for multiple testing, the effects of emotional demands and role conflict remained statistically significant, indicating that these selected psychosocial work environment factors from the COPSOQ can predict register-based sickness absence in the Danish workforce (PUBMED:21172770).
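The sickness-absence studies above summarize associations as adjusted odds ratios with 95% confidence intervals (for example, 0.50 (0.33-0.77) for no exposure to bullying). As a purely illustrative sketch with hypothetical counts, not taken from any of the cited studies, the Python snippet below shows how an unadjusted odds ratio and its Wald confidence interval are obtained from a 2x2 exposure-by-outcome table; the adjusted estimates in the papers come from multiple logistic regression, which additionally controls for covariates.

import math

# Hypothetical 2x2 table (counts are invented, not from the cited studies)
#                            high absence   no high absence
# not exposed to bullying        a=60            b=540
# exposed to bullying            c=40            d=160
a, b, c, d = 60, 540, 40, 160

odds_ratio = (a / b) / (c / d)                    # cross-product (unadjusted) odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # standard error of log(OR), Wald approximation
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")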
Instruction: Nephrogenic adenoma in renal transplant recipients: a truly benign lesion? Abstracts: abstract_id: PUBMED:9801094 Nephrogenic adenoma in renal transplant recipients: a truly benign lesion? Objectives: Nephrogenic adenoma is a benign metaplastic lesion of the urinary bladder, reported to occur as a response to inflammation, trauma, intravesical therapies, and after renal transplantation. The aim of this study was to evaluate on the basis of chromosomal analysis whether nephrogenic adenoma really is benign. Methods: Twelve renal transplant recipients with histologically verified nephrogenic adenoma were analyzed for numerical aberrations of chromosomes 7, 9, and 17. Results were related to total DNA content, p53 and Ki-67 positivity, and clinical outcome. Ten patients with superficial bladder cancer and 10 healthy renal transplant recipients formed the control groups. Results: All 12 patients with nephrogenic adenoma had monosomy 9 in a mean of 24.3% (range 20% to 30%) of the evaluated cells; 3 patients had an additional trisomy 7 in a mean of 8% (range 6% to 10%) of the counted cells. Chromosome 17 was disomic in all patients. DNA histograms were diploid in 11 of the 12 patients and aneuploid in 1 patient. No p53 and Ki-67 positivity was present in this group. All patients with superficial bladder cancer had monosomy 9 in a mean of 79.8% (range 75% to 85%) of the counted cells. Two patients were found to have an additional trisomy 7 in 50% and 65% of the cells, respectively. The latter had an aneuploid histogram; the others had haploid/diploid histograms. p53 was negative in all specimens. Ki-67 positivity was present in 70% of these patients. All healthy transplant recipients had disomic chromosomal patterns according to diploid DNA histograms and negative immunocytochemical results. Conclusions: Even if in a lower percentage of cells, aberrations of chromosomes 7 and 9 were detected in nephrogenic adenoma. It therefore cannot be excluded that nephrogenic adenomas in immunosuppressed renal transplant recipients may develop into malignant lesions. abstract_id: PUBMED:21705821 Prostatic nephrogenic adenoma. Case report. Objective: We present a case of prostatic urethra nephrogenic adenoma as an incidental finding following transurethral resection of the prostate. Method/result: It is an incidental diagnosis of nephrogenic adenoma of prostatic urethra in a 50-year-old male operated for benign prostatic hyperplasia by means of transurethral resection. Conclusions: Nephrogenic adenoma is an infrequent and benign lesion of the urinary tract, associated with a previous history of trauma or irritation on the urothelium. Predisposing factors include infections, calculi, surgery, trauma and kidney transplantation. abstract_id: PUBMED:15533502 Nephrogenic adenoma of bladder after ibuprofen abuse. Nephrogenic adenoma is an infrequent benign lesion of the urinary system that occurs in patients with a history of genitourinary surgery, stone disease, trauma, chronic urinary tract infection, or renal transplantation. We report the first case of nephrogenic adenoma of the bladder in a 53-year-old man with a 5-year history of ibuprofen abuse for chronic arthritis. We stress the importance of investigating the analgesic abuser for nephrogenic adenoma if microhematuria and/or irritative lower urinary tract symptoms are present. abstract_id: PUBMED:17268911 Nephrogenic adenoma of the ureter: case report.
Nephrogenic adenoma is a rare metaplastic benign lesion of urothelium in response to genitourinary tract procedures, chronic infection, trauma, urinary calculi and immunosuppressive therapy after renal transplantation. It has been reported to occur throughout the urinary tract especially, in the urinary bladder. We report a case of nephrogenic adenoma arising from the lower end of the right ureter with clinical and radiological features suspicious of carcinoma. abstract_id: PUBMED:1090056 Nephrogenic adenoma of bladder in immunosuppressed renal transplantation. A case of nephrogenic adenoma is reported in an immunosuppressed renal transplant recipient. The nature of the lesion is discussed, and a possible relationship between benign tumors and impaired immunologic surveillance is considered. abstract_id: PUBMED:18043925 Recurrent nephrogenic adenoma in a 10-year-old boy with prune belly syndrome : a case presentation. Nephrogenic adenoma is a rare benign lesion of the urinary tract that is associated with a history of irritation or injury of the urothelium. Predisposing factors include infection, calculi, surgery, trauma, and renal transplantation. Nephrogenic adenoma commonly presents with lower urinary tract symptoms or hematuria. We present the case of recurrent nephrogenic adenoma in a 10-year-old boy with a history of prune belly syndrome and discuss management of this disease in the pediatric population. To our knowledge this represents the first reported case of recurrent nephrogenic adenoma associated with prune belly syndrome. abstract_id: PUBMED:32439556 A Case Report of an Obstructing Ureteral Nephrogenic Adenoma in a Child Managed With Open Ileal Ureter. Nephrogenic adenoma is a rare, benign lesion that can be encountered anywhere along the urinary tract. It is associated with genitourinary trauma, chronic inflammation, genitourinary surgery, renal transplant, urolithiasis, and radiation. In children, these lesions are almost exclusively found in the bladder. However, we report an unusual case of a 15-year-old boy with no prior urologic history who presented with an obstructing right ureteral nephrogenic adenoma that required an ileal ureter interposition and right ureterectomy. abstract_id: PUBMED:33123404 Nephrogenic Adenoma of the Urinary Bladder after Kidney Transplantation: Long-Term Follow-Up. Nephrogenic adenoma is a rare lesion that consists of epithelial cells arranged in tubular form, resembling tubules in the renal medulla, and is found usually in the urinary bladder although it can occur anywhere in the transitional epithelium of the lower urinary tract. The first case of nephrogenic adenoma of the urinary bladder was reported before the first kidney transplantation, and the lesion has been reported in patients with and without renal transplantation. The origin of cells in nephrogenic adenoma is debated and has been postulated to arise from cells of embryonic origin or from metaplasia secondary to chronic irritation or from implantation of allograft cells in patients with kidney transplantation. The long-term outcome and potential to convert into malignancy are not established, and therefore, there are no recommendations on how to follow up these patients. We present a case of a patient who was found to have nephrogenic adenoma of the urinary bladder during his second kidney transplantation from a cadaveric donor. He had undergone living donor kidney transplantation previously which subsequently failed. 
The patient did not manifest any symptoms of nephrogenic adenoma. During a follow-up period of 5 years, he has not manifested any symptoms related to nephrogenic metaplasia. Histopathological examination 5 years after the second transplantation did not show any malignant change. It can be concluded that nephrogenic adenoma is likely to behave in benign fashion post kidney transplantation. abstract_id: PUBMED:24686831 Nephrogenic adenoma of the urinary bladder: a report of three cases and a review of the literature. Background: Nephrogenic adenoma (NA) is a rare, benign disease of the urinary tract, usually as a response to chronic irritation or trauma. Its diagnosis, staging, and treatment are not well established. Case Report: We report on 3 cases of nephrogenic adenoma of the urinary bladder treated in our hospital between February 2011 and December 2012 to assess our experience and clinical outcome updating and reviewing the literature concerning this issue. All patients had undergone previous open urosurgery. Two patients had kidney transplantation. Gross hematuria and microhematuria were found in 2 patients. One patient had recurrent urinary tract infection. One patient had NA associated with transitional cell carcinoma (TCC). Recurrent nephrogenic adenomas were diagnosed in 2 patients (time to disease relapse was 5 and 9 months). All nephrogenic adenomas and recurrent tumors were treated with transurethral resection. Conclusions: Although NA is a benign metaplastic lesion of the urothelium, its recurrence rate is relatively high, thus careful and regular follow-up is necessary. Endoscopic characteristics of NA are not specific and a definite diagnosis must be made after histological analysis of resected specimens. abstract_id: PUBMED:29198684 Nephrogenic Adenoma Complicating Renal Transplantation: A Case Report and Discussion. Nephrogenic adenoma (NA) is a benign adenomatous lesion of the urinary tract. Long considered to be a rare phenomenon, case series from the renal transplant population suggest that it may be much more common within this group. Although NA is considered to be a lesion with low premalignant potential, hematuria, lower urinary tract symptoms, and recurrent urinary tract infections (UTIs) are frequently observed in the context of NA. Furthermore, after resection of NA, lesion recurrence and persistent symptoms are frequently observed. Here we present the case of a 69-year-old male renal transplant recipient with NA and associated recurrent UTIs despite cystoscopic resection of the primary lesion. This case is illustrative of the clinical impact of NA and the need for ongoing work into the development of strategies to manage this problematic phenomenon. Answer: Nephrogenic adenoma is generally considered a benign lesion of the urinary tract, often associated with a history of trauma, irritation, surgery, or renal transplantation (PUBMED:21705821; PUBMED:15533502; PUBMED:17268911; PUBMED:18043925; PUBMED:32439556). However, the benign nature of nephrogenic adenoma in renal transplant recipients has been questioned due to findings of chromosomal aberrations. A study involving renal transplant recipients with histologically verified nephrogenic adenoma found that all patients had monosomy 9 in a significant percentage of evaluated cells, and some had additional trisomy 7 (PUBMED:9801094). These chromosomal abnormalities raise concerns about the potential for nephrogenic adenomas to develop into malignant lesions, especially in immunosuppressed individuals. 
The study's findings suggest that nephrogenic adenoma may not be entirely benign in this patient population, as chromosomal aberrations are often associated with malignancy. Furthermore, the long-term outcome and potential for nephrogenic adenoma to convert into malignancy are not well established, leading to uncertainty about how to follow up with these patients (PUBMED:33123404). Although a case report indicated that a patient with nephrogenic adenoma post kidney transplantation did not show any malignant change during a 5-year follow-up, the possibility of benign behavior post-transplantation cannot be conclusively determined (PUBMED:33123404). In summary, while nephrogenic adenoma is typically considered benign, the presence of chromosomal abnormalities in renal transplant recipients suggests that there may be a risk of malignant transformation in this population. Therefore, careful and regular follow-up is recommended for renal transplant recipients with nephrogenic adenoma (PUBMED:9801094; PUBMED:33123404; PUBMED:24686831).
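The cytogenetic findings in the record above are reported descriptively as percentages of evaluated cells (monosomy 9 in a mean of 24.3% of cells in nephrogenic adenoma versus 79.8% in superficial bladder cancer). Purely as an illustration of how such proportions could be compared, and not as a reconstruction of the original analysis, the snippet below runs a chi-square test on invented cell counts of roughly those magnitudes.

from scipy.stats import chi2_contingency

# Invented counts of evaluated cells (not taken from the cited study)
#                        monosomy 9   disomic
# nephrogenic adenoma         49        151    # ~24.5% of 200 cells
# superficial bladder ca     160         40    # ~80.0% of 200 cells
table = [[49, 151], [160, 40]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.2g}")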
Instruction: Anaphylaxis to apple: is fasting a risk factor for LTP-allergic patients? Abstracts: abstract_id: PUBMED:21114199 Anaphylaxis to apple: is fasting a risk factor for LTP-allergic patients? Background: Primary apple allergy is frequent in Mediterranean countries where hypersensitivity to lipid transfer protein (LTP) is common. Due to its stability upon pepsin digestion, LTP may cause systemic allergic reactions. This study investigated the potential risk associated with an isolated intake of apple while fasting in LTP-hypersensitive patients with clinical allergy to peach but not to apple. Patients And Methods: Based on the observation of 6 patients who experienced 7 apple-induced anaphylactic reactions that in 6 cases followed the ingestion of the fruit after fasting, open food challenges were carried out in 12 LTP-hypersensitive patients with peach allergy but tolerant to apple. Results: Four out of the 12 patients (33%) reacted to apple upon oral challenge. Conclusion: Fasting seems to play a relevant role in the clinical expression of allergy to LTP. It is possible that in an empty gastrointestinal tract the allergen is absorbed more rapidly. Alternatively, pepsin might digest the food matrix more efficiently, thus increasing the concentration of the purified allergen that comes in contact with the gut mucosa. abstract_id: PUBMED:36902611 Combined LTP Sublingual and Oral Immunotherapy in LTP Syndrome: Efficacy and Safety. Introduction: SLIT for the treatment of plant food allergies has been demonstrated to be safe but less effective than OIT, although the latter is associated with more adverse reactions. The aim of the study was to evaluate the efficacy and safety of a new protocol starting with SLIT-peach followed by OIT with commercial peach juice in patients with LTP syndrome. Methods: This was a prospective, noncontrolled, open study on patients with LTP syndrome who are not sensitized to storage proteins. SLIT peach ALK was followed by OIT with Granini® peach juice after 40 days of the SLIT maintenance phase. At home, the Granini® juice dose was progressively increased during the 42 days until reaching 200 ml. After achieving the maximum dose, an open oral food challenge was carried out with the food that had caused the most severe reaction. If negative, the patient was instructed to progressively introduce the foods that were avoided before starting immunotherapy at home. Patients were reviewed 1 month later. The quality-of-life questionnaire FAQLQ-AF was completed at the beginning of the study and one month after the final challenge. Results: Forty-five patients were included, most of them with LTP anaphylaxis. Peach SLIT was well tolerated in 80.5%, and OIT with Granini® was well tolerated in 85%, with no severe adverse reactions. The final provocation was successful in 39/45 (86.6%). One month after the final provocation, 42/45 (93.3%) patients had no dietary restrictions. FAQLQ-AF scores were significantly reduced. Conclusions: This combination of peach SLIT and OIT with commercial peach juice provides a new, fast, effective, and safe immunotherapy option for selected patients with LTP syndrome who are not allergic to storage proteins, improving their quality of life. This study suggests that cross-desensitization relative to the nsLTPs of several plant foods can be achieved by using Pru p 3. abstract_id: PUBMED:27609533 Lettuce Allergy Is a Lipid Transfer Syndrome-Related Food Allergy With a High Risk of Severe Reactions.
Background And Objective: Lipid transfer protein (LTP) sensitization is the most common cause of food allergy in the Mediterranean area, with peach allergy acting as the primary sensitizer in most cases. Lettuce has been described as a common offending food in patients with LTP syndrome. The aim of the study was to investigate the frequency and clinical expression of LTP syndrome in a sample of lettuce-allergic patients. Methods: We determined specific IgE to Pru p 3 and lettuce in a sample of 30 patients with a diagnosis of lettuce allergy. Symptoms elicited by other LTP-containing plant-derived foods and the presence of cofactors were assessed. Results: The clinical symptoms of lettuce allergy were frequently severe, with 18 of the 30 patients experiencing anaphylaxis. All the patients had allergic reactions to other plant foods. Cofactors were involved in the clinical reactions of 13 of the 30 patients. Sensitization to pollens was found in 90% of patients. Conclusions: Lettuce allergy is found not as an isolated condition but in the context of LTP syndrome, and it is characterized by severe reactions and frequent cofactor association. abstract_id: PUBMED:36792201 Evaluating Risk Factors for Developing Allergic Reactions during Plasma Exchange Using Fresh-frozen Plasma: A Single-center Retrospective Study. Objective Allergic reactions are a severe complication of plasma exchange (PEx). Few reports have analyzed allergic reactions during PEx using fresh-frozen plasma (FFP) as a replacement solution. We therefore clarified the relationship between risk and exacerbation factors that lead to the onset of PEx-related allergic reactions, particularly PEx using FFP, and examined whether or not allergic reactions were predictable. Methods This retrospective study included 88 consecutive patients who underwent PEx with FFP as a replacement solution at Kitasato University Hospital. The patients were grouped according to the presence of allergic reactions and compared. Data were analyzed using the χ2 test, Mann-Whitney U test, and a binomial logistic analysis. Statistical analyses were performed using the EZR software program, version 1.54, with p < 0.05 considered statistically significant. Results There were 44 allergic reaction cases. The average time to the onset of an allergic reaction was 63.5 (45-93) minutes. The allergic reaction-onset group had significantly higher average albumin (Alb) levels than did the non-allergic reaction-onset group. The binomial logistic analysis identified Alb levels as an independent risk factor for allergic reactions. The receiver operating characteristic analysis identified an Alb level ≥3.4 g/dL as a risk factor for allergic reactions (area under the curve: 0.731; 95% confidence interval: 0.622-0.84). Conclusion Allergic reaction onset occurred approximately one hour after PEx initiation in the critical period. A serum Alb level ≥3.4 g/dL was identified as a risk factor for predicting allergic reactions. Patients with Alb levels ≥3.4 g/dL at the first PEx should be monitored for allergic reaction symptoms. abstract_id: PUBMED:35807887 Epinephrine Auto-Injector Prescription and Use: A Retrospective Analysis and Clinical Risk Assessment of Adult Patients Sensitized to Lipid Transfer Protein. Lipid transfer proteins (LTPs) are widespread plant food allergens which represent the main cause of food allergy in adults living in the Mediterranean basin.
The purpose of this study was to investigate, in LTP patients, the actual use of prescribed epinephrine auto-injectors and the appropriateness of their prescription. In addition, we investigated in these patients: (1) occurrence of new food reactions in the three years following diagnosis; (2) need for and number of accesses to emergency services; (3) presence of possible predictive factors for further food reactions. One hundred sixty-five adult patients sensitized to LTPs were included. During follow-up, we recorded 68 further reactions, most of them (77.9%) characterized by local symptoms; the patients rarely required emergency-department visits (16.1%) and only one patient (1.7%) used the epinephrine auto-injector. The patients with a previous history of anaphylaxis at baseline accessed emergency services again during the follow-up (p = 0.006). The majority of patients with recorded systemic reactions (p = 0.004) and treated in an emergency room (p = 0.028) did not have any cofactor-enhanced reaction at diagnosis. We noted an association between Platanus pollen sensitization and severity of further reactions during the follow-up (p = 0.026). Epinephrine auto-injectors were prescribed to 108/165 patients (65.5%), with an over-prescription rate of 25%. The unforeseeable clinical presentation of LTP allergic reactions and the eventual role played by cofactors make it necessary to schedule follow-up to monitor the patients over time and to assess the actual use of the prescribed epinephrine auto-injectors. abstract_id: PUBMED:3394594 Anaphylactic reactions incidence in allergic and atopic patients. One thousand and thirty penicillin allergics of immediate type and 1344 pollen allergic patients were examined. It was noticed that none of the penicillin allergics suffered from pollen allergy and none of the atopic patients allergic to pollen had penicillin induced allergy of immediate type. The data obtained were processed through the chi square test in correlation with the incidence of penicillin and pollen induced allergy among the normal population. The conclusion was drawn that atopic patients with pollen allergy show a lower risk than the normal population to have allergic reactions of immediate type to penicillin. abstract_id: PUBMED:12872677 European hornet (Vespa crabro) sting: a new risk factor for life-threatening reaction in hymenoptera allergic patients? Background: Severity of a previous reaction, adult age, male gender and honeybee sting are risk factors for severe systemic reactions after hymenoptera stings. The aim of the study was to assess the association between the Vespa crabro sting and severe systemic reactions. Methods: One hundred fifty-seven hymenoptera allergic patients with a positive case history for systemic reactions were selected on the basis of unequivocal identification of the stinging insect. In 97 patients the culprit insect was Vespula spp., in 35 it was Vespa crabro, and in the remaining 25 patients it was Apis mellifera. The relative risk for a life-threatening reaction after a sting was evaluated for each hymenoptera species. Results: While the percentage of life-threatening reactions was similar in both the Apis mellifera (24%) and Vespula spp. (27.8%) allergic patient groups, a very high prevalence (81.2%) was documented in the Vespa crabro allergic patient group. The relative risk for life-threatening reactions after a Vespa crabro sting was about three times higher (RR = 2.74; 95% CI 1.93-3.89; P < 0.0001) than it was for a honeybee or yellow jacket sting.
The increased risk for life-threatening reactions after a Vespa crabro sting was independent of patient age. Conclusion: Vespa crabro sting seems to be a new risk factor for life-threatening reactions after hymenoptera sting. abstract_id: PUBMED:34296844 Allergy to lipid transfer proteins (LTP) in a pediatric population. Summary: Background. Lipid transfer proteins (LTP) are considered important plant food allergens in the Mediterranean area, but little is known about LTP allergy in pediatric age. Our aim was to characterize LTP allergy in children. Methods. We reviewed the clinical data from all children evaluated in our department with LTP allergy. Of the 76 patients with LTP allergy, 26 children were included, 50% female, median age 10 years (1-17). Symptoms included urticaria in 58% (n = 15), anaphylaxis in 46% (n = 12) and OAS in 42% (n = 11). Results. Multiple reactions with different foods occurred in 69%. Cofactors were reported in 27% (n = 7). All patients had positive SPT to peach LTP extract and sIgE to Pru p 3. No association between the occurrence of severe reactions and sIgE to Pru p 3 (p = 0.462), sIgE to Cor a 8 (p = 0.896), SPT to peach LTP extract (p = 0.846) or the number of positive SPT to fruits/tree nuts (p = 0.972; p = 0.676) was found. Ninety-two percent of the patients tolerated fruits from the Rosaceae family without peel. Twelve percent reported reactions to new LTP-containing foods during follow-up. LTP allergy can occur from early childhood. Conclusions. Since anaphylaxis is common and cofactors act as severity enhancers, it is fundamental to recognize LTP allergy in children. Currently available diagnostic tests (SPT and sIgE) cannot accurately predict food tolerance or anticipate reaction severity. abstract_id: PUBMED:38446078 Food-dependent exercise-induced allergic reactions in Lipid Transfer Protein (LTP) hypersensitive subjects: new data and a critical reappraisal. Summary: Background. Lipid transfer protein is the main cause of both primary food allergy and food-dependent exercise-induced allergic reactions (FDEIAR) in Italy. What characterizes LTP-hypersensitive patients with FDEIAR is still unclear. We investigated the key characteristics of LTP-hypersensitive patients with or without FDEIAR in a large cohort of individuals sensitized to this allergen. Methods. 1,203 food-allergic patients, diagnosed on the basis of unequivocal clinical history and presence of circulating food allergen-specific IgE, were studied. Serum IgE reactivity was assessed using the Allergen Explorer ALEX® system (Macroarray Diagnostics, Vienna, Austria). Association of specific IgE reactivities with FDEIAR was investigated, and patients with and without FDEIAR sensitized to LTP were compared. Results. 116 subjects (9.6%) had FDEIAR. Among these, 77 (66.3%) were LTP-reactors and 16 (13.8%) were sensitized to Tri a 19 (omega-5-gliadin). Different LTPs and omega-5-gliadin emerged as the sole allergens clearly associated with FDEIAR. Severity of allergic reactions paralleled the level of specific IgE to LTPs. Patients with FDEIAR showed significantly lower IgE levels than their counterparts with food allergy at rest, and displayed nearly identical IgE levels regardless of the severity of allergic reactions induced by exercise. Conclusions. FDEIAR are associated with specific allergens. Specific IgE levels in LTP-hypersensitive patients with FDEIAR show an intermediate titer between those simply sensitized and those showing classic food allergy.
abstract_id: PUBMED:27138061 Can we identify patients at risk of life-threatening allergic reactions to food? Anaphylaxis has been defined as a 'severe, life-threatening generalized or systemic hypersensitivity reaction'. However, data indicate that the vast majority of food-triggered anaphylactic reactions are not life-threatening. Nonetheless, severe life-threatening reactions do occur and are unpredictable. We discuss the concepts surrounding perceptions of severe, life-threatening allergic reactions to food by different stakeholders, with particular reference to the inclusion of clinical severity as a factor in allergy and allergen risk management. We review the evidence regarding factors that might be used to identify those at most risk of severe allergic reactions to food, and the consequences of misinformation in this regard. For example, a significant proportion of food-allergic children also have asthma, yet almost none will experience a fatal food-allergic reaction; asthma is not, in itself, a strong predictor for fatal anaphylaxis. The relationship between dose of allergen exposure and symptom severity is unclear. While dose appears to be a risk factor in at least a subgroup of patients, studies report that individuals with prior anaphylaxis do not have a lower eliciting dose than those reporting previous mild reactions. It is therefore important to consider severity and sensitivity as separate factors, as a highly sensitive individual will not necessarily experience severe symptoms during an allergic reaction. We identify the knowledge gaps that need to be addressed to improve our ability to better identify those most at risk of severe food-induced allergic reactions. Answer: Yes, fasting appears to be a risk factor for LTP allergic patients experiencing anaphylaxis to apple. A study investigated the potential risk associated with the isolated intake of apple while fasting in LTP-hypersensitive patients who had a clinical allergy to peach but not to apple. The results showed that four out of the twelve patients (33%) reacted to apple upon oral challenge when they were fasting. The study concluded that fasting seems to play a relevant role in the clinical expression of allergy to LTP, suggesting that in an empty gastrointestinal tract, the allergen may be absorbed more rapidly. Alternatively, it was proposed that pepsin might digest the food matrix more efficiently in a fasting state, thus increasing the concentration of the purified allergen that comes in contact with the gut mucosa (PUBMED:21114199).
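The European hornet abstract cited above (PUBMED:12872677) expresses the excess danger of a Vespa crabro sting as a relative risk (RR = 2.74, 95% CI 1.93-3.89). The short calculation below only illustrates how a relative risk of that order arises from the reported group sizes and percentages; it approximates the comparison group by pooling the yellow jacket and honeybee patients and does not reproduce the authors' exact computation.

# Reported in the abstract: 35 Vespa crabro, 97 Vespula spp., 25 Apis mellifera patients
vespa_crabro_risk = 0.812                                # 81.2% life-threatening reactions
other_events = round(0.278 * 97) + round(0.24 * 25)      # ~27 Vespula + 6 Apis events
other_risk = other_events / (97 + 25)                    # ~0.27 in the pooled comparison group
relative_risk = vespa_crabro_risk / other_risk
print(f"approximate RR = {relative_risk:.1f}")           # ~3.0, in line with the reported 2.74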
Instruction: Does it matter how we refer to individuals with substance-related conditions? Abstracts: abstract_id: PUBMED:20005692 Does it matter how we refer to individuals with substance-related conditions? A randomized study of two commonly used terms. Objective: Stigma is a frequently cited barrier to help-seeking for many with substance-related conditions. Common ways of describing individuals with such problems may perpetuate or diminish stigmatizing attitudes yet little research exists to inform this debate. We sought to determine whether referring to an individual as "a substance abuser" vs. "having a substance use disorder" evokes different judgments about behavioral self-regulation, social threat, and treatment vs. punishment. Method: A randomized, between-subjects, cross-sectional design was utilized. Participants were asked to read a vignette containing one of the two terms and to rate their agreement with a number of related statements. Clinicians (N=516) attending two mental health conferences (63% female, 81% white, M age 51; 65% doctoral-level) completed the study (71% response rate). A Likert-scaled questionnaire with three subscales ["perpetrator-punishment" (alpha=.80); "social threat" (alpha=.86); "victim-treatment" (alpha=.64)] assessed the perceived causes of the problem, whether the character was a social threat, able to regulate substance use, and should receive therapeutic vs. punitive action. Results: No differences were detected between groups on the social threat or victim-treatment subscales. However, a difference was detected on the perpetrator-punishment scale. Compared to those in the "substance use disorder" condition, those in the "substance abuser" condition agreed more with the notion that the character was personally culpable and that punitive measures should be taken. Conclusions: Even among highly trained mental health professionals, exposure to these two commonly used terms evokes systematically different judgments. The commonly used "substance abuser" term may perpetuate stigmatizing attitudes. abstract_id: PUBMED:30875650 Substance abuse and white matter: Findings, limitations, and future of diffusion tensor imaging research. Individuals who abuse substances often differ from nonusers in their brain structure. Substance abuse and addiction is often associated with atrophy and pathology of grey matter, but much less is known about the role of white matter, which constitutes over half of human brain volume. Diffusion tensor imaging (DTI), a method for non-invasively estimating white matter, is increasingly being used to study addiction and substance abuse. Here we review recent DTI studies of major substances of abuse (alcohol, opiates, cocaine, cannabis, and nicotine substance abuse) to examine the relationship, specificity, causality, and permanence of substance-related differences in white matter microstructure. Across substance, users tended to exhibit differences in the microstructure of major fiber pathways, such as the corpus callosum. The direction of these differences, however, appeared substance-dependent. The subsample of longitudinal studies reviewed suggests that substance abuse may cause changes in white matter, though it is unclear to what extent such alterations are permanent. While collectively informative, some studies reviewed were limited by methodological and technical approach. We therefore also provide methodological guidance for future research using DTI to study substance abuse. 
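Several abstracts in this record rely on diffusion tensor imaging scalars such as fractional anisotropy (FA) and radial diffusivity. As background only, the sketch below computes these quantities from the three eigenvalues of a single diffusion tensor using their standard definitions; the eigenvalues shown are hypothetical.

import numpy as np

# Hypothetical diffusion tensor eigenvalues, sorted largest first (units of 10^-3 mm^2/s)
lambdas = np.array([1.6, 0.4, 0.3])

mean_diffusivity = lambdas.mean()
radial_diffusivity = lambdas[1:].mean()      # mean of the two smaller eigenvalues
fractional_anisotropy = np.sqrt(
    1.5 * np.sum((lambdas - mean_diffusivity) ** 2) / np.sum(lambdas ** 2)
)
print(f"MD = {mean_diffusivity:.3f}, RD = {radial_diffusivity:.3f}, FA = {fractional_anisotropy:.3f}")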
abstract_id: PUBMED:33705253 Perceptions diverge on aspects related to substance use disorder: An analysis of individuals in recovery, physicians, nurses, and medical students. Background: Interactions with healthcare workers can provide effective entrance into treatment, ensuring retention and lifelong recovery for individuals with Substance Use Disorder (SUD). Healthcare providers approach the challenges of patient management with different skills, comfort levels, and viewpoints. Individuals in recovery also provide crucial perspectives relevant to the complex aspects of the drug epidemic. The purpose of this study was to determine if perceptions of SUD diverge among individuals in recovery, physicians, nurses and medical students. Methods: A survey consisting of 29 Likert statements was deployed to physicians, nurses, medical students, and persons with SUD in recovery. Respondents were asked to rate their level of agreement on statements about SUD such as treatment, stigma, medications for opioid use disorder (MOUD), naloxone kits, safe injection sites, and methamphetamine usage. Separate Welch's analysis of variances (ANOVAs) were conducted to determine differences between the respondent groups and each statement. For any statistically significant findings, Games-Howell post-hoc analyses were employed. Results: A total of 523 individuals provided survey responses: individuals in recovery (n = 111), physicians (n = 113), nurses (n = 206), and medical students (n = 93). Survey results revealed the majority of items had statistically significant differences in respondent groups. Perceptions diverged on items related to treatment, stigma, MOUD, take-home naloxone kits, safe injection sites, needle exchange programs, and methamphetamine. Conclusion: As healthcare providers and policymakers develop treatment strategies to engage those with SUD in quality treatment, they will benefit from understanding how different viewpoints on SUD affect treatment for these individuals. These attitudes impact stigma, willingness to prescribe new treatments, and development of clinical relationships. The insight from this study allows for important discussions on the substance use health crisis and further inquiry on why these differences exist and how the diverging viewpoints may impact the lives of persons with SUD. abstract_id: PUBMED:26833821 Characterization of white matter integrity deficits in cocaine-dependent individuals with substance-induced psychosis compared with non-psychotic cocaine users. With sufficient drug exposure, some individuals develop transient psychotic symptoms referred to as 'substance-induced psychosis' (SIP), which closely resemble the symptoms observed in schizophrenia spectrum disorders. The comparability in psychotic presentation between SIP and the schizophrenias suggests that similar underlying neural deficits may contribute to the emergence of psychosis across these disorders. Only a small number of studies have investigated structural alterations in SIP, and all have been limited to volumetric imaging methods, with none controlling for the effects of chronic drug exposure. To investigate white matter abnormalities associated with SIP, diffusion tensor imaging was employed in a group of individuals with cocaine-associated psychosis (CAP; n = 24) and a cocaine-dependent non-psychotic (CDN) group (n = 43). Tract-based spatial statistics was used to investigate group differences in white matter diffusion parameters. 
The CAP group showed significantly lower fractional anisotropy values than the CDN group (p < 0.05) in voxels within white matter tracts of fronto-temporal, fronto-thalamic and interhemispheric pathways. The greatest differences in white matter integrity were present in the corpus callosum, corona radiata, bilateral superior longitudinal fasciculi and bilateral inferior longitudinal fasciculi. Additionally, the CAP group had voxels of significantly higher radial diffusivity in a subset of the previously mentioned pathways. These results are the first description of white matter integrity abnormalities in a SIP sample and indicate that differences in these pathways may be a shared factor in the expression of different forms of psychosis. abstract_id: PUBMED:29569312 Substance-use initiation moderates the effect of stress on white-matter microstructure in adolescents. Background: While childhood stress may contribute risk to substance-use initiation and differences in brain white-matter development, understanding of the potential impact of substance-use initiation on the relationship between experienced stress and white-matter microstructure remains limited. Objectives: This study examined whether substance-use initiation moderated the effect of perceived stress on white-matter differences using measures of primary white-matter fiber anisotropy. Methods: Forty adolescents (age 14.75 ± .87 years) were assessed on the Perceived Stress Scale, and 50% were determined to have presence of substance-use initiation. White-matter microstructure was examined using primary-fiber orientation anisotropy, which may reflect white-matter integrity, modeled separately from other fiber orientations in the same voxels. Analyses were conducted on regions of interest previously associated with childhood stress and substance use. Results: Lower perceived stress and presence of substance-use initiation were related to greater right cingulum primary-fiber measures. Substance-use-initiation status moderated the association between perceived stress and right cingulum primary-fiber measures, such that higher perceived stress was associated with lower right cingulum primary-fiber anisotropy in adolescents without substance-use initiation, but not in those with substance-use initiation. Conclusions And Scientific Significance: Findings in primary-fiber anisotropy suggest that differences in right cingulum white-matter integrity are associated with substance-use initiation in higher-stress adolescents. This reflects a possible pre-existing risk factor, an impact of early substance use, or a combination thereof. Examination of potential markers associated with substance-use initiation in white-matter microstructure among stress-exposed youth warrants additional investigation, as such biomarkers may inform efforts relating to tailored interventions. (Am J Addict 2018;27:217-224).
Resolution of these issues may contribute to the identification of clinically relevant imaging biomarkers. Using pooled data from 14 countries, the authors sought to identify general and substance-specific associations between dependence and regional brain volumes. Method: Brain structure was examined in a mega-analysis of previously published data pooled from 23 laboratories, including 3,240 individuals, 2,140 of whom had substance dependence on one of five substances: alcohol, nicotine, cocaine, methamphetamine, or cannabis. Subcortical volume and cortical thickness in regions defined by FreeSurfer were compared with nondependent control subjects when all sampled substance categories were combined, as well as separately, while controlling for age, sex, imaging site, and total intracranial volume. Because of extensive associations with alcohol dependence, a secondary contrast was also performed for dependence on all substances except alcohol. An optimized split-half strategy was used to assess the reliability of the findings. Results: Lower volume or thickness was observed in many brain regions in individuals with substance dependence. The greatest effects were associated with alcohol use disorder. A set of affected regions related to dependence in general, regardless of the substance, included the insula and the medial orbitofrontal cortex. Furthermore, a support vector machine multivariate classification of regional brain volumes successfully classified individuals with substance dependence on alcohol or nicotine relative to nondependent control subjects. Conclusions: The results indicate that dependence on a range of different substances shares a common neural substrate and that differential patterns of regional volume could serve as useful biomarkers of dependence on alcohol and nicotine. abstract_id: PUBMED:38422868 Co-occurrence of mental illness and substance use among US pregnant individuals, 2012-2021. Aim: Substance use disorders are increasingly prevalent among pregnant individuals, with evident risks of adverse perinatal outcomes. This study examines substance use (tobacco, alcohol and marijuana) among pregnant individuals with mental illness. Methods: A national representative sample of pregnant individuals were derived from 2012 to 2021 National Survey of Drug Use and Health data. Associations of past-year mental illness with past-month polysubstance use and each substance use were analyzed by logistic regression models, with complex sampling weights and survey year. Results: Among 6801 pregnant individuals, 16.4% reported having any mental illness (AMI) in 2012 and 2013, increasing to 23.8% in 2020-2021; and SMI increased from 3.3% to 9.4%. Polysubstance use increased disproportionately among those with severe mental illness (SMI), from 14.0% to 18.6%. Pregnant individuals with greater severity of mental illness had higher odds of polysubstance use (Adjusted Odds Ratio, 95% CI: AMI but no SMI vs. without AMI: 1.59 [1.04, 2.44]; SMI vs. without AMI: 5.48 [2.77, 10.82]). Conclusions: Pregnant individuals with greater severity of mental illness were more likely to engage in substance use. Evidence-based educational, screening and treatment services, and public policy changes are warranted to mitigate the harmful health outcomes of substance use among US pregnant individuals with mental illness. abstract_id: PUBMED:28861547 A Systematic Review of Interventions to Reduce Problematic Substance Use Among Transgender Individuals: A Call to Action. 
Persons who are transgender (i.e., individuals who are assigned one sex at birth, but who do not identify with that sex) are at elevated risk for developing problematic substance use. Recent studies indicate that transgender persons have high rates of alcohol use, illicit drug use, and nonmedical use of prescription drugs and evince more severe misuse of these substances compared with nontransgender individuals. Despite the high rates of substance use among transgender persons and the multiple conceptual and narrative recommendations for substance use treatments, there is a lack of consensus or awareness of empirically tested interventions and programs effective for this population. Thus, it is critical to examine current substance use interventions for transgender individuals to identify gaps in the field and to immediately put forth efforts to reduce problematic substance use. This systematic review is the first to attempt a comprehensive identification and synthesis of the available evidence on interventions for reducing problematic substance use among transgender persons. Reflective of the state of the field regarding transgender care for substance use, we found a deficiency of studies to include in this systematic review (n=2). Perhaps the most important conclusion of this review is that well-designed, theoretically informed culturally sensitive research focused on developing and rigorously testing interventions for substance use among transgender individuals is alarmingly scarce. This review discusses barriers to intervention design and synthesizes treatment recommendations for future work. abstract_id: PUBMED:23082831 Self-reported psychotic disorders among Individuals with substance use disorders: findings from the National Epidemiologic Survey on Alcohol and Related Conditions. Background And Objectives: Comorbidity of substance use disorders (SUDs) and psychotic disorders (PDs) presents many challenges in diagnosis and treatment. Most reports to-date focus on the prevalence of SUDs among clinical populations of patients with PDs, and there is a lack of data pertaining to rates of PDs among individuals with substance use and SUDs. Methods: We analyzed data on 43,093 respondents age 18 and above from the National Epidemiologic Survey on Alcohol and Related Conditions, a nationally representative US survey (Wave 1, 2001-2002). Cross-tabulations were used to derive prevalence estimates of PDs among individuals with 12-month substance use or SUDs across 10 categories of substances. Odds ratios (ORs) were derived from bivariate logistic regression analyses to examine the relationships between lifetime PDs and 12-month substance use or SUDs for the specific categories of substances. Results: Among individuals with 12-month substance use, prevalence of PDs was found to be elevated in 8 of 10 categories of substances, particularly among amphetamine (OR = 8.8) and cocaine (OR = 10.3) users compared to nonusers. Among individuals with SUDs, prevalence of PDs was elevated in 9 of 10 categories of substances compared to individuals without SUDs. Conclusions And Scientific Significance: Our findings on the increased rates of PDs among substance users and individuals with SUDs across a wide range of substances emphasize the importance of screening for PDs while treating patients with substance use and SUDs. This may allow for early intervention and adequate referral to appropriate settings. 
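The gray matter mega-analysis cited earlier in this record (PUBMED:30336705) mentions a support vector machine classification of regional brain volumes that separated dependent from non-dependent individuals. The outline below is a generic scikit-learn sketch of that type of analysis, not the authors' pipeline; the feature matrix and labels are simulated stand-ins, so the resulting accuracy is near chance.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))             # simulated stand-in: 200 subjects x 20 regional volumes
y = rng.integers(0, 2, size=200)           # simulated binary dependence labels

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean CV accuracy = {scores.mean():.2f}")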
abstract_id: PUBMED:24867528 Assessment of whole brain white matter integrity in youths and young adults with a family history of substance-use disorders. Individuals with a family history of substance use disorders (FH+) are at a greater risk of developing substance use disorders than their peers with no such family histories (FH-) and this vulnerability is proportional to the number of affected relatives (FH density). The risk for developing substance use disorders peaks during adolescence to early adulthood in the general population, and that is thought to be related to delayed maturation of frontocortical and frontostriatal functional circuits. We hypothesized that FH+ youth and young adults have impaired myelination of frontocortical and frontostriatal white matter tracts. We examined fractional anisotropy (FA) data in 80 FH+ and 34 FH- youths (12.9 ± 1.0 years) and in 25 FH+ and 30 FH- young adults (24.3 ± 3.4 years). FH+ youths had lower FA values in both frontocortical and frontostriatal tracts as well as parietocortical tracts including the anterior, superior and posterior corona radiata and the superior frontal-occipital fasciculus. Moreover, FA values in these tracts were negatively correlated with FH density. FH+ adults had lower FA values in two frontocortical tracts: the genu of the corpus callosum and anterior corona radiata and also significant negative correlations between FA and FH density in these same tracts. In both groups, lower FA values corresponded to higher radial diffusivity suggesting reduced axonal myelination. We interpreted our findings as evidence for impaired myelination of frontal white matter that was proportional to FH density. Our data suggest that deficits may partially resolve with age, paralleling an age-related decline in risk for developing substance use disorders. Answer: Yes, it does matter how we refer to individuals with substance-related conditions. A study found that the term used to describe individuals with such problems can influence the attitudes and judgments of even highly trained mental health professionals. When participants in the study were exposed to the term "substance abuser," they agreed more with the notion that the character in a vignette was personally culpable and that punitive measures should be taken, compared to when the term "having a substance use disorder" was used. This suggests that the term "substance abuser" may perpetuate stigmatizing attitudes (PUBMED:20005692).
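The randomized terminology study summarized in this record (PUBMED:20005692) reports the reliability of its questionnaire subscales as Cronbach's alpha values (.80, .86, .64). For readers unfamiliar with that statistic, the function below computes Cronbach's alpha from a respondents-by-items matrix; the three-item ratings are invented for illustration.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Rows are respondents, columns are Likert items of one subscale."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Invented ratings from 6 respondents on a 3-item subscale (1-7 Likert)
ratings = np.array([
    [5, 6, 5],
    [2, 3, 2],
    [6, 6, 7],
    [3, 4, 3],
    [7, 6, 6],
    [4, 4, 5],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")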
Instruction: Does Race Influence Health-related Quality of Life and Toxicity Following Proton Therapy for Prostate Cancer? Abstracts: abstract_id: PUBMED:24710124 Does Race Influence Health-related Quality of Life and Toxicity Following Proton Therapy for Prostate Cancer? Objective: This matched-paired analysis explores disparities in health-related quality of life (QOL) and common toxicities between African American (AA) and white patients following proton therapy for prostate cancer at our institution. Materials And Methods: A total of 1536 men with clinically localized prostate cancer were treated from 2006 to 2009 with definitive proton therapy to a median dose of 78 Gy +/- androgen deprivation therapy. A cohort of 92 consecutively treated AA men was matched to a cohort of 92 white men on the basis of National Comprehensive Cancer Network risk category and age. The 2 groups were compared with regard to comorbidities, demographics, and treatment regimen. Differences in genitourinary and gastrointestinal (GI) toxicity according to the Common Terminology Criteria for Adverse Events scale and QOL data from the Expanded Prostate Index Composite 26-question questionnaire were reported. Results: Median follow-up was 2.1 years. Baseline patient and treatment characteristics were similar between the 2 groups with the exception of prostate-specific antigen ≥10 (32% for AAs vs. 20% for whites; P=0.068) and use of androgen deprivation therapy (26% for AAs vs. 21% for whites; P=0.38). No difference in Expanded Prostate Index Composite 26-question sexual summary, urinary incontinence, urinary obstruction, or bowel summary scores was detected between the 2 groups, nor was there a difference in grade 2 or higher GI toxicity (P=0.45). AAs had a statistically nonsignificant higher absolute incidence of late grade 3 genitourinary toxicity (4.4% vs. 0%; P=0.12). Conclusions: After 2 years, there were no disparities in health-related QOL, physician-reported Common Terminology Criteria for Adverse Events GI toxicity, or biochemical relapse. Longer follow-up is needed to confirm these findings. abstract_id: PUBMED:32129447 Health-related quality of life in Japanese patients with prostate cancer following proton beam therapy: an institutional cohort study. Objective: Many treatment options have guaranteed long-term survival in patients with localized prostate cancer and health-related quality of life has become a greater concern for those patients. The purpose of this study was to reveal the health-related quality of life after proton beam therapy and to clarify the differences from other treatment modalities for prostate cancer. Methods: Between January 2011 and April 2016, 583 patients were enrolled in the study and health-related quality of life outcomes using the Expanded Prostate Cancer Index Composite questionnaire were evaluated and compared with previous research targeted at Japanese patients. Results: We found a significant decrease in the least square mean scores for urinary and bowel domains excluding the incontinence subscale after proton beam therapy (P < 0.0001) and recovery at a year following treatment. The scores for sexual function in patients without androgen deprivation therapy decreased each year after proton beam therapy (P < 0.0001).
The scores for hormones in patients without androgen deprivation therapy remained high and those of patients with androgen deprivation therapy were lower before treatment but were comparable to those of non-androgen deprivation therapy patients at 2 years post-treatment. We found that the impact of radiotherapy including proton beam therapy on urinary condition and sexual function was lower than that of surgery. Conclusions: For the first time in Japan, we investigated health-related quality of life using Expanded Prostate Cancer Index Composite questionnaires in patients with prostate cancer after proton beam therapy and compared it with other treatment modalities. abstract_id: PUBMED:30223877 Early toxicity and patient reported quality-of-life in patients receiving proton therapy for localized prostate cancer: a single institutional review of prospectively recorded outcomes. Background: We report prospectively captured clinical toxicity and patient reported outcomes in a single institutional cohort of patients treated for prostate cancer with proton beam therapy (PBT). This is the largest reported series of patients treated mostly with pencil beam scanning PBT. Methods: We reviewed 231 patients treated on an IRB approved institutional registry from 2013 to 2016; final analysis included 192 patients with > 1 year of follow-up. Toxicity incidence was prospectively captured and scored using CTCAE v4.0. International Prostate Symptoms Score (IPSS), Sexual Health Inventory for Men (SHIM) score, and Expanded Prostate Cancer Index Composite (EPIC) bowel domain questionnaires were collected at each visit. Univariate Cox regression was used to explore associations of grade 2+ toxicity with clinical, treatment, and dosimetric variables. Results: Median follow-up was 1.7 years. Grade 3 toxicity was seen in 5/192 patients. No grade 4 or 5 toxicity was seen. Patient reported quality-of-life showed no change in urinary function post-radiation by IPSS scores. Median SHIM scores declined by 3.7 points at 1-year post-treatment without further decrease beyond year 1. On univariate analysis, only younger age (HR = 0.61, p = 0.022) was associated with decreased sexual toxicity. EPIC bowel domain scores declined from 96 at baseline (median) by an average of 5.4 points at 1-year post-treatment (95% CI: 2.5-8.2 points, p < 0.001), with no further decrease over time. Bowel toxicity was mostly in the form of transient rectal bleeding and was associated with anticoagulation use (HR = 3.45, p = 0.002). Conclusions: Grade 3 or higher toxicity was rare at 2 years after treatment with PBT for localized prostate cancer. Longer follow-up is needed to further characterize late toxicity and biochemical control. Trial Registration: NCT, NCT01255748. Registered 1 January 2013. abstract_id: PUBMED:22253020 Erectile function, incontinence, and other quality of life outcomes following proton therapy for prostate cancer in men 60 years old and younger. Background: This study sought to evaluate patient-reported health-related quality of life following proton therapy for prostate cancer in men ≤ 60 years old. Methods: Between August 2006 and January 2010, 262 hormone-naive men ≤ 60 years old were treated with definitive proton therapy for prostate cancer. Before treatment and every 6 months after treatment, patients filled out the Expanded Prostate Index Composite (EPIC) and the International Index of Erectile Function (IIEF) questionnaires.
Potency was defined as successful sexual intercourse in the prior month or an EPIC sexual summary (SS) score ≥ 60. Results: Median follow-up was 24 months; 90% of men completed follow-up EPIC forms within the last year. For EPIC urinary, bowel, and hormone subscales, the average decline from baseline to 2 years was ≤5 points, except for bowel function (5.2 points). SS scores declined 12.6 points after 2 years. Potency rates declined by 11% from baseline at 2 years, but 94% of men were potent with a baseline IIEF > 21, body mass index < 30, and no history of diabetes. At 2 years after treatment, only 1.8% of men required a pad for urge incontinence. On multivariate analysis, factors associated with a significant decline in SS score were mean penile bulb dose ≥40 cobalt Gy equivalents (P = .012) and radiation dose ≥ 80 cobalt Gy equivalents (P = .017); only diabetes was significantly associated with impotence (P = .015). Conclusions: Young men undergoing proton therapy for treatment of prostate cancer have excellent outcomes with respect to erectile dysfunction, urinary incontinence, and other health-related quality of life parameters during the first 2 years after treatment. Longer follow-up is needed to confirm these findings. abstract_id: PUBMED:27084658 Five-Year Biochemical Results, Toxicity, and Patient-Reported Quality of Life After Delivery of Dose-Escalated Image Guided Proton Therapy for Prostate Cancer. Purpose: To report clinical outcomes in patients treated with image guided proton therapy (PT) for localized prostate cancer. Methods And Materials: The medical records of 1327 men were reviewed. Each man was enrolled on an outcomes tracking study. Dual enrollment on a prospective clinical trial was allowed. Each patient was treated for localized prostate cancer with PT at our institution between 2006 and 2010. Ninety-eight percent of patients received 78 Gy (radiobiological equivalent [RBE]) or higher; 18% received androgen deprivation therapy (ADT). The 5-year freedom from biochemical progression (FFBP), distant metastasis-free survival, and cause-specific survival rates are reported for each risk group. Data on patient-reported quality of life and high-grade toxicities were prospectively collected and reported. A multivariate analysis was performed to identify clinical predictors of biochemical failure and urologic toxicity. Results: The median follow-up time was 5.5 years. The 5-year FFBP rates were 99%, 94%, and 74% in low-risk, intermediate-risk, and high-risk patients, respectively. The actuarial 5-year rates of late grade 3+ Common Terminology Criteria for Adverse Events, version 4.0, gastrointestinal (GI) and genitourinary (GU) toxicity were 0.6% and 2.9%, respectively. Multivariate analysis showed a significant correlation between grade 3+ GU toxicity and pretreatment prostate reductive procedures (P<.0001), prostate volume (P=.0085), pretreatment α-blockers (P=.0067), diabetes (P=.0195), and dose-volume histogram parameters (P=.0208). The median International Prostate Symptom Scores at pretreatment and at 5 years after treatment were 7 and 7, respectively. The mean Expanded Prostate Cancer Index Composite (EPIC) scores significantly declined for sexual summary for patients not receiving ADT (from 67 to 53) between baseline and 5 years. Conclusions: Image guided PT provided excellent biochemical control rates for patients with localized prostate cancer. The actuarial rates of high-grade toxicity were low after PT.
From pretreatment to 5 years of follow-up, a significant decline was found only in mean EPIC sexual summary scores. Prospective clinical studies are needed to determine the comparative effectiveness of PT and other radiation treatment strategies. abstract_id: PUBMED:33471548 Focal Boost to the Intraprostatic Tumor in External Beam Radiotherapy for Patients With Localized Prostate Cancer: Results From the FLAME Randomized Phase III Trial. Purpose: This study investigates whether focal boosting of the macroscopic visible tumor with external beam radiotherapy increases biochemical disease-free survival (bDFS) in patients with localized prostate cancer. Patients And Methods: In the phase III, multicenter, randomized controlled Focal Lesion Ablative Microboost in Prostate Cancer trial, 571 patients with intermediate- and high-risk prostate cancer were enrolled between 2009 and 2015. Patients assigned to standard treatment received 77 Gy (fractions of 2.2 Gy) to the entire prostate. The focal boost arm received an additional simultaneous integrated focal boost up to 95 Gy (fractions up to 2.7 Gy) to the intraprostatic lesion visible on multiparametric magnetic resonance imaging. Organ at risk constraints were prioritized over the focal boost dose. The primary end point was 5-year bDFS. Secondary end points were disease-free survival (DFS), distant metastases-free survival, prostate cancer-specific survival, overall survival, toxicity, and health-related quality of life. Results: Median follow-up was 72 months. Biochemical DFS was significantly higher in the focal boost compared with the standard arm (hazard ratio 0.45, 95% CI, 0.28 to 0.71, P < .001). At 5-year follow-up, bDFS was 92% and 85%, respectively. We did not observe differences in prostate cancer-specific survival (P = .49) or overall survival (P = .50). The cumulative incidence of late genitourinary and GI toxicity grade ≥ 2 was 23% and 12% in the standard arm versus 28% and 13% in the focal boost arm, respectively. For both late toxicity and health-related quality of life, differences were small and not statistically significant. Conclusion: The addition of a focal boost to the intraprostatic lesion improved bDFS for patients with localized intermediate- and high-risk prostate cancer without impacting toxicity and quality of life. The Focal Lesion Ablative Microboost in Prostate Cancer study shows that a high focal boost strategy to improve tumor control while respecting organ at risk dose constraints is effective and safe. abstract_id: PUBMED:35158785 Patient-Reported Quality of Life Outcomes after Moderately Hypofractionated and Normofractionated Proton Therapy for Localized Prostate Cancer. We retrospectively evaluated the three-year patient-reported quality of life (QOL) after moderately hypofractionated proton therapy (MHPT) for localized prostate cancer in comparison with that after normofractionated PT (NFPT) using the Expanded Prostate Cancer Index Composite-50. Patients who received MHPT (60-63 Gy (relative biological effectiveness equivalents; RBE)/20-21 fractions) (n = 343) or NFPT (74-78 Gy (RBE)/37-39 fractions) (n = 296) between 2013 and 2016 were analyzed. The minimum clinically important difference (MCID) threshold was defined as one-half of a standard deviation of the baseline value. The median follow-up was 56 months and 83% completed questionnaires at 36 months.
Clinically meaningful score deterioration was observed in the urinary domain at 1 month in both groups and in the sexual domain at 6-36 months in the NFPT group, but not observed in the bowel domain. At 36 months, the mean score change for urinary summary was -0.3 (MHPT) and -1.6 points (NFPT), and that for bowel summary was +0.1 and -2.0 points; the proportion of patients with MCID was 21% and 24% for urinary summary and 18% and 29% for bowel summary. Overall, MHPT had small negative impacts on QOL over three years, and the QOL after MHPT and NFPT was similar. abstract_id: PUBMED:28463150 External Beam Radiation Therapy or Brachytherapy With or Without Short-course Neoadjuvant Androgen Deprivation Therapy: Results of a Multicenter, Prospective Study of Quality of Life. Purpose: The long-term effects of neoadjuvant androgen deprivation therapy (NADT) with radiation therapy on participant-reported health-related quality of life (HRQOL) have not been characterized in prospective multicenter studies. We evaluated HRQOL for 2 years among participants undergoing radiation therapy (RT) with or without NADT for newly diagnosed, early-stage prostate cancer. Methods And Materials: We analyzed longitudinal cohort data from the Prostate Cancer Outcomes and Satisfaction with Treatment Quality Assessment Consortium to ascertain the HRQOL trajectory of men receiving NADT with external beam RT (EBRT) or brachytherapy. HRQOL was measured using the expanded prostate cancer index composite 26-item questionnaire at 2, 6, 12, and 24 months after the initiation of NADT. We used the χ2 or Fisher exact test to compare the shift in percentages between groups that did or did not receive NADT. Analyses were conducted at the 2-sided 5% significance level. Results: For subjects receiving EBRT, questions regarding the ability to have an erection, ability to reach an orgasm, quality of erections, frequency of erections, ability to function sexually, and lack of energy were in a significantly worse dichotomized category for the patients receiving NADT. Comparing the baseline versus 24-month outcomes, 24%, 23%, and 30% of participants receiving EBRT plus NADT shifted to the worse dichotomized category for the ability to reach an orgasm, quality of erections, and ability to function sexually compared with 14%, 13%, and 16% in the EBRT group, respectively. Conclusions: Compared with baseline, at 2 years, participants receiving NADT plus EBRT compared with EBRT alone had worse HRQOL, as measured by the ability to reach orgasm, quality of erections, and ability to function sexually. However, no difference was found in the ability to have an erection, frequency of erections, overall sexual function, hot flashes, breast tenderness/enlargement, depression, lack of energy, or change in body weight. The improved survival in intermediate- and high-risk patients receiving NADT and EBRT necessitates pretreatment counseling of the HRQOL effect of NADT and EBRT. abstract_id: PUBMED:29642619 Proton Beam Therapy Alone for Intermediate- or High-Risk Prostate Cancer: An Institutional Prospective Cohort Study. The role of proton beam therapy (PBT) as monotherapy for localized prostate cancer (PCa) remains unclear. The purpose of this study was to evaluate the efficacy and adverse events of PBT alone for these patients. 
Between January 2011 and July 2014, 218 patients with intermediate- and high-risk PCa who declined androgen deprivation therapy (ADT) were enrolled in the study and were treated with PBT according to one of the following protocols: 74 Gray (GyE) with 37 fractions (fr) (74 GyE/37 fr), 78 GyE/39 fr, and 70 GyE/28 fr. The 5-year progression-free survival rate in the intermediate- and high-risk groups was 97% and 83%, respectively (p = 0.002). The rate of grade 2 or higher late gastrointestinal toxicity was 3.9%, and a significantly increased incidence was noted in those who received the 78 GyE/39 fr protocol (p < 0.05). Grade 2 or higher acute and late genitourinary toxicities were observed in 23.5% and 3.4% of patients, respectively. Our results indicated that PBT monotherapy can be a beneficial treatment for localized PCa. Furthermore, it can preserve the quality of life of these patients. We believe that this study provides crucial hypotheses for further study and for establishing new treatment strategies. abstract_id: PUBMED:26883560 Proton Therapy as Salvage Treatment for Local Relapse of Prostate Cancer Following Cryosurgery or High-Intensity Focused Ultrasound. Purpose: Local recurrence of prostate cancer after cryosurgery (CS) and high-intensity focused ultrasound (HIFU) is an emerging problem for which optimal management is unknown. Proton therapy (PT) may offer advantages over other local therapeutic options. This article reviews a single institution's experience using PT for salvage of local recurrent disease after HIFU or CS. Methods And Materials: We reviewed the medical records of 21 consecutive patients treated with salvage PT following a local recurrence of prostate cancer after CS (n=12) or HIFU (n=9) between January 2007 and July 2014. Patients were treated to a median dose of 74 Gy (relative biological effectiveness [RBE]; range: 74-82 Gy [RBE]) and 8 patients received androgen deprivation therapy with radiation therapy. Patients were evaluated for quality of life (QOL) by using the Expanded Prostate Index Composite questionnaire and toxicity by using Common Terminology Criteria for Adverse Events, version 3.0, weekly during treatment, every 6 months for 2 years after treatment, and then annually. Results: Median follow-up was 37 months (range: 6-95 months). The 3-year biochemical progression-free survival (bPFS) rate was 77%. The 3-year grade 3 toxicity rate was 17%; however, 2 of these patients had pre-existing grade 3 GU toxicities from their HIFU/CRYO prior to PT. At 1 year, bowel summary, urinary incontinence, and urinary obstructive QOL scores declined, but only the bowel QOL score at 12 months met the minimally important difference threshold. Conclusions: PT achieved a high rate of bPFS with acceptable toxicity and minimal changes in QOL scores compared with baseline pre-PT functions. Although most patients have done fairly well, the study size is small, follow-up is short, and early results suggest that outcomes with PT for salvage after HIFU or CS failure are inferior to outcomes with PT given in the de novo setting with respect to disease control, toxicity, and QOL. Answer: The study presented in abstract PUBMED:24710124 investigated whether there were disparities in health-related quality of life (QOL) and common toxicities between African American (AA) and white patients following proton therapy for prostate cancer.
The results of this matched-pair analysis, which included a cohort of 92 AA men matched to 92 white men, showed that after a median follow-up of 2.1 years, there were no disparities in health-related QOL, physician-reported Common Terminology Criteria for Adverse Events gastrointestinal (GI) toxicity, or biochemical relapse between the two groups. Although African Americans had a higher, but not statistically significant, absolute incidence of late grade 3 genitourinary toxicity compared to white patients (4.4% vs. 0%; P=0.12), the study concluded that there were no disparities in the outcomes measured after 2 years. Longer follow-up was suggested to confirm these findings. Therefore, based on the findings from this study, race does not appear to influence health-related quality of life and toxicity following proton therapy for prostate cancer after 2 years of follow-up. However, it is important to note that the study calls for longer-term follow-up to fully confirm the absence of disparities over a more extended period.
Instruction: Does the usual dietary intake of patients with depression require vitamin-mineral supplementation? Abstracts: abstract_id: PUBMED:24946436 Does the usual dietary intake of patients with depression require vitamin-mineral supplementation? Introduction: Research on diet and nutrition of patients with depression shows that their eating habits are frequently irrational and result in the inconsistent supply of nutrients, especially vitamins and minerals, the deficiency of which leads to nervous system dysfunction. Aim: The aim of the study was to evaluate the content of selected vitamins and minerals in daily food rations of patients suffering from recurrent depressive disorders. Methods: The study involved a group of 69 people (54 women and 15 men, aged 18-65 years, mean age of women 45.7 +/- 12.2 years, men 46.0 +/- 12.2 years), treated for recurrent depressive disorders. A questionnaire designed in the Department of Dietetics and Clinical Nutrition, Medical University of Bialystok was used to collect dietary data. The quantitative assessment of eating habits used a 24-hour diet recall including 3 weekdays and 1 weekend day. Results: The study showed that the supply of most nutrients assessed was inconsistent with recommendations. Conclusions: The results indicate that the need for vitamin-mineral supplementation should be considered individually. Nutritional education related to the proper choice of groups of food products is indicated at the time of clinical improvement to ensure the optimum supply of vitamins and minerals. abstract_id: PUBMED:25780558 A practical approach to vitamin and mineral supplementation in food allergic children. Background: The management of food allergy in children requires elimination of the offending allergens, which significantly contribute to micronutrient intake. Vitamin and mineral supplementation is commonly suggested as part of dietary management. However, a targeted supplementation regime requires a complete nutritional assessment, which includes food diaries. Ideally these should be analysed using a computerised program, but are very time consuming. We therefore set out to evaluate current practice of vitamin and mineral supplementation in a cohort of children with non-Immunoglobulin E (IgE) mediated food allergies. Methods: This prospective, observational study recruited children aged 4 weeks to 16 years who were required to follow an elimination diet for non-IgE mediated allergies. Only children that improved according to a symptom score and were on a vitamin and/or mineral supplement were included. A 3-day food diary including vitamin and mineral supplementation was recorded and analysed using the Dietplan computer program. We assessed dietary adequacy with/without the supplement using the Dietary Reference Values. Results: One hundred and ten children completed food diaries, and of these 29% (32/110) were taking vitamin and/or mineral supplements. Children on hypoallergenic formulas were significantly (p = 0.007) less likely to be on supplements than those on alternative over-the-counter milks. Seventy-one percent had prescribable supplements, suggested by a dietitian/physician. Sixty percent of those without a vitamin supplement had a low vitamin D intake, but low zinc, calcium and selenium intakes were also common. Of the supplemented cohort, many continued to be either under or over-supplemented.
Conclusion: This study has raised, for the first time, the question of whether clinicians dealing with paediatric food allergies should consider routine vitamin and/or mineral supplements, in the light of deficient intake being so common in addition to being so difficult to predict. abstract_id: PUBMED:28208814 Adequacy of Usual Vitamin and Mineral Intake in Spanish Children and Adolescents: ENALIA Study. Background: The National Dietary Survey on the Child and Adolescent Population in Spain (ENALIA) provides data to assess the usual micronutrient intake among Spanish infants, children, and adolescents. Methods: Cross-sectional survey (November 2012-July 2014) of a representative sample of Spanish children and adolescents (six months-17 years) (n = 1862). Dietary information was collected using two non-consecutive one-day food diaries (six months-10 years old) or two 24 h dietary recalls (11 years and older) separated by at least 14 days. Estimates were calculated using the Iowa State University method and PC-SIDE software (version 1.0, Department of Statistics, Center for Agricultural and Rural Development, Ames, IA, USA) to account for within- and between-person variation. Results: Usual intake of vitamin D was insufficient in practically all individuals. Vitamin E, folate, and calcium were insufficient, especially from nine years of age, and magnesium and iodine from 14 years of age. The percentage of subjects with insufficient intakes was higher among females. Sodium intake was excessive in a considerable percentage of the population, especially in males, and it increased with age. Finally, over half of children under four years of age had zinc usual intakes that exceeded the Tolerable Upper Level. Conclusion: Vitamin and mineral intake in Spain should be improved, especially in late childhood and adolescence. Nutritional intervention and educational strategies are needed to promote healthy eating habits and correct micronutrient inadequacies in Spanish children and adolescents. abstract_id: PUBMED:35548582 Associations of Dietary Vitamin A and Beta-Carotene Intake With Depression. A Meta-Analysis of Observational Studies. Objective: To clarify the associations of dietary vitamin A and beta-carotene intake with depression based on a meta-analysis of observational studies. Methods: An extensive literature search in February 2022 (PubMed, Web of Science and Embase) was employed to identify observational studies on the associations of dietary vitamin A and beta-carotene intake with depression. The pooled relative risk (RR) of depression for the highest vs. lowest dietary vitamin A and beta-carotene intake category, and the standard mean difference (SMD) of dietary vitamin A and beta-carotene intake for depression vs. control subjects, were calculated. Results: A total of 25 observational studies (100,955 participants), which included 24 cross-sectional/case-control and 1 prospective cohort study, were included in this study. The overall multi-variable adjusted RR demonstrated that dietary vitamin A intake was inversely associated with depression (RR = 0.83, 95%CI: 0.70-1.00; P = 0.05). In addition, the combined SMD showed that the dietary vitamin A intake in depression was also lower than that in control subjects (SMD = -0.13, 95%CI: -0.18 to -0.07; P < 0.001). On the other hand, the overall multi-variable adjusted RR indicated that dietary beta-carotene intake was negatively associated with depression (RR = 0.63, 95%CI: 0.55-0.72; P < 0.001).
The combined SMD showed that the dietary beta-carotene intake in depression was also lower than that in control subjects (SMD = -0.34, 95%CI: -0.48 to -0.20; P < 0.001). Conclusion: Our results suggest that both dietary vitamin A and beta-carotene intake is inversely associated with depression. However, due to the limited evidence, further prospective cohort studies are still needed. abstract_id: PUBMED:34345960 The Need and Safety of Mineral Supplementation in Adults with Obesity Post Bariatric Surgery-Sleeve Gastrectomy (SG). Purpose: Most of the research indicated that daily dietary intake of minerals in SG patients was lower than the current recommendations. The aim of the study was to assess the need and safety of a mineral supplementation practice in adults with obesity, at 3, 6, and 9 months post bariatric surgery-sleeve gastrectomy (SG). Methods: The study included 24 women and 6 men. Based on a 4-day food record questionnaire, mineral and calorie intake was calculated at 3, 6, and 9 months after bariatric surgery (SG). Furthermore, an interview on supplement intake was also conducted. Results: It was found that in both men and women, there was a dietary intake deficiency of calcium (97% of respondents), potassium (97%), magnesium (83%), sodium (60%), and zinc (53%). In women, the deficiencies also included iron (50%) and copper (29%). Only 72% of the patients took dietary supplements. The applied supplementation did not cover the required intake of calcium in all of the patients, nor the required intake of magnesium in the male patients. Low intakes of sodium and potassium were not supplemented and should be corrected by diet modification. The patients did not require supplementation of phosphorus or manganese, while male patients did not require iron or copper supplementation. The dietary and/or supplemental intake of minerals did not exceed the tolerable upper intake level (UL). Conclusion: The results of the study confirm the need to implement personalized mineral supplementation for bariatric surgery patients. abstract_id: PUBMED:33969143 Evaluation of the Effect of Vitamin D Supplementation on Anthropometric Indicators and Dietary Intake of Patients with Type 2 Diabetes. Background: Various studies have shown that diabetes and its complications are associated with vitamin D deficiency. Due to the possible role of vitamin D in reducing the complications of diabetes and the high prevalence of its deficiency in Iran, this study was designed to investigate the effect of vitamin D supplementation on anthropometric indices and dietary intake of patients with type 2 diabetes. Methods: This randomized clinical trial (RCT) study was performed on 74 patients with type 2 diabetes (T2DM). Patients were randomly divided into two groups to receive vitamin D (VD) supplementation (100 μg or 4000 IU/day) or placebo for three months; randomization was based on the permuted-block method. Anthropometric indices, including body weight (BW), body mass index (BMI), and waist circumference (WC), as well as physical activity and dietary intake, were assessed by validated methods at the beginning and end of the trial. Results: VD supplementation did not produce any significant differences in anthropometric indices, dietary intake, or physical activity between the two groups. Conclusion: Finally, it can be concluded that receiving 100 micrograms/day of VD for three months had no favourable effects on patients with T2DM. abstract_id: PUBMED:35464032 Associations of Dietary Vitamin C and E Intake With Depression.
A Meta-Analysis of Observational Studies. Objective: The associations of dietary vitamin C and E intake with depression remain conflicting. This meta-analysis of observational studies was therefore performed to clarify the issue further. Methods: An extensive literature review (PubMed, Web of Science and Embase) was performed in January 2022 to identify the observational studies on the associations of dietary vitamin C and E intake with depression. The pooled relative risk (RR) of depression for the highest versus lowest dietary vitamin C and E intake category, and the weighted mean difference (WMD) of dietary vitamin C and E intake for depression versus control subjects, were calculated. Results: A total of 25 observational studies (91966 participants) were included in this meta-analysis. The overall multi-variable adjusted RR demonstrated that dietary vitamin C intake was inversely associated with depression (RR = 0.72, 95% CI: 0.57 to 0.91; P = 0.005). In addition, the combined WMD showed that the dietary vitamin C intake in depression was lower than that in control subjects (WMD = -11.58, 95% CI: -14.88 to -8.29; P < 0.001). Similarly, the overall multi-variable adjusted RR demonstrated that dietary vitamin E intake was negatively associated with depression (RR = 0.84, 95% CI: 0.72 to 0.98; P = 0.02). Moreover, the combined WMD showed that the dietary vitamin E intake in depression was also lower than that in control subjects (WMD = -0.71, 95% CI: -1.07 to -0.34; P < 0.001). Conclusion: The results of this meta-analysis suggest that both dietary vitamin C and E intake is inversely associated with depression. However, due to the limited evidence, more well-designed prospective cohort studies are still needed. abstract_id: PUBMED:28531460 Dietary vitamin E and fat intake are related to Beck's depression score. Objective: There is a high prevalence of depressive disorders in all regions of the world. The importance of dietary factors in the causation of depression is suggested from epidemiologic studies in Western countries, but evidence from non-Western populations is lacking. We aimed to assess the relationship between dietary factors and depression scores in a cohort from north eastern Iran. Methods: A total of 7172 subjects (2725 men and 4447 women) were recruited. Dietary intake was assessed using a 24-h dietary recall questionnaire, and depressive symptoms were assessed using the Beck's depression questionnaire. Results: The mean age was 49.3 ± 8.2 years for the male and 48.1 ± 8.0 years for the female subgroups. Crude intake of MUFA, SFA and TFA in patients was associated with depression scores. On the other hand, there were significant correlations between depression score and total energy adjusted intake of trans-fatty acid (TFA), cholesterol, and vitamin E (p < 0.01 for all parameters). Conclusion: There was an association between diet and depression score among a representative sample of individuals from north eastern Iran, with MUFA intake being inversely related, and vitamin E intake being directly related to Beck's depression score. However, it cannot be determined whether this is related to the causation of depression in this cross-sectional study. abstract_id: PUBMED:35956259 Adequate Dietary Intake and Vitamin D Supplementation: A Study of Their Relative Importance in Determining Serum Vitamin D and Ferritin Concentrations during Pregnancy. Vitamin D is essential for human health.
However, it is not clear if vitamin D supplementation is necessary for all pregnant women. This study examines the relative importance of dietary patterns and vitamin D supplementation frequency in determining serum 25-hydroxyvitamin D (25(OH)D) and ferritin concentrations among pregnant women in Hong Kong, China. A total of 572 healthy women were recruited from antenatal clinics at 25-35 weeks of pregnancy. Participants completed an electronic version of the food frequency questionnaire and a web questionnaire on supplement use. Their blood samples were tested for serum 25(OH)D and ferritin. The associations of dietary patterns and vitamin D supplementation frequency with serum 25(OH)D and ferritin concentrations were analyzed using moderated hierarchical regression. Two dietary patterns were identified. The adequate dietary intake was characterized by the high probability of meeting recommended daily food group servings, whereas the inadequate dietary intake was characterized by inadequate consumption of vegetables, fruits, meat, fish, and eggs, or alternatives. The association between adequate dietary intake and serum ferritin concentrations was independent of vitamin D supplementation frequency (β = 0.05, p = 0.035), but dietary patterns interacted with vitamin D supplementation frequency to determine serum 25(OH)D concentrations (β = -13.22, p = 0.014). The current study presents evidence on the relative importance of dietary patterns and vitamin D supplementation in maintaining sufficient vitamin D and iron in pregnancy. Antenatal nutrition counselling services should be provided to pregnant women who show signs of inadequate dietary intake. abstract_id: PUBMED:31942873 Vitamin D - dietary intake, supplementation and metabolic status of Polish adults. Objectives: Contrary to popular opinion on the preventive properties of vitamin D, results of previous studies have been inconclusive. The aim of this research was to evaluate the associations between the intake of vitamin D and metabolic abnormalities in a representative sample of Polish adults. Material And Methods: Within the framework of the Multi-Center National Population Health Examination Survey (referred to as WOBASZ), a random sample of 2381 adult residents of Poland (53.8% of whom were women) was examined. All the study subjects were extensively reviewed, including 24-h dietary recall. The intake of vitamin D was assessed on the basis of dietary and supplement reviews. Metabolic abnormalities were evaluated using measurements of waist circumference (WC), blood pressure (BP), serum triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and fasting glycemia. Metabolic syndrome was defined according to the International Diabetes Federation. Results: Of all the study participants, about 4.4% of women and 2.6% of men declared a regular supplementation of vitamin D. Among women, a significant inverse correlation between vitamin D supplementation and the mean systolic BP was found (p < 0.01). A more substantial relationship was noted after dividing the study subjects according to their body mass. Among obese men, there was a significant inverse relationship between vitamin D intake and the mean systolic BP (p < 0.01) and diastolic BP (p < 0.05), as well as a positive correlation with HDL-C (p < 0.05). Among obese women, a negative correlation was found between vitamin D supplementation and the mean systolic BP (p < 0.01) and diastolic BP (p < 0.05), and a positive correlation with TG (p < 0.05).
Among non-obese male subjects, a negative correlation between vitamin D intake and WC was observed. Conclusions: The obtained findings suggest that the correlation between vitamin D intake and metabolic abnormalities may depend on the obesity status. A higher vitamin D intake may reduce BP and increase HDL-C in obese subjects. The positive relationship between vitamin D intake and TG concentration in women needs further investigation. Int J Occup Med Environ Health. 2020;33(1):107-18. Answer: The usual dietary intake of patients with depression has been found to be frequently irrational and results in an inconsistent supply of nutrients, particularly vitamins and minerals, which are essential for nervous system function. A study evaluating the content of selected vitamins and minerals in daily food rations of patients with recurrent depressive disorders found that the supply of most assessed nutrients was inconsistent with recommendations (PUBMED:24946436). This suggests that there may be a need for vitamin-mineral supplementation in these patients, but it should be considered individually, and nutritional education is also indicated to ensure the optimum supply of vitamins and minerals. Additionally, a meta-analysis of observational studies found that dietary vitamin A and beta-carotene intake is inversely associated with depression, indicating that higher intakes of these nutrients may be associated with a lower risk of depression (PUBMED:35548582). Similarly, another meta-analysis suggested that dietary vitamin C and E intake is inversely associated with depression (PUBMED:35464032). These findings further support the potential need for vitamin supplementation in patients with depression. However, it is important to note that while supplementation may be beneficial, it should be personalized and carefully managed to avoid over-supplementation, as seen in a cohort of children with food allergies where many continued to be either under or over-supplemented despite taking vitamin and/or mineral supplements (PUBMED:25780558). Moreover, the need and safety of mineral supplementation have been highlighted in specific populations, such as adults with obesity post bariatric surgery, where personalized mineral supplementation was deemed necessary (PUBMED:34345960). In conclusion, the evidence suggests that patients with depression may benefit from vitamin-mineral supplementation due to often inadequate dietary intake, but supplementation should be individualized and carefully monitored to ensure safety and effectiveness.
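[Editorial aside, not part of the source abstracts] The two meta-analyses cited in this block (PUBMED:35548582 and PUBMED:35464032) report pooled relative risks for the highest versus lowest intake categories. A minimal sketch of the fixed-effect, inverse-variance pooling that typically underlies such estimates is given below; the per-study numbers are hypothetical placeholders, and the published analyses may well have used a different (e.g., random-effects) model.

```python
import math

# Hypothetical per-study estimates: (relative risk, lower 95% CI, upper 95% CI).
# These values are illustrative placeholders, not data from the cited meta-analyses.
studies = [(0.70, 0.50, 0.98), (0.85, 0.65, 1.11), (0.78, 0.60, 1.01)]

log_rrs, weights = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI width
    log_rrs.append(log_rr)
    weights.append(1 / se**2)                        # inverse-variance weight

pooled_log_rr = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"Pooled RR = {pooled_rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

A random-effects model (e.g., DerSimonian-Laird) would additionally add an estimated between-study variance to each study's variance before weighting.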
Instruction: Is beta-adrenergic-mediated airway relaxation of salmeterol antagonized by its solvent xinafoic acid? Abstracts: abstract_id: PUBMED:10378568 Is beta-adrenergic-mediated airway relaxation of salmeterol antagonized by its solvent xinafoic acid? Study Objective: Isolated case reports of asthmatic fatalities accompanied by the use of salmeterol have raised the question whether a paradoxical effect of salmeterol or its vehicle on the airways might contribute to these fatalities. We questioned whether salmeterol's solvent, xinafoic acid, has detrimental effects on the tone of airways or on beta-adrenoceptor binding. Materials And Methods: Basenji-greyhound dogs were anesthetized and their peripheral airways challenged with xinafoic acid via a wedged bronchoscope technique. Radioligand binding assays were performed in lung membranes prepared from these dogs. Results: In contrast to a methacholine control, xinafoic acid (0.001 to 1.0 mg/mL) aerosolized into the peripheral airways of anesthetized dogs did not increase airway resistance. Xinafoate alone had no significant effect on the specific binding of 125I-cyanopindolol to lung membranes and did not affect the affinity of salmeterol for the beta-adrenoceptor in the absence or presence of xinafoate, respectively (-log concentration that inhibits 50% [IC50] of the high-affinity site, 7.7+/-0.15 and 7.9+/-0.27; -log IC50 of the low-affinity site = 5.6+/-0.44 and 5.3+/-0.28 [n = 4]). Conclusion: These findings suggest that xinafoic acid, the solvent for salmeterol, does not have direct airway irritant effects, does not bind to beta-adrenoceptors, and does not impair the binding of salmeterol to beta-adrenoceptors. Thus, xinafoate is unlikely to contribute to the worsening of airway symptoms in asthmatics using salmeterol xinafoate. abstract_id: PUBMED:8882590 Extent of salmeterol-mediated reassertion of relaxation in guinea-pig trachea pretreated with aliphatic side chain structural analogues. 1. Salmeterol is a potent, selective and long acting beta 2-adrenoceptor agonist. In vitro, salmeterol exerts 'reassertion' relaxation of airways smooth muscle. Reassertion relaxation refers to the capacity of salmeterol to cause repeated functional antagonism of induced contraction when airway smooth muscle is intermittently exposed to, then washed free from, beta-adrenoceptor antagonists such as sotalol. The mechanism(s) underlying reassertion relaxation are unknown but may relate to high affinity binding of the long aliphatic side chain of salmeterol to an accessory site, distinct from the agonist recognition site, in or near the beta 2-adrenoceptor (exosite binding hypothesis). 2. In order to test the exosite hypothesis, three pure analogues of salmeterol, each exactly preserving the molecular structure of the aliphatic side chain but with zero or low efficacy at the beta 2-adrenoceptor were synthesized. The effect of pre-incubating guinea-pig tracheal smooth muscle with these analogues on salmeterol-induced reassertion relaxation was determined. 3. Computer Assisted Molecular Modelling of these molecules revealed that each of them exactly preserved the low energy linear conformation of the aliphatic side chain of salmeterol. Measurement of lipophilicity (octanol:water partition coefficient; log P) and direct partition into synthetic membranes (membrane partition coefficient; Kpmem) showed that all compounds had high affinity for lipids and membranes. 
In particular the biophysical properties of CGP 59162 (log P 1.89, Kpmem 16500) were very similar to salmeterol (log P 1.73, Kpmem 16800). 4. Two of the analogues, CGP 54103 and D 2543 (1 microM), which are structural mimics of the side chain of salmeterol, differing slightly in their length, did not prevent either the initial relaxation induced by salmeterol (0.1 microM) or the reassertion relaxation; however, it was not possible to determine whether either of these molecules occupied the beta 2-adrenoceptor. 5. The third analogue, CGP 59162, which has the substituents on the active saligenin head group of salmeterol in transposed positions, itself exerted a weak beta 2-adrenoceptor-mediated relaxation antagonized by ICI 118551 (beta 2-selective antagonist) but not CGP 20712 (beta 1-selective antagonist) and, at higher concentrations CGP 59162 caused reassertion relaxation suggesting that it may occupy and activate the beta 2-adrenoceptor in a manner analogous to salmeterol. 6. CGP 59162, at concentrations up to ten fold molar excess, did not prevent or reduce salmeterol-induced reassertion relaxation. 7. In conclusion these data are not consistent with the existence of a distinct 'exosite' recognising the aliphatic side chain of salmeterol mediating reassertion. abstract_id: PUBMED:19201814 Modulation of epidermal growth factor receptor binding to human airway smooth muscle cells by glucocorticoids and beta2-adrenergic receptor agonists. EGF receptors (EGFRs) are increased in airway smooth muscle in asthma, which may contribute to both their hyperproliferation and hypercontractility. Lysophosphatidic acid (LPA) is a candidate pathological agent in asthma and other airway diseases, and LPA upregulates EGFRs in human airway smooth muscle (HASM) cells. We tested whether therapeutic glucocorticoids and/or beta(2)-adrenergic receptor (beta(2)AR) agonists also alter EGFR binding in HASM cells. Exposure to glucocorticoids for 24 h induced a twofold increase in EGFR binding similar to that with LPA; fluticasone was markedly more potent than dexamethasone. The increase in EGFR binding by glucocorticoids required 24-h exposure, consistent with transcription-mediated effects. Although the increase in EGFR binding was blocked by the protein synthesis inhibitor cycloheximide for LPA, fluticasone, and dexamethasone, only LPA induced a significant increase in EGFR protein expression detected by immunoblotting. In contrast to the increased binding induced by the glucocorticoids, the beta(2)AR agonists isoproterenol, albuterol, and salmeterol all induced a decrease in EGFR binding. beta(2)AR agonist effects were multiphasic, with an initial decline at 2-4 h that reversed by 6 h and a second, somewhat greater decrease by 18-24 h. In cells pretreated with glucocorticoids, the decreases in EGFR binding by subsequent beta(2)AR treatment were not statistically significant; glucocorticoid upregulation of EGFRs also prevented further increases by LPA. Similar increases by glucocorticoids and decreases by beta(2)AR agonists were found in HFL-1 human lung fibroblasts. These complex and opposing effects of clinically relevant glucocorticoids and beta(2)AR agonists on airway mesenchymal cell EGFRs likely contribute to their overall therapeutic profile in the diseased airway. abstract_id: PUBMED:25799096 ASM-024, a piperazinium compound, promotes the in vitro relaxation of β2-adrenoreceptor desensitized tracheas. 
Inhaled β2-adrenoreceptor agonists are widely used in asthma and chronic obstructive pulmonary disease (COPD) for bronchoconstriction relief. β2-Adrenoreceptor agonists relax airway smooth muscle cells via cyclic adenosine monophosphate (cAMP) mediated pathways. However, prolonged stimulation induces functional desensitization of the β2-adrenoreceptors (β2-AR), potentially leading to reduced clinical efficacy with chronic or prolonged administration. ASM-024, a small synthetic molecule in clinical stage development, has shown activity at the level of nicotinic receptors and possibly at the muscarinic level and presents anti-inflammatory and bronchodilator properties. Aerosolized ASM-024 reduces airway resistance in mice and promotes in-vitro relaxation of tracheal and bronchial preparations from animal and human tissues. ASM-024 increased in vitro relaxation response to maximally effective concentration of short-acting beta-2 agonists in dog and human bronchi. Although the precise mechanisms by which ASM-024 promotes airway smooth muscle (ASM) relaxation remain unclear, we hypothesized that ASM-024 will attenuate and/or abrogate agonist-induced contraction and remain effective despite β2-AR tachyphylaxis. β2-AR tachyphylaxis was induced with salbutamol, salmeterol and formoterol on guinea pig tracheas. The addition of ASM-024 relaxed concentration-dependently intact or β2-AR desensitized tracheal rings precontracted with methacholine. ASM-024 did not induce any elevation of intracellular cAMP in isolated smooth muscle cells; moreover, blockade of the cAMP pathway with an adenylate cyclase inhibitor had no significant effect on ASM-024-induced guinea pig trachea relaxation. Collectively, these findings show that ASM-024 elicits relaxation of β2-AR desensitized tracheal preparations and suggest that ASM-024 mediates smooth muscle relaxation through a different target and signaling pathway than β2-adrenergic receptor agonists. These findings suggest ASM-024 could potentially provide clinical benefit when used adjunctively with inhaled β2-adrenoreceptor agonists in those patients exhibiting a reduced response to their chronic use. abstract_id: PUBMED:9056046 Evaluation of partial beta-adrenoceptor agonist activity. A partial beta-adrenoceptor (beta-AR) agonist will exhibit opposite agonist and antagonist activity depending on the prevailing degree of adrenergic tone or the presence of a beta-AR agonist with higher intrinsic activity. In vivo partial beta-AR agonist activity will be evident at rest with low endogenous adrenergic tone, as for example with chronotropicity (beta 1/beta 2), inotropicity (beta 1) or peripheral vasodilatation and finger tremor (beta 2). beta-AR blocking drugs which have partial agonist activity may exhibit a better therapeutic profile when used for hypertension because of maintained cardiac output without increased systemic vascular resistance, along with an improved lipid profile. In the presence of raised endogenous adrenergic tone such as exercise or an exogenous full agonist, beta-AR subtype antagonist activity will become evident in terms of effects on exercise induced heart rate (beta 1) and potassium (beta 2) responses. Reduction of exercise heart rate will occur to a lesser degree in the case of a beta-adrenoceptor blocker with partial beta 1-AR agonist activity compared with a beta-adrenoceptor blocker devoid of partial agonist activity. 
This may result in reduced therapeutic efficacy in the treatment of angina on effort when using beta-AR blocking drugs with partial beta 1-AR agonist activity. Effects on exercise hyperkalaemia are determined by the balance between beta 2-AR partial agonist activity and endogenous adrenergic activity. For predominantly beta 2-AR agonist such as salmeterol and salbutamol, potentiation of exercise hyperkalaemia occurs. For predominantly beta 2-AR antagonists such as carteolol, either potentiation or attenuation of exercise hyperkalaemia occurs at low and high doses respectively. beta 2-AR partial agonist activity may also be expressed as antagonism in the presence of an exogenous full agonist, as for example attenuation of fenoterol induced responses by salmeterol. Studies are required to investigate whether this phenomenon is relevant in the setting of acute severe asthma. abstract_id: PUBMED:10926383 Inhaled adrenergic bronchodilators: historical development and clinical application. The adrenergic bronchodilators that have been developed for oral inhalation represent successive refinement in terms of receptor specificity and duration of action. Beta agonist bronchodilators have durations of 4-6 hours, or, in the case of salmeterol, of up to 12 hours, offering convenient dosing. Inhalation of the aerosol formulations targets the lung directly. The release of levalbuterol now provides an agent with a single isomer active on beta-2 receptors. The currently available agents offer clinicians and patients with reversible obstructive lung disease a choice of sophisticated drugs for airway smooth muscle relaxation. Although improvements in the drugs have reduced adverse effects and beta agonists are considered safe, concerns persist about the effect of beta agonists in asthma. An improved understanding of asthma pathophysiology may lead to more appropriate use of beta agonists in asthma. abstract_id: PUBMED:19666775 Mechanism regulating proasthmatic effects of prolonged homologous beta2-adrenergic receptor desensitization in airway smooth muscle. Use of long-acting beta(2)-adrenergic receptor (beta2AR) agonists to treat asthma incurs an increased risk of asthma morbidity with impaired bronchodilation and heightened bronchoconstriction, reflecting the adverse effects of prolonged homologous beta2AR desensitization on airway smooth muscle (ASM) function. Since phosphodiesterase 4 (PDE4) regulates ASM relaxation and contractility, we examined whether the changes in ASM function induced by prolonged homologous beta2AR desensitization are attributed to altered expression and action of PDE4. Cultured human ASM cells and isolated rabbit ASM tissues exposed for 24 h to the long-acting beta2AR agonist salmeterol exhibited impaired acute beta2AR-mediated cAMP accumulation and relaxation, respectively, together with ASM constrictor hyperresponsiveness. These proasthmatic-like changes in ASM function were associated with upregulated PDE4 activity due to enhanced expression of the PDE4D5 isoform and were prevented by pretreating the ASM preparations with the PDE4 inhibitor rolipram or with inhibitors of either PKA or ERK1/2 signaling. 
Extended studies using gene silencing and pharmacological approaches demonstrated that: 1) the mechanism underlying upregulated PDE4D5 expression following prolonged beta2AR agonist exposure involves PKA-dependent activation of G(i) protein signaling via its betagamma-subunits, which elicits downstream activation of ERK1/2 and its induction of PDE4D5 transcription; and 2) the induction of PDE4 activity and consequent changes in ASM responsiveness are prevented by pretreating the beta2AR agonist-exposed ASM preparations with inhibitors of G(i)-betagamma signaling. Collectively, these findings identify that the proasthmatic changes in ASM function resulting from prolonged homologous beta2AR desensitization are attributed to upregulated PDE4 expression induced by G(i)-betagamma-mediated cross-talk between the PKA and ERK1/2 signaling pathways. abstract_id: PUBMED:14730417 Comparative pharmacology of human beta-adrenergic receptor subtypes--characterization of stably transfected receptors in CHO cells. Although many beta1-receptor antagonists and beta2-receptor agonists have been used in pharmacotherapy for many years their pharmacological properties at all three known subtypes of beta-adrenergic receptors are not always well characterized. The aim of this study was, therefore, to provide comparative binding characteristics of agonists (epinephrine, norepinephrine, isoproterenol, fenoterol, salbutamol, salmeterol, terbutalin, formoterol, broxaterol) and antagonists (propranolol, alprenolol, atenolol, metoprolol, bisoprolol, carvedilol, pindolol, BRL 37344, CGP 20712, SR 59230A, CGP 12177, ICI 118551) at all three subtypes of human beta-adrenergic receptors in an identical cellular background. We generated Chinese hamster ovary (CHO) cells stably expressing the three beta-adrenergic receptor subtypes at comparable levels. We characterized these receptor subtypes and analyzed the affinity of routinely used drugs as well as experimental compounds in competition binding studies, using the non-selective antagonist 125I-cyanopindolol as a radioligand. Furthermore, we analyzed the beta-receptor-mediated adenylyl cyclase activity in isolated membranes from these cell lines. The results from our experiments show that all compounds exhibit distinct patterns of selectivity and activity at the three beta-receptor subtypes. In particular, a number of beta2- or beta3-receptor agonists that are inverse agonists at the other subtypes were identified. In addition, beta1-receptor antagonists with agonistic activity at beta2- and beta3-receptors were found. These specific mixtures of agonism, antagonism, and inverse agonism at different subtypes may have important implications for the therapeutic use of the respective compounds. abstract_id: PUBMED:11451847 Alternative mechanisms for long-acting beta(2)-adrenergic agonists in COPD. beta(2)-Adrenergic agonists are commonly used as bronchodilators to treat patients with COPD. In addition to prolonged bronchodilation, long-acting beta(2)-agonists (LABAs) exert other effects that may be of clinical relevance. These include inhibition of airway smooth-muscle cell proliferation and inflammatory mediator release, as well as nonsmooth-muscle effects, such as stimulation of mucociliary transport, cytoprotection of the respiratory mucosa, and attenuation of neutrophil recruitment and activation. This review details the possible alternative mechanisms of action of the LABAs, salmeterol and formoterol, in COPD. 
abstract_id: PUBMED:8819183 Airway effects of salmeterol in healthy individuals. The long-acting beta 2-agonist salmeterol has been shown in several in vitro studies to produce non-beta-mediated relaxant effects. The aim of the present study was to investigate whether these effects have any relevance in humans in vivo. Thirteen healthy individuals were studied in a randomized, double-blind, cross-over study on five separate days. The subjects were pre-treated orally with either propranolol 400 mg in order to block beta-adrenoceptor mediated effects or placebo. Two hours after drug intake, three increasing doses of salmeterol (25 + 50 + 100 micrograms), salbutamol (100 + 200 + 400 micrograms) or placebo were given from matched meter dose inhalers at 1-h intervals between doses. Specific airway conductance (sGAW) was measured in a body plethysmograph at the beginning of the experiment and 30 and 60 min after each inhaled dose of the beta-agonists. Salmeterol and salbutamol produced the same maximal increase in sGAW and had the same area under the dose-response curves. Pre-treatment with propranolol totally inhibited the effect of both drugs. In conclusion, salmeterol at clinically used doses did not produce any non-beta-mediated bronchodilating effect in normal individuals, measured as sGAW. Salmeterol and salbutamol showed the same efficacy but salmeterol was four times more potent than salbutamol. Answer: No, beta-adrenergic-mediated airway relaxation of salmeterol is not antagonized by its solvent xinafoic acid. A study conducted on anesthetized Basenji-greyhound dogs, where their peripheral airways were challenged with xinafoic acid, showed that xinafoic acid did not increase airway resistance. Additionally, xinafoate alone had no significant effect on the specific binding of 125I-cyanopindolol to lung membranes and did not affect the affinity of salmeterol for the beta-adrenoceptor in the absence or presence of xinafoate. These findings suggest that xinafoic acid, the solvent for salmeterol, does not have direct airway irritant effects, does not bind to beta-adrenoceptors, and does not impair the binding of salmeterol to beta-adrenoceptors. Therefore, it is unlikely that xinafoate contributes to the worsening of airway symptoms in asthmatics using salmeterol xinafoate (PUBMED:10378568).
Instruction: Does midline episiotomy increase the risk of third- and fourth-degree lacerations in operative vaginal deliveries? Abstracts: abstract_id: PUBMED:8336877 Does midline episiotomy increase the risk of third- and fourth-degree lacerations in operative vaginal deliveries? Objective: To explore the association between midline episiotomy and the risk of third- and fourth-degree lacerations during operative vaginal delivery with either vacuum extractor or forceps. Methods: This retrospective cohort study analyzed all operative vaginal deliveries at a university hospital in 1989 and 1990. Univariate analysis of the relationships between perineal lacerations and obstetric variables was performed. Stratified analysis using the relevant variables was used to calculate relative risk (RR) estimates. Results: Episiotomy, birth weight, and whether the index birth was the first vaginal birth were associated with third- and fourth-degree perineal lacerations. Stratified analysis demonstrated an RR of 2.4 with a 95% confidence interval of 1.7-3.5 for rectal injury with episiotomy, adjusting for parity and birth weight. Conclusion: Midline episiotomy is associated with an increased risk of third- and fourth-degree perineal lacerations in operative vaginal deliveries. abstract_id: PUBMED:2375331 Risk factors for third-degree and fourth-degree perineal lacerations in forceps and vacuum deliveries. Third- and fourth-degree perineal lacerations occur frequently during operative vaginal deliveries. To identify risk factors for lacerations, 2832 consecutive forceps and vacuum extraction deliveries were analyzed. Third- and fourth-degree lacerations occurred in 30% of deliveries. Multiple logistic regression was used to control for intercorrelation between potential risk factors. Factors associated with increased risk for third- and fourth-degree lacerations were midline episiotomy, nulliparity, second-stage arrest, occipitoposterior position, low or mid station, use of forceps instead of vacuum, use of local anesthesia, and Asian race. When these factors were controlled, there was no effect of birth weight, faculty versus resident operator, gestational age, abnormalities of first-stage labor, or several other factors. Prevention of perineal lacerations requires that the operator identify the patient at risk. Possible options for management of high-risk patients include use of mediolateral episiotomy or no episiotomy, use of vacuum extraction instead of forceps, and use of conduction anesthesia. abstract_id: PUBMED:18221925 Selective vs routine midline episiotomy for the prevention of third- or fourth-degree lacerations in nulliparous women. Objective: The purpose of this study was to determine whether selective midline episiotomy contributes to the prevention of third- or fourth-degree perineal lacerations. Study Design: A randomized controlled clinical trial was performed with 446 nulliparous women with deliveries after 28 weeks of pregnancy. Patients were randomized to undergo either routine episiotomy or selective episiotomy. In the selective episiotomy group, episiotomies were performed only in cases of imminent lacerations, fetal distress, or forceps delivery. Results: In the group of 223 patients who underwent routine episiotomy, 32 (14.3%) had third- or fourth-degree perineal lacerations, as compared to 15 (6.8%) in the group of 222 patients undergoing selective episiotomy (relative risk, 2.12; 95% confidence interval, 1.18-3.81). 
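[Editorial aside, not part of the source abstract] The relative risk just reported for PUBMED:18221925 can be reproduced from the raw counts (32 of 223 women with routine episiotomy vs. 15 of 222 with selective episiotomy) using the standard large-sample formula for a risk ratio and its confidence interval; the short sketch below is only a sanity check of that arithmetic.

```python
import math

# Counts reported in PUBMED:18221925: events / group size.
a, n1 = 32, 223   # third-/fourth-degree lacerations, routine episiotomy group
b, n2 = 15, 222   # third-/fourth-degree lacerations, selective episiotomy group

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR) for a 2x2 table
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Running it gives RR ≈ 2.12 with a 95% CI of roughly 1.18-3.81, matching the values quoted in the abstract.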
Only reduction in third-degree lacerations was significant when analyzed separately. Moreover, periurethral, labia minora, and superficial vaginal lacerations were significantly more frequent in the selective episiotomy group. Conclusion: The policy of performing selective midline episiotomy in nulliparous patients results in a reduction in the risk of third-degree perineal lacerations. abstract_id: PUBMED:16949408 Operative vaginal delivery and midline episiotomy: a bad combination for the perineum. Objective: The purpose of this study was to determine the impact of operative vaginal delivery (forceps or vacuum) and midline episiotomy on the risk of severe perineal trauma. Study Design: In this retrospective cohort study, we assessed the impact of maternal and obstetric factors on the risk of development of severe perineal trauma (third- and fourth-degree perineal lacerations) for all singleton, vertex vaginal live births (n = 33,842) between 1996 and 2003. Results: Among nulliparous women, 12.1% had operative vaginal delivery, 22.4% had midline episiotomy, and 8.1% experienced severe perineal trauma. Among multiparous women, 3.4% had operative vaginal delivery, 4.2% had midline episiotomy, and 1.2% experienced severe perineal trauma. Controlling for maternal age, ethnicity, birth weight and head circumference, evaluation of the interaction of episiotomy and delivery method revealed that forceps (nulliparous women: odds ratio [OR] 8.6, 95% CI 6.5-10.7; multiparous women: OR 26.3, 95% CI 18.1-34.5) and episiotomy (nulliparous women: OR 4.5, 95% CI 3.7-5.4; multiparous women: OR 14.6, 95% CI 10.4-20.5) were consistently associated with the increased risk of anal sphincter trauma. In fact, the magnitude of effect of the statistically significant synergistic interaction was evidenced by more than 3-fold excess of risk of using operative vaginal delivery alone. Conclusion: The use of operative vaginal delivery, particularly in combination with midline episiotomy, was associated with a significant increase in the risk of anal sphincter trauma in both primigravid and multigravid women. Given the reported substantial long-term adverse consequences for anal function, this combination of operative modalities should be avoided if possible. abstract_id: PUBMED:37885968 The association of maternal obesity and obstetric anal sphincter injuries at time of vaginal delivery. Background: The risk of third- and fourth-degree perineal laceration after vaginal delivery in patients with obesity is relatively understudied and has mixed findings in existing literature. Objective: This study aimed to examine the association of maternal obesity and obstetric anal sphincter injuries at vaginal delivery. Study Design: The Healthcare Cost and Utilization Project's National Inpatient Sample was retrospectively queried to examine 7,385,341 vaginal deliveries from January 2017 to December 2019. The exposure assignment was obesity status. The main outcomes were third- and fourth-degree perineal lacerations after vaginal delivery. Statistical analysis examining the exposure-outcome association included (1) inverse probability of treatment weighting with log-Poisson regression generalized linear model to account for prepregnant and pregnant confounders for the exposure and (2) multinomial regression model to account for delivery factors in the inverse probability of treatment weighting cohort. 
The secondary outcomes included (1) the temporal trends of fourth-degree laceration and its associated factors at cohort level and (2) risk factor patterns for fourth-degree laceration by constructing a classification tree model. Results: In the inverse probability of treatment weighting cohort, patients with obesity were less likely to have fourth-degree lacerations and third-degree lacerations than patients without obesity (fourth-degree laceration: 2.3 vs 3.9 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 0.62; 95% confidence interval, 0.56-0.69; third-degree laceration: 15.6 vs 20.1 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 0.79; 95% confidence interval, 0.76-0.82). In contrast, in patients with obesity vs those without obesity, forceps delivery (54.7 vs 3.3 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 17.73; 95% confidence interval, 16.17-19.44), vacuum-assisted delivery (19.8 vs 2.9 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 5.18; 95% confidence interval, 4.85-5.53), episiotomy (19.2 vs 2.8 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 3.95; 95% confidence interval, 3.71-4.20), and shoulder dystocia (17.8 vs 3.4 per 1000 vaginal deliveries, respectively; adjusted odds ratio, 2.60; 95% confidence interval, 2.29-2.94) were associated with more than a 2-fold increased risk of fourth-degree perineal laceration. Among the group with obesity, patients who had forceps delivery and shoulder dystocia had the highest incidence of fourth-degree laceration (105.3 per 1000 vaginal deliveries). Among the group without obesity, patients who had forceps delivery, shoulder dystocia, and macrosomia had the highest incidence of fourth-degree laceration (294.1 per 1000 vaginal deliveries). The incidence of fourth-degree perineal laceration decreased by 11.9% over time (P trend=.004); moreover, forceps delivery, vacuum-assisted delivery, and episiotomy decreased by 3.8%, 7.6%, and 29.5%, respectively (all, P trend&lt;.05). Conclusion: This national-level analysis suggests that patients with obesity are less likely to have obstetric anal sphincter injuries at the time of vaginal delivery. Furthermore, this analysis confirms other known risk factors for fourth-degree laceration, such as forceps delivery, vacuum-assisted delivery, episiotomy, and shoulder dystocia. However, we noted a decreasing trend in fourth-degree lacerations, which may be due to evolving obstetrical practices. abstract_id: PUBMED:8742161 The relationship of episiotomy to third and fourth degree lacerations. Third and fourth degree lacerations can produce significant long term morbidity to women undergoing childbirth. The incidence of third and fourth degree lacerations is variable depending on the institution and the obstetrical provider. While episiotomy remains a valuable intervention in selected cases, an improvement program directed at lowering the use of episiotomy can reduce the incidence of this clinical event. Hospitals and physicians with higher rates of third and fourth degree tears should examine the use of episiotomy, and midline episiotomy in particular, which is associated with an increased incidence of third and fourth degree tears. abstract_id: PUBMED:9065190 Is there a benefit to episiotomy at operative vaginal delivery? Observations over ten years in a stable population. 
Objective: Our purpose was to examine the association between maternal vaginal and perineal morbidity and episiotomy performed at operative vaginal delivery. Study Design: We obtained data from 2041 consecutive operative vaginal deliveries and compared yearly rates of episiotomy, lacerations, and potential confounders with linear regression and stratified analyses. Results: Between 1984 and 1994 the use of episiotomy for operative vaginal deliveries fell significantly (93.4% to 35.7%, R2 = 0.85, p = 0.0001). This change was associated with a rise in the rate of vaginal lacerations (16.1% to 40.0%, R2 = 0.80, p = 0.0002), a decrease in the rate of fourth-degree lacerations (12.2% to 5.4%, R2 = 0.62, p = 0.004), but no significant change in the rate of third-degree lacerations. These associations held in separate analyses stratified by parity and type of instrument used for delivery. The prevalence of other previously reported risks for perineal morbidity did not change during the study period. Conclusion: At our institution a statistically and clinically significant reduction in the use of episiotomy for operative vaginal deliveries was not associated with a change in the rate of third-degree lacerations but was associated with an increase in the rate of vaginal lacerations and a decrease in the rate of fourth-degree lacerations. abstract_id: PUBMED:25510687 Prevalence and contributing factors of severe perineal damage following episiotomy-assisted vaginal delivery. Objective: This study was conducted to investigate the risk factors of third- and fourth-degree lacerations following vaginal deliveries in Taiwanese women, and to offer clinical guidance for the reduction of severe perineal lacerations. Materials And Methods: A total of 1879 women who underwent vaginal deliveries assisted by midline episiotomy at a tertiary hospital were included. Obstetric risk factors were analyzed for women with and without third- and fourth-degree lacerations. Results: Two hundred and five deliveries (10.9%) resulted in third- or fourth-degree lacerations. Parity, duration of first and second stages of labor, rate of instrument-assisted vaginal deliveries, the newborn's birth weight and head circumference, and the ratio of the newborn's birth weight to maternal body mass index were significantly different between women with and without severe perineal lacerations. Logistic regression demonstrated that nulliparity (odds ratio = 3.626, p < 0.001), duration of second stage of labor (odds ratio = 1.102, p = 0.044), instrument-assisted vaginal delivery (odds ratio = 4.102, p < 0.001), and newborn's head circumference (odds ratio = 1.323, p < 0.001) were independent risk factors of severe perineal lacerations. Instrument-assisted vaginal delivery was a common independent risk factor for severe lacerations shared between primiparous and multiparous women. Conclusions: With regard to severe perineal lacerations during vaginal delivery, there are multiple obstetric contributory factors despite routine episiotomy, among them, nulliparity, longer labor duration, greater newborn head circumference, and instrument-assisted vaginal delivery. The latter should only be performed after careful evaluation. abstract_id: PUBMED:25751203 Evaluation of third-degree and fourth-degree laceration rates as quality indicators. Objective: To examine the patterns and predictors of third-degree and fourth-degree laceration in women undergoing vaginal delivery.
Methods: We identified a population-based cohort of women in the United States who underwent a vaginal delivery between 1998 and 2010 using the Nationwide Inpatient Sample. Multivariable log-linear regression models were developed to account for patient, obstetric, and hospital factors related to lacerations. Between-hospital variability of laceration rates was calculated using generalized log-linear mixed models. Results: Among 7,096,056 women who underwent vaginal delivery in 3,070 hospitals, 3.3% (n=232,762) had a third-degree laceration and 1.1% (n=76,347) had a fourth-degree laceration. In an adjusted model for fourth-degree lacerations, important risk factors included shoulder dystocia and forceps and vacuum deliveries with and without episiotomy. Other demographic, obstetric, medical, and hospital variables, although statistically significant, were not major determinants of lacerations. Risk factors in a multivariable model for third-degree lacerations were similar to those in the fourth-degree model. Regression analysis of hospital rates (n=3,070) of lacerations demonstrated limited between-hospital variation. Conclusion: Risk of third-degree and fourth-degree laceration was most strongly related to operative delivery and shoulder dystocia. Between-hospital variation was limited. Given these findings and that the most modifiable practice related to lacerations would be reduction in operative vaginal deliveries (and a possible increase in cesarean delivery), third-degree and fourth-degree laceration rates may be a quality metric of limited utility. abstract_id: PUBMED:31003947 Impact of Episiotomy During Operative Vaginal Delivery on Obstetrical Anal Sphincter Injuries. Objective: The purpose of this study was to describe associations between episiotomy at the time of forceps or vacuum-assisted delivery and obstetrical anal sphincter injuries (OASIS). Methods: This population-based retrospective cohort study used delivery information from a provincial perinatal clinical database. Full-term, singleton, in-hospital, operative vaginal deliveries of vertex-presenting infants from April 1, 2006 to March 31, 2016 were identified. Odds ratios (ORs) and 95% confidence intervals (CIs) for associations between episiotomy and third- or fourth-degree lacerations were calculated in multiple logistic regression models (Canadian Task Force Classification II-2). Results: Episiotomy was performed in 34% of 52 241 operative vaginal deliveries. OASIS occurred in 21% of forceps deliveries and 7.6% of vacuum deliveries. Episiotomy was associated with increased odds of severe perineal lacerations for vacuum deliveries among women with (OR 2.48; 95% CI 1.96-3.13) and without (OR 1.12; 95% CI 1.02-1.22) a prior vaginal delivery. Among forceps deliveries, episiotomy was associated with increased odds of OASIS for those with a previous vaginal delivery (OR 1.52; 95% CI 1.12-2.06), but it was protective for women with no previous vaginal delivery (OR 0.73; 95% CI 0.67-0.79). Midline compared with mediolateral episiotomy increased the odds of OASIS in forceps deliveries (OR 2.73; 95% CI 2.37-3.13) and vacuum deliveries (OR 1.94; 95% CI 1.65-2.28). Conclusion: In conclusion, results suggest that episiotomy should be used with caution, particularly among women with a previous vaginal delivery and in the setting of vacuum-assisted delivery. Episiotomy may protect against OASIS in forceps-assisted deliveries for women without a prior vaginal delivery. 
Answer: Yes, midline episiotomy is associated with an increased risk of third- and fourth-degree perineal lacerations in operative vaginal deliveries. A retrospective cohort study found that midline episiotomy was associated with a relative risk of 2.4 for rectal injury, adjusting for parity and birth weight (PUBMED:8336877). Another study identified midline episiotomy as a factor associated with increased risk for these severe lacerations during forceps and vacuum deliveries (PUBMED:2375331). Furthermore, a retrospective cohort study indicated that the combination of operative vaginal delivery and midline episiotomy significantly increased the risk of anal sphincter trauma in both primigravid and multigravid women (PUBMED:16949408). Additionally, a study examining the impact of episiotomy during operative vaginal delivery on obstetrical anal sphincter injuries found that midline compared with mediolateral episiotomy increased the odds of severe perineal lacerations in both forceps and vacuum deliveries (PUBMED:31003947). However, it is important to note that episiotomy may protect against obstetrical anal sphincter injuries in forceps-assisted deliveries for women without a prior vaginal delivery, but should be used with caution, particularly among women with a previous vaginal delivery and in the setting of vacuum-assisted delivery (PUBMED:31003947). These findings suggest that the use of midline episiotomy should be carefully considered during operative vaginal deliveries due to the increased risk of severe perineal trauma.
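The abstracts and answer above report crude and adjusted relative risks and odds ratios with 95% confidence intervals (for example, RR 2.4, 95% CI 1.7-3.5 in PUBMED:8336877). As a minimal illustration of how such unadjusted estimates arise from a 2x2 table, the sketch below uses entirely hypothetical counts; it is not a reconstruction of any cited study's analysis, which additionally adjusts for covariates such as parity and birth weight.

```python
import math

def crude_rr_and_or(a, b, c, d, z=1.96):
    """2x2 table: a/b = lacerations/no lacerations with episiotomy,
    c/d = lacerations/no lacerations without episiotomy (hypothetical labels)."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = lambda est, se: (est * math.exp(-z * se), est * math.exp(z * se))
    return (rr, ci(rr, se_log_rr)), (odds_ratio, ci(odds_ratio, se_log_or))

# Hypothetical counts: 40/200 severe lacerations with episiotomy vs 20/250 without.
(rr, rr_ci), (or_, or_ci) = crude_rr_and_or(40, 160, 20, 230)
print(f"RR {rr:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f}); "
      f"OR {or_:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
```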
Instruction: Surgical treatment of carpal tunnel syndrome: endoscopic or classical (open)? Abstracts: abstract_id: PUBMED:32377090 Minimally Invasive Open Surgical Approach and Outcomes for Carpal Tunnel Syndrome. Objectives: The most common peripheral neuropathy is carpal tunnel syndrome. The present study aims to describe our minimally invasive open surgical approach for carpal tunnel syndrome and evaluate surgical outcomes. Methods: We included 217 patients who were operated on in our clinic for carpal tunnel syndrome using a minimally invasive open surgical approach. Visual Analogue Scale and Functional Outcome Scale scores were obtained preoperatively and at one and three months postoperatively to determine surgical outcomes. Results: The mean age of the patients was 55.4±12.8 years (32 to 69), 175 (80.6%) were women and 42 (19.4%) were men. The assessment of carpal tunnel syndrome's etiology showed that 189 (87.1%) of the cases were idiopathic, 19 (8.8%) had hypothyroidism, 5 (2.3%) had rheumatoid arthritis and 4 (1.8%) were due to pregnancy. The average improvement in VAS from the preoperative to the late postoperative assessment was 5.41±1.05, and the average improvement in FOS was 17.44±3.06; both improvements were statistically significant. Conclusion: The minimally invasive open surgical approach for carpal tunnel syndrome (an average of 1 cm skin incision) is performed with local anesthesia and successful surgical outcomes are achieved. abstract_id: PUBMED:35966418 Prevalence of Anatomical Variations in Patients with Carpal Tunnel Syndrome Undergoing Classical Open Carpal Tunnel Release. Objective: To evaluate the prevalence of anatomical variations encountered in patients with carpal tunnel syndrome who underwent classical open carpal tunnel release. Methods: A total of 115 patients with a high probability of clinical diagnosis for carpal tunnel syndrome and indication for surgical treatment were included. These patients underwent electroneuromyography and ultrasound for diagnostic confirmation. They underwent surgical treatment by classical open carpal tunnel release, in which a complete inventory of the surgical wound was performed to search for and visualize anatomical variations inside and outside the carpal tunnel. Results: The total prevalence of anatomical variations intra- and extra-carpal tunnel found in this study was 63.5% (95% confidence interval [CI]: 54.5-72.4). The prevalence of the transverse carpal muscle was 57.4% (95% CI: 47.8-66.6%), of the bifid median nerve associated with the persistent median artery was 1.7% (95% CI: 0.0-4.2%), and the bifid median nerve associated with the persistent median artery and the transverse carpal muscle was 1.7% (95% CI: 0.0-4.2%). Conclusion: The most prevalent extra-carpal tunnel anatomical variation was the transverse carpal muscle. The most prevalent intra-carpal tunnel anatomical variation was the bifid median nerve associated with the persistent median artery. The surgical finding of an extra-carpal tunnel anatomical variation, such as the transverse carpal muscle, may indicate the presence of other associated intra-carpal tunnel anatomical variations, such as the bifid median nerve, persistent median artery, and anatomical variations of the recurrent median nerve branch. abstract_id: PUBMED:9273452 Surgical treatment of carpal tunnel syndrome: endoscopic or classical (open)? A prospective randomized trial. Objective: To compare the results of open with endoscopic release of the carpal tunnel in patients with the carpal tunnel syndrome.
Design: Randomised prospective study. Setting: General hospital Zeeuws-Vlaanderen, Oostburg and Terneuzen, the Netherlands. Method: 178 patients were randomised for open or endoscopic release. The symptom severity score and functional status score were completed before and three months after the procedure. One week after the operation the patients' postoperative pain was measured on a 10-point visual analogue scale. Differences were analysed using the Chi-square test or the t-test. Results: Randomisation failed in two patients; 85 patients had an endoscopic release and 91 patients had an open release. The postoperative pain was significantly less in the endoscopic group. Improvement in symptom severity score and functional status score was the same in both groups. There was no difference in absence from work. Two local complications occurred in the endoscopically treated group. Of the patients 25% were not or only slightly satisfied with the results. Conclusion: Endoscopic release of the carpal tunnel is as effective as the open release but it gives less postoperative pain. Because of the risk of complications and the additional costs, the endoscopic release is not the preferred method for treatment of the carpal tunnel syndrome, however. abstract_id: PUBMED:29759797 Our Surgical Experience: Open Versus Endoscopic Carpal Tunnel Surgery. Carpal tunnel release is one of the most common hand operations in the United States and every year approximately 500,000 patients undergo surgical release. In this article, we examine the argument for endoscopic carpal tunnel release versus open carpal tunnel release, as well as some of the literature on anatomical variants in the median nerve at the wrist. We further describe the experience of several surgeons in a large academic practice. The goals of this article are to describe key anatomic findings and to present several cases that have persuaded us to favor offering patients open carpal tunnel release. abstract_id: PUBMED:11525205 Endoscopic versus open surgical treatment of carpal tunnel syndrome. The impact of CTS is significant as evidenced by the fact that only back injuries result in greater rates of employee absenteeism in the workplace. CTR is now the most commonly performed surgical procedure in the United States. Earlier efforts using open surgical techniques were associated with significant morbidity, which some would argue is greater than that associated with the disease itself. The addition of endoscopy to surgeon's armamentarium offers the promise of decreased morbidity associated with the surgical treatment of CTS. Evidence indicates that when compared with open CTR, endoscopic CTR results in earlier achievement of patient satisfaction and functional outcomes. As a result, it is becoming clear that endoscopic surgery is a safe and effective method of treating CTS. abstract_id: PUBMED:29903541 Endoscopic Versus Open Carpal Tunnel Release: A Detailed Analysis Using Time-Driven Activity-Based Costing at an Academic Medical Center. Purpose: In order to effectively improve value in health care delivery, providers must thoroughly understand cost drivers. Time-driven activity-based costing (TDABC) is a novel accounting technique that may allow for precise characterization of procedural costs. The purpose of the present study was to use TDABC to characterize costs in a high-volume, low-complexity ambulatory procedure (endoscopic vs open carpal tunnel release [CTR]), identify cost drivers, and inform opportunities for clinical improvement. 
Methods: The costs of endoscopic and open CTR were calculated in a matched cohort investigation using TDABC. Detailed process maps including time stamps were created accounting for all clinical and administrative activities for both the endoscopic and the open treatment pathways on the day of ambulatory surgery. Personnel cost rates were calculated accounting for capacity, salary, and fringe benefits. Costs for direct consumable supplies were based on purchase price. Total costs were calculated by aggregating individual resource utilization and time data and were compared between the 2 surgical techniques. Results: Total procedural cost for the endoscopic CTR was 43.9% greater than the open technique ($2,759.70 vs $1,918.06). This cost difference was primarily driven by the disposable endoscopic blade assembly ($217), direct operating room costs related to procedural duration (44.8 vs 40.5 minutes), and physician labor. Conclusions: Endoscopic CTR is 44% more expensive than open CTR compared with a TDABC methodology at an academic medical center employing resident trainees. Granular cost data may be particularly valuable when comparing these 2 procedures, given the clinical equipoise of the surgical techniques. The identification of specific cost drivers with TDABC allows for targeted interventions to optimize value delivery. Type Of Study/level Of Evidence: Economic Analysis II. abstract_id: PUBMED:37939918 Comparative analysis of open versus endoscopic carpal tunnel release in a comprehensive national database. Purpose: Carpal tunnel syndrome is the most common compressive neuropathy. There is limited evidence to support endoscopic compared to open carpal tunnel release according to the 2016 American Academy of Orthopaedic Surgeons Clinical Practice Guideline on carpal tunnel syndrome. The purpose of the present study was to assess differences between the two procedures by comparing 30- and 90-day complications and mean hospital costs in a large patient population. Methods: Using the national Mariner15 Database by PearlDiver Technologies, we retrospectively studied 27,192 carpal tunnel syndrome patients who received carpal tunnel release using an endoscopic or open surgical approach from 2010 to 2019. Patients who met the inclusion criteria were grouped and case-matched at a 1:1 ratio through the corresponding International Classification of Diseases codes (n = 13,596) and assessed for 30- and 90-day complications such as median nerve injury, superficial palmar arch injury, and revision carpal tunnel release surgery. Univariate analysis was used to compare outcomes and a multivariate regression was performed to identify risk factors associated with each outcome. Results: Endoscopic carpal tunnel release was associated with a higher rate of median nerve injury than open release at 30 days (0.3% vs. 0.1%; odds ratio, 2.21; 95% confidence interval, 1.29-3.81; p < 0.05) and 90 days (0.4% vs. 0.3%; odds ratio, 1.77; 95% confidence interval, 1.16-2.70; p < 0.05). Endoscopic release was also associated with a higher rate of superficial palmar arch injury (0.1% vs. 0%; odds ratio, 25.02; 95% confidence interval, 1.48-423.0; p < 0.05). Conclusions: In the present study, risk of median nerve injury and vascular injury was higher after endoscopic than open carpal tunnel release. At 90 days, all-cause revision rates were similar between techniques. Surgeons should understand these differences to optimize surgical decision-making. Level Of Evidence: Therapeutic, IIIa.
abstract_id: PUBMED:17763582 Prospective comparison of endoscopic and open surgical methods for carpal tunnel syndrome. Objective: To compare outcomes of patients undergoing either open or endoscopic carpal tunnel release for the treatment of idiopathic carpal tunnel syndrome. Methods: A prospective, randomized study was performed on 70 hands in 62 patients with idiopathic carpal tunnel syndrome from April 2000 to April 2004. Either open (36 hands in 30 patients) or endoscopic (34 hands in 32 patients) carpal tunnel release was performed randomly. Symptom improvement, complications, and the time of operation, in-hospital stay, and return to work between the two groups were assessed with an average of 2 years of follow-up. Electromyography was performed preoperatively and at 3 months postoperatively. Results: There were no significant differences between the two surgical groups with regard to postoperative improvement of symptoms, electromyography tests, and the incidence of complications. However, the rate of scar tenderness, the time of operation, in-hospital stay, and time to return to work were significantly lower in the endoscopic group compared with the open group (P < 0.05). Conclusions: The endoscopic carpal tunnel release is a reliable method in the treatment of idiopathic carpal tunnel syndrome. It has the advantages of less scar tenderness, shorter operation time, shorter in-hospital stay, earlier functional recovery, safety, and a higher satisfaction rate compared with open methods. abstract_id: PUBMED:31773481 Endoscopic Versus Open Carpal Tunnel Release. Purpose Of Review: Carpal tunnel syndrome is the most common compressive neuropathy encountered by hand and upper extremity surgeons. The predominant presentation includes symptomatic paresthesias in the median nerve distribution of the affected hand, frequently causing nocturnal disturbances. Surgical treatment requires division of the transverse carpal ligament, which can be performed through open and endoscopic means. Endoscopic techniques have evolved significantly since they were first introduced in the late 1980s. This manuscript reviews the literature to summarize the current state of carpal tunnel surgery. Recent Findings: While endoscopic techniques have demonstrated superior early functional outcomes and a more rapid recovery, there are lingering concerns over the potential for nerve, vessel, and tendon injuries. These concerns have not been validated by the hand surgery literature, which ascribes similar rates of complications for both open and endoscopic surgical approaches. Moreover, patients report greater satisfaction with endoscopic surgical approaches compared with open techniques. In summary, the debate between proponents of open versus endoscopic carpal tunnel surgery continues. While surgeons who employ endoscopic techniques appear to be in the minority, there is a plethora of evidence to suggest that both approaches are comparably safe and equally effective. abstract_id: PUBMED:19199454 Quantitative analysis of the variation in angles of the carpal arch after open and endoscopic carpal tunnel release. Clinical article. Object: Previous studies have demonstrated that the volume of the carpal canal increases after open and endoscopic surgery in patients with carpal tunnel syndrome. There is some controversy regarding the contribution of the postoperative widening of the carpal arch to the increment in carpal canal volume.
The objectives of this study were to: 1) evaluate the degree of variation in the angles formed by the borders of the carpal arch following the surgical division of the transverse carpal ligament; and 2) determine if there are differences in the variation of these angles after the classical open surgery versus endoscopic carpal tunnel release. Methods: The authors prospectively studied 20 patients undergoing carpal tunnel syndrome surgery: 10 patients were treated via the standard open technique, and 10 underwent endoscopic carpal tunnel release. The angles of the carpal arch were measured on CT scans of the affected hand obtained before and immediately after the surgical procedures. Measurements were performed at the level of the pisiform-scaphoid hiatus and at the level of the hook of the hamate-trapezium hiatus. Results: There was widening of the postoperative angles of the carpal arch after open and endoscopic division of the flexor retinaculum; however, the difference between pre- and postoperative angulations reached statistical significance only in those patients treated by means of the open procedure. The mean (+/- SD) values for the postoperative increase in the angles at the level of the pisiform-scaphoid hiatus were 5.1 +/- 0.4 degrees after open surgery and 2.5 +/- 0.3 degrees after the endoscopically assisted procedure (p < 0.05). At the level of the hook of the hamate-trapezium hiatus, the mean values for the widening of the angles were 6.2 +/- 0.6 degrees for the open surgery group and 1.2 +/- 0.4 degrees for those patients treated by means of the endoscopic technique (p < 0.05). Conclusions: The widening of the postoperative angles of the carpal arch is a phenomenon observed at the proximal and distal levels of the carpal canal, and it can be noted after both open and endoscopically assisted carpal tunnel release. The endoscopic procedure yielded less increase in these angles than the open surgery. Answer: The surgical treatment of carpal tunnel syndrome (CTS) can be performed using either endoscopic or classical open methods, and there is ongoing debate about which approach is superior. Endoscopic carpal tunnel release (CTR) is associated with less postoperative pain compared to open release, but the results in terms of symptom severity score and functional status score improvement are similar for both methods. However, endoscopic release is not necessarily the preferred method due to the risk of complications and additional costs (PUBMED:9273452). Endoscopic CTR is also reported to result in earlier patient satisfaction and functional outcomes, suggesting decreased morbidity compared to open CTR (PUBMED:11525205). However, a detailed analysis using time-driven activity-based costing at an academic medical center found that endoscopic CTR is 44% more expensive than open CTR, with the cost difference primarily driven by the disposable endoscopic blade assembly, direct operating room costs related to procedural duration, and physician labor (PUBMED:29903541). Comparative studies have shown that endoscopic CTR has advantages such as less scar tenderness, shorter operation time, shorter in-hospital stay, earlier functional recovery, safety, and a higher satisfaction rate compared with open methods (PUBMED:17763582). However, a comprehensive national database analysis revealed that endoscopic CTR was associated with a higher rate of median nerve injury and superficial palmar arch injury compared to open release (PUBMED:37939918).
Despite these concerns, the hand surgery literature generally ascribes similar rates of complications for both open and endoscopic surgical approaches, and patients report greater satisfaction with endoscopic surgical approaches (PUBMED:31773481). On the other hand, a minimally invasive open surgical approach for CTS, which involves an average of 1 cm skin incision performed with local anesthesia, has been shown to achieve successful surgical outcomes (PUBMED:32377090). Additionally, open surgery has been found to result in a statistically significant widening of the postoperative angles of the carpal arch, which was less pronounced in the endoscopic procedure (PUBMED:19199454). In conclusion, both endoscopic and open surgical treatments for CTS have their advantages and disadvantages. Endoscopic CTR may offer less postoperative pain and quicker functional recovery, but it comes with higher costs and a potentially higher risk of nerve and vascular injuries.
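PUBMED:29903541 above describes time-driven activity-based costing (TDABC), in which a procedure's cost is aggregated from personnel time multiplied by capacity cost rates plus the purchase price of consumables. The sketch below illustrates that aggregation only in outline: apart from the $217 blade assembly and the reported procedure totals, every role, time, rate, and supply cost is a hypothetical placeholder rather than a figure from the study.

```python
# Illustrative TDABC aggregation; roles, minutes, and rates are hypothetical.
personnel = [
    ("surgeon", 41, 8.00),             # (role, minutes on the case, capacity cost rate in $/min)
    ("circulating_nurse", 60, 1.50),
    ("anesthesia_provider", 45, 4.00),
]
consumables = {
    "disposable_endoscopic_blade": 217.00,  # cited in the abstract
    "basic_supplies": 35.00,                # hypothetical
}
personnel_cost = sum(minutes * rate for _, minutes, rate in personnel)
total_cost = personnel_cost + sum(consumables.values())
print(f"sketched endoscopic case cost: ${total_cost:,.2f}")

# The totals reported in the abstract imply the ~44% relative difference cited in the answer.
endoscopic, open_release = 2759.70, 1918.06
print(f"relative cost difference: {(endoscopic - open_release) / open_release:.1%}")  # ~43.9%
```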
Instruction: Are adverse events of nevirapine and efavirenz related to plasma concentrations? Abstracts: abstract_id: PUBMED:16038474 Are adverse events of nevirapine and efavirenz related to plasma concentrations? Objective: The relationships between adverse events (AEs) and plasma concentrations of nevirapine (NVP) and efavirenz (EFV) were investigated as part of the large, international, randomized 2NN study. Methods: Treatment-naive, HIV-1-infected patients received NVP (once or twice daily), EFV or their combination, each in combination with lamivudine and stavudine. Blood samples were collected on day 3 and weeks 1, 2, 4, 24 and 48. Concentrations of NVP and EFV were quantitatively assessed by a validated HPLC assay. Individual Bayesian estimates of the area under the plasma concentration-time curve over 24 h (AUC24h), and minimum and maximum plasma concentrations (Cmin and Cmax) as measures for drug exposure of NVP and EFV, were generated using a previously developed population pharmacokinetic model. Pharmacokinetic parameters were compared for patients with and without central nervous system (CNS) and psychiatric AEs, hepatic events, liver enzyme elevations (LEEs) and rash. Furthermore, it was investigated whether a clear cut-off for a pharmacokinetic parameter could be identified above which the incidence of AEs was clearly increased. AEs were also related to demographic parameters and baseline characteristics. Results: In total, from 1077 patients, NVP (3024 samples) and EFV (1694 samples) plasma concentrations and AE data (825 observations) were available. For all patients Cmin, Cmax and AUC24h were determined. When corrected for known covariates of gender, CD4 cell count at baseline, region, hepatitis coinfection and possible interactions between these factors, no significant associations between AEs and any tested exposure parameter of NVP was observed. Also, no target Cmin value, above which patients were at increased risk for AEs, could be established. On the other hand, geographical region, hepatitis coinfection, CD4 cell count and gender were found to be significantly related with the incidence of CNS and psychiatric AEs, hepatic events, LEEs and rash during the treatment with NVP. The occurrence of elevated liver enzymes during the first 6 weeks in the EFV-containing arm was significantly (P = 0.036) correlated to the exposure of EFV (Cmin). Only hepatitis coinfection impacted on LEEs during the first 6 weeks of treatment. With an EFV Cmin above 2.18 mg/l during the induction phase, patients were 4.4 (range 1.3-15.5) times more at risk for elevated liver enzymes. No other correlations between AEs and EFV pharmacokinetics or patient characteristics could be identified. Conclusions: Pharmacokinetic parameters of NVP did not have a relationship to AEs in the 2NN trial when corrected for known covariates. The value of periodical drug monitoring of NVP as a way to prevent toxicity is therefore limited. Treating physicians should instead focus on factors that are more predictive of AEs (gender, CD4 count and hepatitis coinfection). High EFV Cmin levels resulted in elevated liver enzyme values during the first 6 weeks of treatment. Regular measurement of EFV levels and liver enzymes at the start of therapy may therefore be advised. abstract_id: PUBMED:11017835 The tolerability of efavirenz after nevirapine-related adverse events. 
Eight patients who were infected with human immunodeficiency virus, and who had each sustained an adverse drug reaction while following a regimen including nevirapine, were switched to a regimen including efavirenz. None of the patients experienced adverse events identical to that which necessitated discontinuation of nevirapine. This study demonstrates that adverse events related to nevirapine are not a class-specific effect. abstract_id: PUBMED:25657900 Evaluation of adverse drug reactions in HIV positive patients in a tertiary care hospital. Context: The advancement and development of new drugs and treatment strategies increase the risk of unusual Adverse Events (AEs) in HIV patients. Aims: The objective of our study was to assess the incidence, types and nature of AEs in HIV positive subjects. Settings And Design: Patients with WHO stage IV disease irrespective of the CD4 cell count, or WHO stage III disease with a CD4 cell count <350 cells/cu. mm, or WHO stage I or II disease with a CD4 cell count of <200 cells/cu. mm, and on prior anti-retroviral therapy for not more than six months preceding the observation date, were included in the study. After initiation of therapy, the patients were examined for the occurrence of any adverse events including the type and severity, or any other abnormal laboratory findings. Causality assessment of the adverse events was done using the Naranjo scale. Results: Out of 327 patients studied prospectively, 43 patients developed AEs. Out of these, 23 (53.5%) were males and 20 (46.5%) were females. A total of 53 (16.21%) AEs were reported. Antitubercular drugs caused the maximum AEs (28.3%) followed by zidovudine (20.7%), nevirapine (15.0%) and efavirenz (5.6%). Stavudine, ethambutol, sulfamethoxazole and trimethoprim, and atazanavir were also responsible for 3.7% of AEs individually. Causality assessment done according to the Naranjo scale revealed that 66.04% AEs were 'probable' and 33.96% were 'possible'. Conclusions: Anemia, hepatitis and dermatological adverse effects are the most common AEs. Antitubercular drugs contributed significantly to the incidence of AEs in these patients. Frequency of AEs was slightly more in males compared to females. abstract_id: PUBMED:22302563 Use of dried blood spots for the determination of plasma concentrations of nevirapine and efavirenz. Objectives: Plasma concentrations are frequently used for therapeutic drug monitoring of antiretroviral drugs. Dried blood spot sampling offers a patient-friendly and easy alternative to plasma sampling. However, dried blood spot concentrations are not necessarily equal to plasma concentrations and therefore the objective of this work was to establish the relationship between nevirapine and efavirenz dried blood spot and plasma concentrations to facilitate clinical implementation of dried blood spot sampling. Methods: Paired dried blood spot and plasma samples were obtained from 40 HIV-infected patients on nevirapine and 40 on efavirenz treatment. All samples were analysed using validated HPLC-tandem mass spectrometry methods for the two matrices. Theoretical plasma concentrations were calculated from dried blood spot concentrations using the formula [dried blood spot concentration/(1 - haematocrit)] × fraction bound to plasma proteins = plasma concentration. Linear regression and Bland-Altman analysis were used to compare the two methods.
Results: Dried blood spot and plasma concentrations of nevirapine and efavirenz correlated well (r(2) = 0.867 and 0.972, respectively), although efavirenz dried blood spot concentrations were 39.8% (SD 7.1%) lower than plasma concentrations. Theoretical plasma concentrations (using patient-specific haematocrit) of nevirapine and efavirenz were similar to measured plasma concentrations, with a mean difference between the two methods of 0.29 mg/L (SD 1.35 mg/L) and 0.08 mg/L (SD 0.31 mg/L), respectively. Conclusions: Dried blood spot concentrations of nevirapine and efavirenz were equal to plasma concentrations after correction for haematocrit and compound-specific plasma protein binding and can therefore be used in clinical practice. abstract_id: PUBMED:15509183 Practical guidelines to interpret plasma concentrations of antiretroviral drugs. Several relationships have been reported between antiretroviral drug concentrations and the efficacy of treatment, and toxicity. Therefore, therapeutic drug monitoring (TDM) may be a valuable tool in improving the treatment of HIV-1-infected patients in daily practice. In this regard, several measures of exposure have been studied, e.g. trough and maximum concentrations, concentration ratios and the inhibitory quotient. However, it has not been unambiguously established which pharmacokinetic parameter should be monitored to maintain optimal viral suppression. Each pharmacokinetic parameter has its pros and cons. Many factors can affect the pharmacokinetics of antiretroviral agents, resulting in variability in plasma concentrations between and within patients. Therefore, plasma concentrations should be considered on several occasions. In addition, the interpretation of the drug concentration of a patient should be performed on an individual basis, taking into account the clinical condition of the patient. Important factors herewith are viral load, immunology, occurrence of adverse events, resistance pattern and comedication. In spite of the described constraints, the aim of this review is to provide a practical guide for TDM of antiretroviral agents. This article outlines pharmacokinetic target values for the HIV protease inhibitors amprenavir, atazanavir, indinavir, lopinavir, nelfinavir, ritonavir and saquinavir, and the non-nucleoside reverse transcriptase inhibitors efavirenz and nevirapine. Detailed advice is provided on how to interpret the results of TDM of these drugs. abstract_id: PUBMED:35113365 Short-Term Adverse Drug Reactions to Antiretroviral Therapy in Children with HIV: A Cohort Study. Objectives: To carry out an active surveillance for adverse drug reactions (ADRs) in children with HIV infection newly initiated on antiretroviral therapy (ART), determine risk factors for their occurrence, and assess their influence on adherence to ART. Methods: All children newly initiated on ART from 1st March 2014 to 30th June 2019 at a tertiary care children's hospital in New Delhi, were actively monitored for ADRs to ART for a period of 6 mo after ART initiation. The frequency, spectrum, and severity of ADRs, their influence on adherence, and risk factors for their occurrence were analyzed. Results: Among the 174 enrolled children, ADRs were observed in 78 (44.8%) children during the first 6 mo after ART initiation. Total numbers of episodes of ADR observed were 108 (0.62 episodes of ADR/child). Sixty percent of events were of grade 1 severity, 19.4% events were of grade 2 and 3 each, while 1 (0.9%) event was of grade 4 severity. 
Adherence to ART was adversely affected in 21.8% of ADRs. Gastrointestinal symptoms (49.1%) were most frequent among all the events observed. Zidovudine, lopinavir/ritonavir, efavirenz and nevirapine-based regimens were significantly associated with hematological, gastrointestinal, neurological, and dermatological ADRs, respectively. Children with immunological suppression were at a higher risk of developing ADRs as compared to those without it [RR 1.9 (95% CI 1.1-3.2)]. Conclusions: ADRs to ART are very frequent; most of them are mild and self-limiting. However, they can adversely impact adherence to ART. Anticipatory guidance, ongoing monitoring, and provision of symptomatic treatment will help tide over most ADRs and reduce their adverse impact upon ART adherence. abstract_id: PUBMED:11684213 Prevalence of adverse events associated with potent antiretroviral treatment: Swiss HIV Cohort Study. Background: Data on adverse events to antiretroviral treatment have been recorded in clinical trials, post-marketing analyses, and anecdotal reports. Such data might not be an up-to-date or comprehensive assessment of all possible treatment combinations defined as potent antiretroviral treatment. Methods: Using a standard clinical and laboratory method, we assessed prevalence of adverse events in 1160 patients who were receiving antiretroviral treatment. We measured the toxic effects associated with the drug regimen (protease inhibitor [PI], non-nucleoside and nucleoside analogue reverse transcriptase inhibitor) and specific compounds using multivariate analyses. Findings: 47% (545 of 1160) of patients presented with clinical and 27% (194 of 712) with laboratory adverse events probably or definitely attributed to antiretroviral treatment. Among these, 9% (47 of 545) and 16% (30 of 194), respectively, were graded as serious or severe. Single-PI and PI-sparing-antiretroviral treatment were associated with a comparable prevalence of adverse events. Compared with single-PI treatment, use of dual-PI-antiretroviral treatment and three-class-antiretroviral treatment was associated with higher prevalence of adverse events (odds ratio [OR] 2.0 [95% CI 1.0-4.0], and 3.9 [1.2-12.9], respectively). Compound-specific associations were identified for zidovudine, lamivudine, stavudine, didanosine, abacavir, ritonavir, saquinavir, indinavir, nelfinavir, efavirenz, and nevirapine. Interpretation: We recorded a high prevalence of toxic effects attributed to antiretroviral treatment for HIV-1. Such data provides a reference for regimen-specific and compound-specific adverse events and could be useful in postmarketing analyses of toxic effects. abstract_id: PUBMED:29465861 A prospective study of adverse events to antiretroviral therapy in HIV-infected adults in Ekiti State, Nigeria. Objectives: Highly active antiretroviral therapy (HAART), the current standard of antiretroviral therapy for Human Immunodeficiency Virus (HIV)-infected persons, has been documented to drastically reduce the number of cases of Acquired Immune Deficiency Syndrome (AIDS). However, adverse events are a challenge to the use of HAART. This study intends to determine the nature and incidence of suspected adverse events to prescribed antiretroviral drugs in treatment centers in Ekiti State. Method: One hundred and twenty participants were enrolled and followed up over a period of six months.
At each clinic visit, a detailed interviewer-administered questionnaire was completed by the attending pharmacist together with the participant. The form was designed to obtain information on the demographics of the patients, WHO clinical stage of their HIV infection, HAART regimen for the patients, and suspected adverse events associated with the antiretroviral drugs used by the patients. Results: Tenofovir/Lamivudine/Efavirenz (72.5%), Zidovudine/Lamivudine/Nevirapine (16.7%), Zidovudine/Lamivudine/Efavirenz (6.7%), Tenofovir/Lamivudine/Nevirapine (3.3%), and Abacavir/Lamivudine/Nevirapine (0.8%) were the HAART regimens prescribed to the patients. About half (57%) of the participants reported clinical adverse events, 92% of which were reported within two weeks of HAART initiation. Most of the reported adverse events were nausea (14.5%), abdominal discomfort (8.2%), and insomnia (7.5%). A few (6%) of those who reported adverse events required regimen switch or drug substitution. Conclusions: Antiretroviral drug exposure often presents with adverse events, an observation similar to other studies. Most of the clinical adverse events were not severe or life-threatening. abstract_id: PUBMED:35945983 Hair Zidovudine Concentrations Predict Virologic Outcomes Among People Living with HIV/AIDS in China. Background: Hair antiretroviral concentrations are an objective and non-invasive measure of adherence to long-term antiretroviral therapy (ART) and can further predict virologic outcomes among people living with HIV/AIDS (PLWH). Zidovudine, one of the mainstream antiretrovirals in China, has been verified to have high reliability in adherence assessment, especially for its hair concentrations. However, data on its ability to predict virologic outcomes are limited. Therefore, this study aimed to characterize whether hair zidovudine concentrations can predict virologic suppression among Chinese PLWH compared with hair lamivudine concentrations and two self-reported measures, the overall frequency of adherence behaviors and percentage adherence. Methods: This cross-sectional study randomly recruited 564 PLWH currently treated with zidovudine, lamivudine, and other ART agents (efavirenz, nevirapine, or lopinavir/ritonavir) in Guangxi, China. Hair antiretroviral concentrations were determined using the LC-ESI+-MS/MS method. Receiver operating characteristic (ROC) curves were used to estimate the optimal classification thresholds of hair concentrations of zidovudine and lamivudine, and the two self-reported measures. Based on those optimal classification thresholds, logistic regression was used to examine whether those four adherence measures can predict virologic suppression (HIV-1 RNA <200 copies/mL). Results: ROC curves demonstrated good classification performance for association with virologic suppression of zidovudine with the optimal threshold at 58 pg/mg and lamivudine at 255 pg/mg but no self-reported measures. PLWH with hair zidovudine concentrations >58 pg/mg had an adjusted odds ratio (aOR) of 43.191 (95% confidence interval (CI) = 10.171‒183.418, p < 0.001) for virologic suppression. Hair lamivudine concentrations were also associated with virologic suppression (aOR = 10.656, 95% CI = 3.670‒30.943, p < 0.001). However, two self-reported measures did not predict virologic suppression (aORs = 1.157 and 2.488, ps > 0.149).
Conclusion: Hair zidovudine concentrations can serve as an alternative tool for clinically predicting virologic suppression among PLWH in China. abstract_id: PUBMED:17898659 Effect of anticoagulants on plasma nevirapine and efavirenz concentrations. Antiretroviral drug measurements as part of therapeutic drug monitoring are performed in several human immunodeficiency virus (HIV) clinics in developed as well as developing countries. Heparin is routinely used as an anticoagulant for plasma drug estimations. EDTA plasma for drug estimations, if found suitable, would have several practical advantages. In this study, plasma nevirapine (NVP) and efavirenz (EFV) levels were compared in paired blood collected in heparin and EDTA Vacutainer tubes (Greiner Bio-one, Austria). The study involved HIV-infected individuals who were receiving antiretroviral treatment at the Government Hospital of Thoracic Medicine, Tambaram, Chennai, India. Their treatment consisted of administration of NVP or EFV, along with lamivudine and stavudine/zidovudine. Blood was collected at the same time in heparin and EDTA Vacutainer tubes. Plasma concentrations of NVP and EFV were estimated, according to validated methods, by high-performance liquid chromatography (HPLC). The mean NVP concentrations in heparin and EDTA plasma were 8.79 and 8.59 microg/mL, respectively. This difference was not statistically significant (P = 0.203). In the case of EFV, the corresponding values were 3.03 and 2.78 microg/mL, respectively; the difference was significant (P < 0.05). However, the percent variation in EFV between both types of plasma was <10%. Blood collected in heparin or EDTA Vacutainer tubes could be used for plasma NVP determination. Answer: Adverse events (AEs) of nevirapine (NVP) and efavirenz (EFV) have been investigated in relation to their plasma concentrations. For nevirapine, no significant associations between AEs and any tested exposure parameter were observed when corrected for known covariates such as gender, CD4 cell count at baseline, region, hepatitis coinfection, and possible interactions between these factors. Consequently, no target minimum plasma concentration (Cmin) value could be established above which patients were at increased risk for AEs. Instead, factors like geographical region, hepatitis coinfection, CD4 cell count, and gender were found to be significantly related to the incidence of CNS and psychiatric AEs, hepatic events, liver enzyme elevations (LEEs), and rash during treatment with NVP (PUBMED:16038474). In contrast, for efavirenz, the occurrence of elevated liver enzymes during the first 6 weeks of treatment was significantly correlated with EFV exposure (Cmin). Patients with an EFV Cmin above 2.18 mg/l during the induction phase were more at risk for elevated liver enzymes. However, no other correlations between AEs and EFV pharmacokinetics or patient characteristics could be identified (PUBMED:16038474). A separate study showed that patients who experienced adverse events related to nevirapine did not necessarily experience the same adverse events when switched to efavirenz, suggesting that nevirapine-related adverse events are not a class-specific effect (PUBMED:11017835). In summary, while there is a significant correlation between efavirenz plasma concentrations and the occurrence of certain AEs, particularly elevated liver enzymes, no such relationship was found for nevirapine when accounting for other covariates.
Monitoring factors predictive of AEs and regular measurement of EFV levels and liver enzymes at the start of therapy may be advised for efavirenz, but the value of periodic drug monitoring of nevirapine as a way to prevent toxicity is limited (PUBMED:16038474).
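PUBMED:22302563 above gives an explicit conversion formula for dried blood spot (DBS) sampling: plasma concentration = [DBS concentration / (1 - haematocrit)] × fraction bound to plasma proteins. A minimal worked example of that formula follows; the DBS value, the haematocrit of 0.40, and the ~99.5% efavirenz protein binding used here are assumptions for illustration, not measurements from the study.

```python
def dbs_to_plasma(dbs_conc, hematocrit, fraction_protein_bound):
    """Formula quoted in PUBMED:22302563:
    plasma = [DBS / (1 - haematocrit)] * fraction bound to plasma proteins."""
    return dbs_conc / (1.0 - hematocrit) * fraction_protein_bound

# Hypothetical efavirenz example (values are assumptions, not study data).
efv_dbs = 1.20  # mg/L measured in the dried blood spot
plasma_estimate = dbs_to_plasma(efv_dbs, hematocrit=0.40, fraction_protein_bound=0.995)
print(f"estimated efavirenz plasma concentration: {plasma_estimate:.2f} mg/L")
# With these assumptions plasma is ~1.66 x DBS, consistent with the ~39.8% lower
# DBS concentrations reported for efavirenz in the abstract.
```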
Instruction: Can targeted intervention in CRC patients' relatives influence screening behaviour? Abstracts: abstract_id: PUBMED:17459064 Can targeted intervention in CRC patients' relatives influence screening behaviour? A pilot study. Objective: This study aimed to assess the utility of a standardised risk information tool with respect to the uptake of screening activities administered to an accessible population of first-degree relatives of patients with sporadic colorectal cancer. Method: Patients admitted for colorectal cancer resection were invited to enroll their family unit in the study. Families were randomised either to receive standard care or the intervention tool. The intervention group received a structured one page pamphlet outlining the risk associated with a family history of colorectal cancer, and the availability of and potential benefits from screening. Three months after the initial contact with the index patient, family members were invited to participate in a telephone interview. Primary end-points were both intention to screen and uptake of screening activities. Results: Forty-seven families had 156 eligible first degree relatives. Ninety-one consented to participate and were enrolled: 59 received standard care, 32 received the intervention. Age and sex were similar between groups. There was no significant difference in previous screening activities. The intervention tool had no influence on perceived self-risk of developing colorectal cancer or uptake of screening activities within the study period. Conclusion: This study suggests that the provision of targeted risk information to first-degree relatives is not likely to positively influence screening behaviour. Health care providers need to find alternative methods of disseminating information to this high-risk group. abstract_id: PUBMED:29998024 Uncovering the barriers to undergoing screening among first degree relatives of colorectal cancer patients: a review of qualitative literature. First degree relatives (FDRs) of colorectal cancer (CRC) patients are at higher risks of developing the disease, but screening rates amongst this group remains dismal. We undertook a systematic review of qualitative studies to identify the barriers surrounding CRC screening among FDRs from both the FDRs' and the healthcare professionals' perspectives. A comprehensive search of major bibliographic databases from January 2000 till February 2017 was performed to answer the above research questions. Pre-determined inclusion and exclusion criteria were applied. Thematic analysis was used to derive the commonalities across the studies. COREQ checklist was used to evaluate the quality of the included studies. Eight qualitative studies were included. Some of the barriers reported by FDRs included the fear of diagnosis of cancer and socio-cultural barriers. The most important barrier was the FDRs' negative perceptions towards screening test such as discomfort, embarrassment, cost of procedure and accessibility to healthcare resources. Likewise, the lack of awareness amongst FDRs that they are at increased risk of developing CRC was also found to be a barrier. On the other hand, healthcare providers are keen for patients themselves to be advocates for CRC screening of their family members as constraints posed by their daily workload impede their time to advocate screening. Lack of knowledge of the physicians on CRC screening guidelines is another notable barrier. 
A lack of awareness of being at higher risk of developing CRC coupled with negative attitude towards colonoscopy are the main barriers faced by the FDRs of CRC patients. Healthcare providers are more comfortable with patients being the advocates of screening among their family members. abstract_id: PUBMED:27625308 Can a print-based intervention increase screening for first degree relatives of people with colorectal cancer? A randomised controlled trial. Objective: To test the effectiveness of a targeted print-based intervention to improve screening adherence in first degree relatives of people with colorectal cancer (CRC). Methods: People with CRC and their adult first degree relatives were identified through a population-based cancer registry and randomly allocated as a family unit to the intervention or control condition. The control group received general information about CRC screening. The intervention group received printed advice regarding screening that was targeted to their risk level. Screening adherence was assessed at baseline and at 12 months via self report. Results: 752 (25%) index cases and 574 (34%) eligible first degree relatives consented to take part in the trial and completed baseline interviews. At 12 months, 58% of first degree relatives in the control group and 61% in the intervention group were adherent to screening guidelines (mixed effects logistic regression group by time interaction effect =2.7; 95%CI=1.2-5.9; P=0.013). Subgroup analysis indicated that the intervention was only effective for those with the lowest risk. Conclusions: Provision of personalised risk information may have a modest effect on adherence to CRC screening recommendations among first degree relatives of people diagnosed with CRC. Implications: Improved strategies for identifying and engaging first degree relatives are needed to maximise the population impact of the intervention. abstract_id: PUBMED:25946376 Randomized trial to increase colorectal cancer screening in an ethnically diverse sample of first-degree relatives. Background: Ethnic minorities, especially African Americans and Latinos, bear a disproportionate burden of colorectal cancer (CRC), as reflected in incidence, cancer stage, and mortality statistics. In all ethnic groups, first-degree relatives (FDRs) of CRC cases are at an elevated disease risk. However, underuse of CRC screening persists and is particularly evident among minority groups. The current study tested a stepped intervention to increase CRC screening among an ethnically diverse sample of FDRs of CRC cases. Methods: A statewide cancer registry was used to recruit CRC cases and through them their FDRs. Relatives who were not current on CRC screening were randomized to intervention or usual-care control arms. The stepped intervention consisted of ethnically targeted and individually tailored print materials followed by telephone counseling for those unscreened at 6 months. Results: The study sample of 1280 individuals consisted of 403 Latino, 284 African American, 242 Asian, and 351 white FDRs. Statistically significant effects were observed for the cumulative print plus telephone intervention at 12 months (26% in the intervention vs 18% in the control group) and the print intervention alone at 6 months (15% in the intervention vs 10% in the control group). The effect of the print intervention alone versus the cumulative interventions was not statistically significantly different. 
Stratified analyses indicated that the intervention was effective among white, Latino, and Asian individuals, but not among African-Americans. Conclusions: Overall, the intervention was effective in increasing screening rates. Oversampling racial/ethnic minorities allowed for the examination of effects within subgroups, revealing no effect among African American individuals. This finding illustrates the importance of including sufficient numbers of participants from diverse ethnic subgroups in intervention research to enable such stratified analyses. abstract_id: PUBMED:31528846 Community-Clinical Linkage Intervention to Improve Colorectal Cancer Screening Among Underserved Korean Americans. Korean Americans report the lowest and declined rates of colorectal cancer (CRC) screening, compared to general population in the United States. The present study aimed to evaluate the efficacy of a community-based multifaceted intervention designed to improve CRC screening among Korean Americans. A cluster-randomized trial involving 30 Korean church-based community organizations (n = 925) was conducted. Fifteen churches were assigned to intervention (n=470) and the other 15 to control (n = 455) groups. Main components of the intervention included interactive group education, patient navigation, physician engagement, and provision of fecal immunochemical test (FIT) kit. CRC screening rates were assessed at a 12-month follow-up. Participants in the intervention group were significantly more likely to receive CRC screening (69.3%) as compared with those in the control group (16%). The intervention was particularly effective in promoting FIT among the more disadvantaged individuals in the Korean American community. Regression analysis revealed that controlling for the intervention effect, male gender, high school education, annual income of $20,000-40,000 were significantly associated with increased screening by FIT, whereas English inefficiency was significantly and lack of health insurance was marginally significantly associated with decreased screening by colonoscopy/sigmoidoscopy. Culturally and linguistically appropriate multifaceted intervention combining FIT provision with community-clinical linkage has a potential to be a cost-effective and practical approach to effectively targeting hard-to-reach disadvantaged minority populations and enhance CRC screening to reduce cancer disparities. abstract_id: PUBMED:27921050 Esophagus-Related Symptoms in First-Degree Relatives of Patients with Achalasia: Is Screening Necessary? Background: Despite an increasing understanding of the pathophysiology of achalasia, the etiology of this esophageal motility disorder remains largely unknown. However, the occurrence of familial achalasia, its association with well-defined genetic syndromes, the candidate gene approach, and recent presentation of the first systematic genome-wide association study on achalasia suggest the involvement of genetic factors. Methods: In this study we analyzed the frequency with which symptoms associated with esophageal function (swallowing difficulties, regurgitations, retrosternal cramps/pain, heartburn) occur in first-degree relatives of patients with achalasia to determine if screening is useful and justified against the background of early diagnosis in a genetically predisposed population. The survey of data was carried out in 759 relatives of the 359 achalasia patients included in this study by means of structured interviews. 
Results: Swallowing difficulties as the principal symptom of achalasia were found to occur at least occasionally in 11.2% of first-degree relatives. In comparison with the prevalence of dysphagia in the general population of 7-10% (up to 22%), as described in the literature, the frequency of swallowing difficulties does not seem to be increased in our population of relatives. Conclusion: Screening measures do not appear to be justified in spite of the potential genetic background of achalasia. abstract_id: PUBMED:37128484 Predictive factors based on the health belief model on cancer screening behaviour in first degree relatives of patients with Lynch syndrome-associated colorectal cancer. Objectives: This study aimed to investigate colorectal cancer-related knowledge, health beliefs, and screening behaviour in first-degree relatives (FDRs) of patients with Lynch syndrome-associated colorectal cancer (CRC) and explore the predictive factors of screening behaviour based on a health belief model. Methods: This cross-sectional study was conducted in the colorectal department of a Class A tertiary hospital in Guangzhou from December 2017 to December 2019. A total of 265 FDRs of 96 patients with Lynch syndrome-related CRC were selected. The study was conducted in the colorectal department of a tertiary cancer centre in Guangzhou. The demographic questionnaire, the simplified CRC knowledge questionnaire, and the Champion's Health Belief Model Scale were used for evaluation. Data were analyzed using statistical description, between-group comparisons, and binary logistic regression. Results: A total of 160 (60.4%), 61 (23.0%), and 44 (16.6%) of the participants had high, medium, and low levels of knowledge about CRC, respectively; the average overall score of health belief was 121.36 ± 13.02. Sixty-one participants (23.0%) underwent Lynch syndrome-associated cancer screening. The predictive factors of screening behaviour included sex (male), age (older), marital status (married), multiple primary cancers of the index patients, and high levels of knowledge and health beliefs (P < 0.05). Conclusions: The knowledge and health beliefs of cancer and cancer screening in FDRs of patients with Lynch syndrome-associated CRC should be improved. Both knowledge and beliefs are critical in promoting their cancer screening behaviour. Interventions should focus on health education and enhance health beliefs of the FDRs for better screening behaviour. abstract_id: PUBMED:38019247 Colorectal Cancer (CRC) Screening Literacy and Facilitators Related to a Free Screening Program in Thailand. Objective: This study aimed to evaluate CRC screening literacy and to determine the association between demographic characteristics and levels of CRC screening literacy and facilitators motivating participation. Methods: This cross-sectional study was carried out with 1,272 Thai citizens aged 50-70 years consenting to complete the screening protocol. Data were collected at the final visit by a self-reported questionnaire which was developed and validated using I-CVI=1. The difficulty of the CRC screening knowledge dimension was tested by KR20 (Cronbach's alpha 0.49). Results: The majority of participants, 834 (65.6%), had sufficient CRC screening literacy, and the association between characteristics and levels of CRC screening literacy did not differ. Comparison between characteristics and facilitators showed significant differences in trusting healthcare providers, perceiving harmless procedures, having constipation and time available.
Married participants and those with their own income exhibited lower trust in healthcare providers, by 68% (OR 0.32, 95%CI 0.14 - 0.73) and 59% (OR 0.41, 95%CI 0.19 - 0.95), respectively. Married individuals were also more likely to perceive the screening procedures as harmful, by 32% (OR 0.68, 95%CI 0.45 - 0.99). Female participants and those aged over 60 with constipation had higher screening uptake, by 1.9-fold (OR 1.93, 95%CI 1.44 - 2.60) and 1.4-fold (OR 1.44, 95%CI 1.09 - 1.90), respectively. Although time was available, participants aged over 60 had 56% lower screening uptake (OR 0.44, 95%CI 0.24 - 0.80), while participants with their own income and those with an income of more than 30,000 THB/month had higher screening uptake, by 15-fold (OR 15.14, 95%CI 4.72 - 48.56) and 2.5-fold (OR 2.50, 95%CI 1.50 - 4.18), respectively. Conclusion: CRC screening literacy of participants was high. CRC screening literacy levels did not differ across participant characteristics, while facilitators did differ in some aspects. The free program promoted CRC screening equity; however, multilevel facilitators must be considered, prioritized and addressed to motivate participation. abstract_id: PUBMED:26997821 Screening First Degree Relatives of Persons with Primary Open Angle Glaucoma in India. Purpose: To report the results of screening first degree relatives of persons identified with primary open angle glaucoma in a tertiary eye hospital's glaucoma services. Design: A cross-sectional study of first degree relatives of persons with primary open angle glaucoma. Materials And Methods: First degree relatives of patients identified with primary open angle glaucoma were invited to participate in a screening evaluation in the base hospital to detect glaucoma. All participating individuals had a comprehensive eye examination including vision screening, refraction, slit-lamp biomicroscopy, applanation tonometry, gonioscopy, frequency doubling perimetry and dilated fundus examination. Persons with definite and suspected glaucoma were subject to full threshold automated perimetry. Results: A total of 514 first degree relatives of 346 persons with primary open angle glaucoma, out of 4972 individuals who were invited to participate, attended the screening examination (response rate 7%). Fifty-five percent of those who attended were males and the mean age of participants was 56.8 years. Sixty-eight relatives (13.3% of those screened) were detected to have definite glaucoma. Sixty percent of those detected with definite glaucoma were siblings. Fifteen percent of siblings, 4% of offspring and 20% of parents who attended the screening examination had definite open angle glaucoma. Conclusion: Prevalence of open angle glaucoma amongst first degree relatives of persons with glaucoma is higher than in the general population as reported in previous studies. Significant barriers, however, exist in the uptake of eye care services among relatives of persons known to have primary open angle glaucoma. How to cite this article: Rajendrababu S, Gupta N, Vijayakumar B, Kumaragurupari R, Krishnadas SR. Screening First Degree Relatives of Persons with Primary Open Angle Glaucoma in India. J Curr Glaucoma Pract 2014;8(3):107-112. abstract_id: PUBMED:30861097 Telephone versus in-person colorectal cancer risk and screening intervention for first-degree relatives: A randomized controlled trial. Background: Having a first-degree relative (FDR) with colorectal cancer (CRC) is a significant risk factor for CRC. Counseling for FDRs regarding CRC risk factors and personalized risk is important to improve knowledge and screening compliance.
Methods: A 3-arm randomized controlled trial compared tailored in-person and telephone CRC counseling interventions with controls among FDRs who were not mutation carriers for known hereditary cancer syndromes, but who were considered to be at an increased risk based on family history. It was hypothesized that both telephone and in-person approaches would increase CRC knowledge, screening adherence, perceived risk accuracy, and psychosocial functioning compared with controls. The authors anticipated greater satisfaction with the in-person approach. CRC knowledge, risk perception, psychosocial functioning, and intention to screen were assessed at baseline and at 2-week and 2-month follow-ups (primary endpoint). Results: A total of 278 FDRs (mean age, 47.4 years, standard deviation, 11.38 years) participated. At baseline, participants reported low to moderate CRC knowledge and overestimations of risk. Screening adherence was 73.7%. At 2 months, participants in the in-person arm and telephone arm demonstrated improvements in knowledge and perceived risk and were not found to be statistically different from each other. However, when comparing each intervention with controls, knowledge in the in-person arm was found to be statistically significantly higher, but the difference between the telephone and control arms was not. Cancer-related stress reduced over time in all groups. Intervention benefits were maintained at 1 year. Baseline screening intent/adherence were high, and therefore did not reach statistically significant improvement. Conclusions: Tailored in-person or telephone formats for providing CRC risk counseling, incorporating behavioral interventions, appear to improve knowledge and risk perceptions, with high client satisfaction. Answer: Targeted interventions in colorectal cancer (CRC) patients' relatives have shown mixed results in influencing screening behavior. A pilot study found that providing targeted risk information to first-degree relatives did not significantly influence their perceived self-risk of developing CRC or uptake of screening activities within the study period, suggesting that alternative methods of disseminating information to this high-risk group are needed (PUBMED:17459064). However, another study demonstrated that a targeted print-based intervention could modestly improve screening adherence in first-degree relatives of people with CRC, particularly for those with the lowest risk (PUBMED:27625308). A randomized trial that included ethnically diverse samples of first-degree relatives showed that a stepped intervention consisting of ethnically targeted and individually tailored print materials followed by telephone counseling increased CRC screening rates, although the intervention was not effective among African American individuals (PUBMED:25946376). Similarly, a community-clinical linkage intervention significantly improved CRC screening among underserved Korean Americans, particularly when combining fecal immunochemical test (FIT) provision with community-clinical linkage (PUBMED:31528846). In the context of Lynch syndrome-associated CRC, a study found that higher levels of knowledge and health beliefs were predictive factors of screening behavior among first-degree relatives, indicating the importance of improving knowledge and health beliefs to promote screening (PUBMED:37128484). 
Another study in Thailand showed that a free screening program promoted CRC screening equity, but multilevel facilitators must be considered to motivate participation (PUBMED:38019247). Overall, while some studies suggest that targeted interventions can influence screening behavior in CRC patients' relatives, the effectiveness may vary based on the type of intervention, the population targeted, and the specific barriers to screening that are addressed. It is clear that interventions need to be carefully designed and may need to be tailored to the specific needs and characteristics of the target population to be effective (PUBMED:29998024).
Instruction: Influence of angiotensin-converting enzyme inhibitors on hypotension after anesthetic induction: is the preoperative discontinuation of this drug necessary? Abstracts: abstract_id: PUBMED:20011860 Influence of angiotensin-converting enzyme inhibitors on hypotension after anesthetic induction: is the preoperative discontinuation of this drug necessary? Background And Objectives: The discontinuation of drugs such as angiotensin-converting enzyme inhibitors (ACE inhibitors) has been suggested based on reports of hypotension during anesthesia. This may imply a higher risk of intraoperative hypertensive peaks with deleterious consequences for the patient. The objective of the present study was to evaluate the influence of the preoperative use of ACE inhibitors on the development of hypotension during anesthesia. Methods: This is a case-control study of patients who developed hypotension after anesthetic induction. The control group was composed of patients of the same age and gender who underwent surgeries during the same period and who did not develop hypotension. Parameters of interest included: age, gender, size of the surgery, prior diagnosis of hypertension, use of ACE inhibitors, physical status (ASA), intraoperative bleeding, anesthetic technique, and duration of the surgery. Results: In 2,179 surgeries, 40 patients developed hypotension. Twenty of those patients used ACEIs on the day of the surgery. The control group was composed of 171 patients, 11 of whom used ACE inhibitors. Univariate analysis showed an association between hypotension and advanced age (p < 0.001), prior diagnosis of hypertension (p < 0.001), use of ACEIs (p = 0.006), physical status (ASA) (p = 0.006), and size of the surgery (p = 0.028). Stratified analysis did not show an association between hypotension and use of ACE inhibitors. On logistic regression, the association between the size of the surgery and the age of the patient and the development of hypotension remained. Conclusions: We did not observe an association between the use of ACE inhibitors and hypotension during anesthesia. The authors considered that the recommendation to discontinue ACE inhibitors on the day before the surgery should be reviewed. Other studies are necessary to confirm those results. abstract_id: PUBMED:36960855 Discontinuation of Renin-Angiotensin System Inhibitors During the Early Stage of the COVID-19 Pandemic. Background: In March and April 2020, medical societies published statements recommending continued use of renin-angiotensin system (RAS) inhibitors despite theoretical concerns that these medications could increase COVID-19 severity. Determining if patients discontinued RAS inhibitors during the COVID-19 pandemic could inform responses to future public health emergencies. Methods: We analyzed claims data from US adults with health insurance in the Marketscan database. We identified patients who filled a RAS inhibitor and were persistent, defined by not having a ≥30-day gap without medication available, and high adherence, defined by having medication available on ≥80% of days, from March 2019 to February 2020. Among these patients, we estimated the proportion who discontinued their RAS inhibitor (i.e., had ≥30 consecutive days without a RAS inhibitor available to take) between March and August 2020. For comparison, we estimated the proportion of patients that discontinued a RAS inhibitor between March and August 2019 after being persistent with high adherence from March 2018 to February 2019.
Results: Among 816,380 adults who were persistent and adherent to a RAS inhibitor from March 2019 to February 2020, 10.8% discontinued this medication between March and August 2020. Among 822,873 adults who were persistent and adherent to a RAS inhibitor from March 2018 to February 2019, 11.7% discontinued this medication between March and August 2019. The multivariable-adjusted relative risk for RAS inhibitor discontinuation in 2020 vs. 2019 was 0.94 (95% CI 0.93-0.95). Conclusions: There was no evidence of an increase in RAS inhibitor discontinuation during the early stage of the COVID-19 pandemic. abstract_id: PUBMED:37231791 Discontinuation of Renin-Angiotensin System Inhibitors and Clinical Outcomes in Chronic Kidney Disease: A Systemic Review and Meta-Analysis. Background: Discontinuation of renin-angiotensin system (RAS) inhibitors is common in patients with chronic kidney disease (CKD), and the potential danger has been reported in several studies. However, a comprehensive analysis has not been conducted. Objectives: This study sought to evaluate the effects of discontinuation of RAS inhibitors in CKD. Method: Relevant studies up to November 30, 2022, were identified in the PubMed, Embase, Web of Science, and Cochrane Library databases. Efficacy outcomes included the composite of all-cause mortality, cardiovascular events, and end-stage kidney disease (ESKD). Results were combined using a random-effects or fixed-effects model, and sensitivity analysis used the leave-one-out method. Results: Six observational studies and one randomized clinical trial including 244,979 patients met the inclusion criteria. Pooled data demonstrated that discontinuation of RAS inhibitors was associated with an increased risk of all-cause mortality (HR 1.42, 95% CI 1.23-1.63), cardiovascular event risk (HR 1.25, 95% CI 1.17-1.22), and ESKD (HR 1.23, 95% CI 1.02-1.49). In sensitivity analyses, the risk for ESKD was reduced. Subgroup analysis showed that the risk of mortality was more pronounced in patients with eGFR above 30 mL/min/m2 and in patients with hyperkalemia-related discontinuation. In contrast, patients with eGFR below 30 mL/min/m2 were at great risk of cardiovascular events. Conclusions: The discontinuation of RAS inhibitors in patients with CKD was associated with a significantly increased risk of all-cause mortality and cardiovascular events. These data suggest that RAS inhibitors should be continued in CKD if the clinical situation allows. abstract_id: PUBMED:18327094 Discontinuation of and changes in drug therapy for hypertension among newly-treated patients: a population-based study in Italy. Objectives: To assess rates and determinants of treatment discontinuation of or changes in initial antihypertensive drug therapy in a large cohort of patients from Lombardia (Italy). Methods: The cohort included 445356 patients aged 40 -80 years who received their first antihypertensive drug prescription (monotherapy) during 1999--2002. Discontinuation was defined by the absence of any antihypertensive prescription during a 90-day period following the end of the latest prescription. If during the same period a drug of a different class was added or replaced the initial prescription, treatment modification was regarded as combination or switching, respectively. Competing risks methodology was used to estimate and compare cause-specific cumulative incidence. 
Results: Cumulative incidences of discontinuation, combination and switching were respectively 33, 14 and 15% at 6 months, 41, 18 and 17% at 1 year, and 50, 25 and 19% at 5 years since initial treatment. Compared with patients starting treatment with angiotensin-converting enzyme inhibitors, the rate of discontinuation was less for patients on angiotensin receptor blockers with a hazard ratio of 0.92 (95% confidence interval =0.90-0.94), whereas increased discontinuation was observed for patients starting with other drugs, mainly beta-blockers with a hazard ratio of 1.64 (1.62-1.67); and diuretics with a hazard ratio of 1.83 (1.81-1.85). Conclusion: In the general population of Lombardia, discontinuation of the initial single antihypertensive drug treatment is a common phenomenon, whereas switching to another monotherapy and to combination treatment occur at similarly much lower rates. Blockers of the renin-angiotensin system are associated with the lowest incidence of treatment discontinuation. abstract_id: PUBMED:29952638 Anesthetic Management of the Hypertensive Patient: Part I. Hypertension is an important health challenge that affects millions of people across the world and is a major risk factor for cardiovascular disease. It is critical that anesthesia providers have a working knowledge of the systemic implications of hypertension. This review article will discuss the medical definitions of hypertension, the physiology of maintaining blood pressure, outpatient treatment of hypertension, anesthetic implications, and the common medications used by anesthesia providers in the treatment of hypertension. Part I will provide an overview of hypertension and blood pressure regulation. In addition, drugs affecting predominantly renal control of hypertension, such as diuretics, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, and renin-inhibiting agents, will be discussed. In part II, the remaining major antihypertensive medications will be reviewed as well as anesthetic implications of managing patients with hypertension. abstract_id: PUBMED:11944760 Impact of reference-based pricing for angiotensin-converting enzyme inhibitors on drug utilization. Background: Increasing copayments for higher-priced prescription medications has been suggested as a means to help finance drug coverage for elderly patients, but evaluations of the impact of such policies are rare. The objective of this study was to analyze the effect of reference-based pricing of angiotensin-converting enzyme (ACE) inhibitors on drug utilization, cost savings and potential substitution with other medication classes. Methods: We analyzed 36 months of claims data from British Columbia for 2 years before and 1 year after implementation of reference-based pricing (in January 1997). The 119,074 patients were community-living Pharmacare beneficiaries 65 years of age or older who used ACE inhibitors during the study period. The main outcomes were changes over time in use of ACE inhibitors, use of antihypertensive drugs and expenditures for antihypertensive drugs, as well as predictors of medication switching related to reference-based pricing. Results: We observed a sharp decline (29%) in the use of higher-priced cost-shared ACE inhibitors immediately after implementation of the policy (p &lt; 0.001). After a transition period, the post-implementation utilization rate for all ACE inhibitors was 11% lower than projected from pre-implementation data. 
However, overall utilization of antihypertensives was unchanged (p = 0.40). The policy saved $6.7 million in pharmaceutical expenditures during its first 12 months. Patients with heart failure or diabetes mellitus who were taking a cost-shared ACE inhibitor were more likely to remain on the same medication after implementation of reference-based pricing (OR 1.12 [95% confidence interval, CI, 1.06-1.19] and 1.28 [95% CI 1.20-1.36] respectively). Patients with low-income status were more likely than those with high-income status to stop all antihypertensive therapy (OR 1.65 [95% CI 1.43-1.89]), which reflects a general trend toward discontinuation of therapy among these patients even before implementation of reference-based pricing. Interpretation: Reference-based pricing in British Columbia achieved a sustained reduction in drug expenditures, and no changes in overall use of antihypertensive therapy were observed. Further research is needed on the overall health and economic effects of such policies. abstract_id: PUBMED:2674438 Drug interactions with ACE inhibitors. Drug interactions common to all angiotensin-converting enzyme (ACE) inhibitors include those with thiazide diuretics and other antihypertensive agents. Interactions involving specific ACE inhibitors include captopril-digoxin, resulting in decreased clearance of digoxin from plasma in patients with heart failure, and captopril-probenecid, causing a decrease in captopril clearance. Tissue kinins, such as bradykinin, are metabolised by ACE inhibitors. Interactions involving bradykinin include captopril-indomethacin, in which an attenuation of the antihypertensive effects of captopril is manifest. Interestingly, neither enalapril nor lisinopril appear to show this interaction with indomethacin. Kinin-based interactions may also be important in the genesis of ACE inhibitor-induced cough and skin rash. Renal dysfunction affects the pharmacokinetics and pharmacodynamics of all ACE inhibitors, necessitating dosage reduction. Hepatic impairment is of less clinical importance, causing a delay in the onset of action of enalapril with initial doses, but probably having little relevance to long-term therapy. abstract_id: PUBMED:22023456 Discontinuation of angiotensin-converting enzyme inhibitors: a cohort study. What Is Known And Objective: Angiotensin-converting enzyme inhibitors (ACEI) are among the most commonly used antihypertensive agents worldwide. Factors associated with their discontinuation in clinical practice are not well defined as studies undertaken in different ethnic populations have yielded conflicting result. We aimed to identify predictors of ACEI discontinuation among Chinese patients. Methods: We used a validated clinical database for this cohort study. We included all adult patients aged ≥ 18 years who visited any primary care clinic in one Territory of Hong Kong and who were prescribed an ACEI from January 2004 to June 2007. The cumulative incidence of discontinuation 180 days after prescription of an ACEI was measured. Factors associated with discontinuation were evaluated by multiple regression analyses. Results: Among 9398 eligible patients, 14·0% discontinued their prescriptions. 
After controlling for patient's age, gender, socio-economic status, service type, district of residence, visit type and number of comorbidities, patients aged 70 or above [adjusted odds ratio (AOR) = 1·27, 95% CI 1·04-1·56, P = 0·022], male subjects (AOR = 1·21, 95% CI 1·05-1·40, P = 0·008), new visitors (AOR = 0·64, 95% CI 0·55-0·75, P &lt; 0·001), patients who obtained their prescriptions in Family Medicine Specialist Clinic (FMSC) (AOR = 1·43, 95% CI 1·14-1·79, P = 0·002), patients who lived in the less urbanized district (AOR = 1·96, 95% CI 1·55-2·48, P &lt; 0·001) and the most rural district (AOR = 1·24, 95% CI 1·03-1·49, P = 0·027), and patients with no comorbidity, were more likely to discontinue their medications. What Is New And Conclusion: Physicians should pay attention to adherence problems particularly when prescribing ACEI to male patients, those who are older than 70 years, have no comorbidity, live in less urbanized or more rural areas, qualify for fee-waiver, are new attendees of consultations or obtain their ACEI prescriptions in FMSC. Future research should evaluate the reasons for ACEI discontinuation among these higher-risk groups. abstract_id: PUBMED:16685222 Discontinuation of antihypertensive drugs among newly diagnosed hypertensive patients in UK general practice. Objectives: To evaluate antihypertensive drug discontinuation among newly diagnosed hypertensive patients. Methods: This was a population-based cohort study using the UK General Practice Research Database (GPRD). Patients newly diagnosed with hypertension between 1991 and 2001 and subsequently treated with antihypertensive drugs were included. Overall antihypertensive drug discontinuation was evaluated from a patient's first-ever antihypertensive prescription. Class-specific discontinuations were evaluated from a patient's first-ever prescriptions of angiotensin-converting enzyme (ACE) inhibitors (ACE-I), alpha antagonists, angiotensin-2 antagonists (AIIA), beta blockers, calcium-channel blockers (CCB), miscellaneous, potassium-sparing diuretics, and thiazides. Discontinuation occurred when no antihypertensive prescription was issued within 90 days following the most recent prescription expiration. Results: The study population comprised 109 454 patients, with 223 228 antihypertensive drug-class episodes contributing to the class-specific analysis. Overall antihypertensive drug discontinuation was 20.3% [95% confidence interval (CI): 20.0, 20.5%] at 6 months and 28.5% (95% CI: 28.2, 28.7%) at 1 year, with a median time to discontinuation of 3.07 years. The median time to antihypertensive class discontinuation was longest for AIIAs (2.90 years) followed by ACE-I (2.24), CCB (1.86), beta blockers (1.50), thiazides (1.50), alpha antagonists (1.35), potassium-sparing diuretics (0.40), and miscellaneous (0.39). One-year discontinuation ranged from 29.4% (95% CI: 28.0, 30.7) for AIIAs to 64.1% (95% CI: 62.1, 66.3) for potassium-sparing diuretics. Forty-four percent who discontinue their first-ever antihypertensive drug class failed to switch to a different drug class within 90 days of discontinuation. Conclusion: It is important that general practitioners (GPs) monitor patients closely in the first year following antihypertensive drug initiation, due to the high early risk of discontinuation, and the low percentage of patients who switch to a different antihypertensive drug class after a drug-class discontinuation. 
AIIA, followed by ACE-I and CCB, had the lowest risk of discontinuation among antihypertensive drug classes. abstract_id: PUBMED:35618096 Use of renin angiotensin aldosterone system inhibitors in children with lupus and time to glucocorticoid discontinuation. There is little data to inform use of renin angiotensin aldosterone system (RAAS) inhibitors in pediatric patients with systemic lupus erythematosus (SLE). Here, we sought to characterize RAAS inhibitor use in pediatric SLE and determine whether early RAAS inhibitor initiation among children with incident lupus nephritis is associated with decreased duration of chronic glucocorticoid exposure. A retrospective cohort study was performed of children (ages 5-18) with SLE and/or lupus nephritis in the Truven MarketScan™ Medicaid and Commercial databases (2013-2018) and estimated RAAS inhibitor use. Among incident nephritis cases, we used competing risk hazard models with inverse probability of treatment weighting to estimate the association between RAAS inhibitor initiation less than 180 days after diagnosis and time to glucocorticoid discontinuation with kidney failure as a competing event. Among 592 children with nephritis and 1407 children with non-kidney SLE, 67% and 15% ever received RAAS inhibitors, respectively. Median duration of RAAS inhibitor use among 323 incident users was 14 and 9 months in children with and without nephritis, respectively. Medicaid enrollment was independently associated with greater likelihood of RAAS inhibitor use, irrespective of nephritis. Among 158 incident nephritis cases, early RAAS inhibitor initiation was significantly associated with a faster rate of glucocorticoid discontinuation (adjusted sub-distribution hazard ratio 1.81, 95% confidence interval [1.09 - 3.00]). Thus, early initiation of RAAS inhibitors may have a role in children newly diagnosed with lupus nephritis; not only those with refractory proteinuria after induction therapy. Hence, integrated health systems data could be leveraged to confirm these findings and optimize adjunctive therapies in pediatric lupus. Answer: The influence of angiotensin-converting enzyme (ACE) inhibitors on hypotension after anesthetic induction has been a topic of discussion, with some suggesting the preoperative discontinuation of these drugs to avoid intraoperative hypotension. However, a case-controlled study (PUBMED:20011860) did not observe an association between the use of ACE inhibitors and hypotension during anesthesia. The study compared patients who developed hypotension after anesthetic induction with a control group that did not develop hypotension, considering factors such as age, gender, surgery size, prior diagnosis of hypertension, use of ACE inhibitors, physical status (ASA), intraoperative bleeding, anesthetic technique, and surgery duration. The results indicated that the size of the surgery and the age of the patient were associated with the development of hypotension, but the use of ACE inhibitors was not. Consequently, the authors suggested that the recommendation to discontinue ACE inhibitors before surgery should be reviewed, although they also acknowledged that further studies are necessary to confirm these results. In summary, based on the study referenced (PUBMED:20011860), preoperative discontinuation of ACE inhibitors may not be necessary solely for the purpose of preventing hypotension after anesthetic induction. 
However, clinical decisions should always be made on a case-by-case basis, considering the individual patient's health status and the type of surgery being performed.
Instruction: Does continuous use of metformin throughout pregnancy improve pregnancy outcomes in women with polycystic ovarian syndrome? Abstracts: abstract_id: PUBMED:18834342 Does continuous use of metformin throughout pregnancy improve pregnancy outcomes in women with polycystic ovarian syndrome? Aim: Polycystic ovarian syndrome (PCOS) is one of the most common endocrinopathies in women of reproductive age. It is associated with hyperinsulinemia and insulin resistance which is further aggravated during pregnancy. This mechanism has a pivotal role in the development of various complications during pregnancy. In the past few years, metformin, an insulin sensitizer, has been extensively evaluated for induction of ovulation. Its therapeutic use during pregnancy is, however, a recent strategy and is a debatable issue. At present, evidence is inadequate to support the long-term use of insulin-sensitizing agents during pregnancy. It is a challenge for both clinicians and researchers to provide good evidence of the safety of metformin for long-term use and during pregnancy. This study aimed to evaluate pregnancy outcomes in women with PCOS who conceived while on metformin treatment, and continued the medication for a variable length of time during pregnancy. Methods: This case-control study was conducted from January 2005 to December 2006 at the antenatal clinics of the Department of Obstetrics and Gynecology, Aga Khan University, Karachi, Pakistan. The sample included 137 infertile women with PCOS; of these, 105 conceived while taking metformin (cases), while 32 conceived spontaneously without metformin (controls). Outcomes were measured in three groups of cases which were formed according to the duration of use of metformin during pregnancy. Comparison was made between these groups and women with PCOS who conceived spontaneously. Results: All 137 women in this study had a confirmed diagnosis of PCOS (Rotterdam criteria). These women were followed up during their course of pregnancy; data forms were completed once they had delivered. Cases were divided into three groups: group A, 40 women who stopped metformin between 4-16 weeks of pregnancy; group B, 20 women who received metformin up until 32 weeks of gestation; and group C, 45 women who continued metformin throughout pregnancy. All the groups were matched by age, height and weight. Comparison was in terms of early and late pregnancy complications, intrauterine growth restriction and live birth rates. In groups A, B and C the rate of pregnancy-induced hypertension/pre-eclampsia was 43.7%, 33% and 13.9% respectively (P<0.020). Rates of gestational diabetes requiring insulin treatment in groups A and B were 18.7% and 33.3% compared to 2.5% in group C (P<0.004). The rate of intrauterine growth restriction was significantly low in group C: 2.5% compared to 19.2% and 16.6% in groups A and B respectively (P<0.046). Frequency of preterm labor and live birth rate was significantly better in group C compared to groups A and B. Overall rate of miscarriages was 7.8%. Controls were comparable to group A in terms of early and late pregnancy complications. Conclusion: In women with PCOS, continuous use of metformin during pregnancy significantly reduced the rate of miscarriage, gestational diabetes requiring insulin treatment and fetal growth restriction. No congenital anomaly, intrauterine death or stillbirth was reported in this study.
abstract_id: PUBMED:23205605 Effects of metformin-diet intervention before and throughout pregnancy on obstetric and neonatal outcomes in patients with polycystic ovary syndrome. Objective: Prospectively assess whether metformin/diet pre-conception and throughout pregnancy would safely reduce first trimester miscarriage and improve pregnancy outcomes in women with polycystic ovary syndrome (PCOS). Research Design And Methods: In 76 PCOS women, first pregnancy miscarriage and live birth were compared before and on metformin/diet, started 6.8 months (median) before conception, continued throughout pregnancy. On metformin 2-2.55 g/day, low glycemic index diet, first pregnancy outcomes in PCOS were compared with 156 community obstetric practice women (controls). Main Outcome Measures: Live births, miscarriage, birth <37 weeks gestation, gestational diabetes, pre-eclampsia, fetal macrosomia. Results: In 76 PCOS women before metformin-diet, there were 36 miscarriages (47%) and 40 live births vs. 14 (18%) miscarriages and 62 live births on metformin-diet 6.8 months before conception and throughout pregnancy, p = 0.0004, OR 3.99, 95% CI 1.91-8.31. On metformin-diet, PCOS women did not differ (p > 0.08) from controls for birth <37 weeks gestation, gestational diabetes, pre-eclampsia, or fetal macrosomia. Conclusions: Metformin-diet before and during pregnancy in PCOS reduces miscarriage and adverse pregnancy outcomes. Study limitation: individual benefits of the diet alone and diet plus metformin could not be assessed separately. Randomized, controlled clinical trials now need to be done with a larger number of patients. abstract_id: PUBMED:22375613 Pregnancy outcomes and the effect of metformin treatment in women with polycystic ovary syndrome: an overview. This article is a review of the literature assessing pregnancy outcomes and the effect of metformin treatment among women with polycystic ovary syndrome (PCOS). A review of research published in English was undertaken using PubMed and MEDLINE databases. The weight of the available evidence suggests that pregnant women with PCOS are at an increased risk of developing gestational diabetes, hypertensive disorders of pregnancy, preterm birth and early pregnancy loss. Obesity is a contributory factor for the increased risk of gestational diabetes in this group of women and is estimated to affect 5-40% of pregnant women with PCOS. The prevalence of other obstetric complications is estimated at 10-30% for gestational hypertension, 8-15% for pre-eclampsia and 6-15% for preterm birth. The association between PCOS and early pregnancy loss may not be direct, wherein the presence of PCOS-associated hyperinsulinemia, leading to hyperandrogenemia, has been implicated in the pathophysiology of early pregnancy loss. Apart from the role of metformin in improving the metabolic consequences accompanying PCOS, it has been shown to improve pregnancy rates in women with PCOS who are resistant to clomiphene citrate. In conclusion, pregnancy in women with PCOS is associated with adverse obstetric outcomes (multiple adverse obstetric risk). Whether metformin should be administered throughout pregnancy still remains controversial. Further prospective studies that foster a larger number of participants and adjust for all potentially confounding factors are needed. abstract_id: PUBMED:19708174 Prevention of gestational diabetes mellitus by continuing metformin therapy throughout pregnancy in women with polycystic ovary syndrome.
Aim: The aim of the present study was to assess whether continuation of metformin therapy throughout pregnancy can reduce the development of gestational diabetes in women with polycystic ovarian syndrome (PCOS). Methods: This experimental study was conducted in a private tertiary level infertility care center between June 2002 and December 2006. Fifty-nine non-diabetic PCOS patients who conceived while taking metformin and different ovulation-inducing agents comprised the sample group. Twenty-nine of them continued metformin throughout pregnancy and 30 did not. The main outcome measure was development of gestational diabetes in women with PCOS and their fetal outcome. Results: Basic parameters such as age, body mass index, levels of follicle-stimulating hormone and luteinizing hormone, free testosterone, dehydroepiandrosterone sulfate, fasting sugar and fasting insulin did not differ between the two groups. Among 29 women who received metformin, gestational diabetes developed during one of 29 pregnancies (3.44%) versus nine of 30 pregnancies (30%) without metformin. The odds ratio for gestational diabetes in women without metformin versus with metformin was 12 (95% confidence interval: 6.20-18.08). All babies born in the metformin group had average birthweight, whereas in the control group 4 (13.33%) were large for dates. Conclusion: In PCOS, use of metformin throughout pregnancy is associated with and might be responsible for a ninefold reduction (from 30% to 3.44%) of gestational diabetes. abstract_id: PUBMED:30039871 Metformin for women who are overweight or obese during pregnancy for improving maternal and infant outcomes. Background: There has been considerable interest in providing antenatal dietary and lifestyle advice for women with obesity or who are overweight during pregnancy, as a strategy to limit gestational weight gain and improve maternal and infant health. However, such antenatal interventions appear to have a modest effect on gestational weight gain and other clinical pregnancy and birth outcomes and additional strategies are required. Metformin is an oral insulin-sensitising medication that acts to decrease blood glucose concentrations. Metformin is commonly used in the treatment of type 2 diabetes mellitus and polycystic ovarian syndrome, and is being used increasingly in the treatment of gestational diabetes, having been shown to result in decreased rates of caesarean birth and neonatal hypoglycaemia. Metformin may be an adjuvant therapy to current antenatal strategies in pregnant women with obesity or who are overweight, acting to reduce glucose production in the liver and improve glucose uptake in smooth muscle cells, and therefore improve the overall metabolic health of women in pregnancy and reduce the risk of known adverse pregnancy outcomes. Objectives: To evaluate the role of metformin in pregnant women with obesity or who are overweight, on maternal and infant outcomes, including adverse effects of treatment and costs. Search Methods: We searched Cochrane Pregnancy and Childbirth's Trials Register, ClinicalTrials.gov, the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (11 October 2017), and reference lists of retrieved studies. Selection Criteria: All published and unpublished randomised controlled trials evaluating metformin use (compared with placebo or no metformin) in women with obesity or who are overweight in pregnancy for improving outcomes, alone or in combination with other interventions were eligible for inclusion.
Data Collection And Analysis: Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We used the GRADE approach to assess the quality of the evidence. Main Results: We included three studies which randomised women (1099) with a body mass index (BMI) of 30 kg/m2 (1 study) and 35 kg/m2 (2 studies), with outcomes available for 1034 participants. None of the studies assessed women with a BMI between 25 kg/m2 and 29.9 kg/m2, therefore we could not assess the use of metformin in women considered overweight. We did not identify studies of metformin in combination with another treatment. Two other studies are ongoing. All three included studies were randomised controlled trials and compared metformin with placebo, commencing early in the second trimester. Doses ranged from 500 mg twice daily to 3.0 g per day. All three studies (two in the UK, one in Egypt) included women attending hospitals for antenatal care. Two studies were generally at a low risk of bias across the majority of domains. We assessed the third study as being at an unclear risk of selection bias, performance and detection bias due to insufficient information in the report. We assessed the trial as being at a low risk of attrition bias and other bias; we felt it was at a high risk of reporting bias. The primary outcome for this review was infant birthweight large-for-gestational-age (> 90th centile for gestational age and infant sex). Women who received metformin or placebo had a similar risk of their baby being born large for his or her gestational age (risk ratio (RR) 0.95, 95% confidence interval (CI) 0.70 to 1.30; 2 studies, 831 infants; high-quality evidence). Women who received metformin may have a slightly lower gestational weight gain (mean difference (MD) -2.60 kg, 95% CI -5.29 to 0.10; 3 studies, 899 women; low-quality evidence). Metformin may make little or no difference in the risk of women developing gestational hypertension (average RR 1.02, 95% CI 0.54 to 1.94; 3 studies, 1040 women; low-quality evidence) or pre-eclampsia (RR 0.74, 95% CI 0.09 to 6.28; 2 studies, 840 women; low-quality evidence). Metformin probably makes little or no difference in the risk of women developing gestational diabetes (RR 0.85, 95% CI 0.61 to 1.19; 3 studies, 892 women; moderate-quality evidence). One study of 400 women reported women receiving metformin were more likely to experience any adverse effect compared with women receiving placebo (RR 1.63, 95% CI 1.27 to 2.08; 1 study, 400 women). Adverse effects included abdominal pain, diarrhoea, or headache. When considering individual side effects, women receiving metformin were more likely to experience diarrhoea than women receiving placebo (RR 2.34, 95% CI 1.74 to 3.14; 2 studies, 797 women; high-quality evidence). No other important differences were identified between metformin and placebo for other maternal secondary outcomes, including: caesarean birth, birth before 37 weeks of pregnancy, shoulder dystocia, perineal tear, or postpartum haemorrhage. In terms of other infant outcomes, there was little or no difference in the infant birthweight (MD 6.39 g, 95% CI -81.15 to 93.92; 2 studies, 834 infants; high-quality evidence). There were no other important differences identified for other infant secondary outcomes in this review: hypoglycaemia (low blood sugar); hyperbilirubinaemia (jaundice); Apgar score less than 7 at five minutes; or stillbirth and neonatal death.
Only one study reported admission to the neonatal intensive care unit (NICU), indicating similar rates of admission between women receiving metformin or placebo; no other admission data were reported to assess differences in costs. Authors' Conclusions: There is insufficient evidence to support the use of metformin for women with obesity in pregnancy for improving maternal and infant outcomes. Metformin was, however, associated with increased risk of adverse effects, particularly diarrhoea. The quality of the evidence in this review varied from high to low, with downgrading decisions based on study limitations and inconsistency.There were only a small number of studies included in this review. Furthermore, none of the included studies included women categorised as 'overweight' and no trials looked at metformin in combination with another treatment.Future research is required in order to further evaluate the role of metformin therapy in pregnant women with obesity or who are overweight, as a strategy to improve maternal and infant health, alone or as an adjuvant to dietary and lifestyle advice. abstract_id: PUBMED:35149444 Does metformin improve reproduction outcomes for non-obese, infertile women with polycystic ovary syndrome? Meta-analysis and systematic review. Background: Polycystic ovary syndrome (PCOS) affects 10-12% of women of reproductive age. The prevalence of infertility in women with PCOS is high at between 70 and 80%. Treatment initially includes recommendations to follow preconception guidelines, such as lifestyle changes, folic acid therapy and halting the consumption of tobacco and alcohol. Management with pharmacological agents and surgical procedures have been incorporated into treatment regimens to improve fertility. Of these, metformin, an insulin sensitizer used as oral hypoglycemic agent, is gaining popularity. Objectives: The aim of this study was to perform a meta-analysis of randomized controlled trials (RCTs) to evaluate the role of metformin in improving the reproduction outcomes for non-obese, infertile women with polycystic ovary syndrome. Search Methods: In June 2019, we searched PubMed (from inception to present), Ovid Medline, Ovid EMBASE, Scopus, and the Cochrane library without date or language restrictions for relevant RCTs. Search was then updated in April 2020. Bibliographies of included studies were also searched for eligible studies. Selection Criteria: RCTs that compared the effectiveness of metformin with other modalities in treating infertility in non-obese women with PCOS were included. The eligible outcomes for inclusion were pregnancy rate, miscarriage rate, live birth rate, ovarian hyperstimulation (OHSS) and multiple pregnancy. Data Collection And Analysis: Data extraction and study quality assessment were performed independently by two reviewers, and any disagreements resolved by consensus or by arbitration by a third reviewer. Where two or more studies reported on the same outcome a meta-analysis was conducted using Cochrane RevMan 5. Results: We found 21 RCTs which were eligible for inclusion in our systematic review, including 2638 patients with PCOS. Our meta-analysis showed that the use of metformin in non-obese women with PCOS is associated with slight increase in clinical pregnancy rate compared to placebo (47.7% vs. 42.9%) (Pooled risk ratio = 1.08 [0.82, 1.42], 95% CI, p = 0.60). 
It also showed that metformin is comparable to clomiphene citrate (CC) when the outcome is clinical pregnancy rate and the risk of multiple pregnancies tended to be lower (Pooled risk ratio = 0.36 [0.07, 1.92], 95% CI, p = 0.23, 3 studies). However, metformin had a higher risk of miscarriage rate (Pooled risk ratio = 2.41 [0.39, 14.86], 95% CI, p = 0.72). Furthermore, this analysis suggested that adding metformin to CC treatment decreases miscarriage risk by two folds compared to metformin alone (Pooled risk ratio = 2.67 [1.32, 5.39], 95% CI, p = 0.006) and showed no difference compared to CC alone. In comparison to letrozole, combination of metformin and CC is associated with lower clinical pregnancy rate (Pooled risk ratio = 0.52 [0.14, 1.91] 95% CI, p = 0.33) and multiple pregnancies (Pooled risk ratio = 0.45 [0.06, 3.19] 95% CI, p = 0.42). Conclusion: Although this study illustrated that metformin may be better than placebo for some pregnancy outcomes, stronger, more definitive evidence from sufficiently powered trials are required before considering metformin for treating non-obese infertile women with PCOS within the current recommended guidelines. abstract_id: PUBMED:23886298 Efficacy and safety of metformin during pregnancy in women with gestational diabetes mellitus or polycystic ovary syndrome: a systematic review. Background: Metformin is an effective oral anti-hyperglycemic agent that is widely used to manage diabetes mellitus type 2 in the general population and more recently, in pregnancy. However, as metformin crosses the placenta, its use during pregnancy raises concerns regarding potential adverse effects on the mother and fetus. Objective: (i) To provide background for the use of metformin during pregnancy through a narrative review and (ii) to critically appraise the published evidence on the efficacy and safety of using metformin during pregnancy through a systematic review. Results: Metformin appears to be effective and safe for the treatment of gestational diabetes mellitus (GDM), particularly for overweight or obese women. However, patients with multiple risk factors for insulin resistance may not meet their treatment goals with metformin alone and may require supplementary insulin. Evidence suggests that there are potential advantages for the use of metformin over insulin in GDM with respect to maternal weight gain and neonatal outcomes. Furthermore, patients are more accepting of metformin than insulin. The use of metformin throughout pregnancy in women with polycystic ovary syndrome reduces the rates of early pregnancy loss and preterm labor and protects against fetal growth restriction. There have been no demonstrable teratogenic effects, intra-uterine deaths or developmental delays with the use of metformin. Conclusions: The publications reviewed in this paper support the efficacy and safety of metformin during pregnancy with respect to immediate pregnancy outcomes. Because there are no guidelines for the continuous use of metformin in pregnancy, the duration of treatment is based on clinical judgment and experience on a case-by-case basis. abstract_id: PUBMED:30585549 Metformin and Pregnancy Outcomes: Evidence Gaps and Unanswered Questions. Background: Metformin is sometimes used as an alternative to insulin in gestational diabetes mellitus (GDM). It is also used to achieve ovulation in polycystic ovary syndrome (PCOS). 
Pre-natal exposure to metformin results from its continuation after a successful ovulation in women with PCOS, its maintenance in women with pre-gestational diabetes or the initiation of metformin in GDM. Little is known about the potential consequences of metformin exposure on pregnancy outcomes and offspring development. The aim of this review is to summarize the effects of metformin on pregnancy outcomes and offspring development. Gaps in the available evidence and unanswered questions are also discussed. Methods: A comprehensive literature search was carried out to identify eligible studies from MEDLINE/PubMed, EMBASE and SCIELO databases through 1995 first semester. Results: Several factors limit the effect of metformin on embryos. In contrast, placental transport of metformin is effective, allowing for a higher fetal exposure; the impact of this finding remains unclear. It seems that the interruption of metformin after a pregnancy diagnosis in women with PCOS is not associated with a higher miscarriage risk and its continuation does not seem to impair the maternal metabolic prognosis or prevent emerging GDM. Conclusions: It seems to make no sense to prolong the use of metformin after a pregnancy diagnosis in women with PCOS. Patients with GDM may be treated with metformin on a judicious basis, and careful adherence to clinical guidelines and regulations is recommended. The long-term effects of pre-natal exposure to metformin on the offspring remain uncertain. abstract_id: PUBMED:16766228 Polycystic ovarian syndrome--prognosis and outcomes. Polycystic ovarian syndrome (PCOS) is a condition that is present in 5-6% of women of reproductive age. It has potentially profound implications for women with regard to anovulatory infertility and symptoms related to elevated androgen levels. In addition, in later life women are prone to significant health problems related to hyperinsulinaemia, with an excess risk for diabetes and cardiovascular risk factors. Evidence suggests that the adverse features of PCOS can be ameliorated with lifestyle intervention, such as diet and exercise, while further short-term benefits related to ovulation and cardiac risk factors may be derived from medication with metformin. Evidence for the long-term use of metformin to protect against adverse cardiovascular outcomes and for the use of metformin throughout pregnancy to reduce the risk of miscarriage, gestational diabetes, pre-eclampsia and fetal macrosomia is still lacking.
Expert Opinion: Metformin is a widely used medication, often prescribed to improve reproductive outcomes for women with PCOS. However, the evidence remains equivocal regarding its efficacy both in optimizing fertility and pregnancy outcomes. More research is required with special emphasis on metformin use within different populations, including ethnic groups and women with varying BMI ranges. Answer: The continuous use of metformin throughout pregnancy in women with polycystic ovarian syndrome (PCOS) has been associated with improved pregnancy outcomes. Studies have shown that metformin use during pregnancy can significantly reduce the rate of miscarriage, gestational diabetes requiring insulin treatment, and fetal growth restriction (PUBMED:18834342). Additionally, metformin use before and during pregnancy in PCOS has been reported to reduce miscarriage and adverse pregnancy outcomes (PUBMED:23205605). In women with PCOS, metformin therapy throughout pregnancy is associated with a reduction in the development of gestational diabetes, particularly for overweight or obese women (PUBMED:19708174). Furthermore, metformin use throughout pregnancy in women with PCOS has been found to reduce rates of early pregnancy loss and preterm labor, and protect against fetal growth restriction, without demonstrable teratogenic effects, intra-uterine deaths, or developmental delays (PUBMED:22375613; PUBMED:23886298). However, the evidence is not entirely conclusive, and some studies suggest that there is insufficient evidence to support the use of metformin for women with obesity in pregnancy for improving maternal and infant outcomes (PUBMED:30039871). Moreover, the long-term effects of prenatal exposure to metformin on offspring remain uncertain (PUBMED:30585549). In summary, while there is evidence to suggest that the continuous use of metformin throughout pregnancy may improve certain pregnancy outcomes in women with PCOS, further research is required to fully understand its efficacy and safety, particularly in different populations and women with varying BMI ranges (PUBMED:33634727). Additionally, the potential long-term effects on offspring need to be further investigated.
Instruction: Should a reimplantation valve sparing procedure be done systematically in type A aortic dissection? Abstracts: abstract_id: PUBMED:17049875 Should a reimplantation valve sparing procedure be done systematically in type A aortic dissection? Objective: To evaluate the risks and benefits of a systematic reimplantation valve sparing procedure in the surgical treatment of type A aortic dissection (TAAD). Patients And Methods: From February to December 2005, 15 consecutive patients (mean age 61+/-12 years) who underwent surgery for TAAD were analyzed prospectively. Eleven had a preoperative CT-scan and all had an echography. Eight patients presented with a preoperative aortic insufficiency > 2/4 and seven had an ascending aortic aneurysm over 50 mm. In 11 cases, arterial cannulation was performed directly into the ascending aorta. Surgical technique included complete resection and replacement of the ascending aorta using a reimplantation valve sparing technique (David), associated in 12 patients with an arch replacement, under mild (29.7+/-3.0 degrees C) hypothermia and cerebral selective antegrade perfusion. Results: Aortic clamping, cerebral perfusion and cardiopulmonary bypass (CPB) times were respectively 93+/-29, 18+/-9, and 131+/-38 min. Mean bleeding at 24 h was 1165+/-846 ml. Troponin I level at 24 h was 21+/-30 microg/l. One patient had a right coronary artery bypass for a chronically occluded coronary. Another had a triple arterial revascularisation for pre-existing coronary dissection. One patient presented with a postoperative regressive right hemiparesis (normal CT-scan). Two patients underwent revision for bleeding (one was undergoing treatment with clopidogrel). One patient underwent implantation of a covered stentgraft on the descending aorta at day 7 for a concomitant penetrating aortic ulcer. One patient died suddenly on POD 7 during a tracheal aspiration. Intubation and ICU times were respectively 9.5+/-16.3 and 16.2+/-20.9 days. Four patients with severe preoperative comorbidities had long intubations. Echographic and CT-scan control, performed postoperatively and after a mean follow-up of 11.0+/-4.8 months, did not show any residual aortic insufficiency (actuarial survival rate at 2 years of 93.3%). Conclusion: A reimplantation valve sparing procedure in TAAD seems to be reliable and should be proposed systematically without emphasizing perioperative morbidity. abstract_id: PUBMED:33061187 Long-term experience with valve-sparing root reimplantation surgery in tricuspid aortic valve. Objective: To analyze our long-term experience with the valve-sparing reimplantation technique in treating aortic root aneurysm, aortic regurgitation, and aortic dissection in patients with tricuspid aortic valve. Methods: Between March 1998 and October 2018, 303 consecutive patients underwent valve-sparing reimplantation in our institution. The mean age of this cohort was 52.9 ± 15 years. Time to event analysis was performed with the Kaplan-Meier method. Risk of death, reoperation, and aortic regurgitation (AR) recurrence were analyzed using the Cox regression method. Results: In-hospital mortality was 1% (n = 3), of which two were admitted for acute aortic dissection. Median follow-up was 5.81 years ([IQR]: 2.8-10 years). Thirty-nine patients (14.4%) died during follow-up. At 5 and 10 years, overall survival was 92 ± 2% and 75 ± 4.9%, respectively. Seventeen patients required late aortic valve reoperation. Freedom from valve reoperation was 95 ± 2% and 90 ± 3%.
Freedom from AR > 2+ and AR > 1+ at 10 years was 91 ± 4% and 71.5 ± 4.6%, respectively. Significant multivariate predictors of death included age, New York Heart Association dyspnea class (NYHA), type A acute dissection (TAAD), and preoperative left ventricular end-diastolic diameter (LVEDD). Significant multivariate predictors of AR recurrence included indication for surgery, previous cardiac surgery, and presence of preoperative AR. Freedom from major bleeding, thromboembolic events, and infective endocarditis at 10 years was 97%, 98%, and 96%, respectively. Conclusions: Aortic valve-sparing with the reimplantation technique has been performed for over two decades in our institution, and the results in patients with tricuspid aortic valve (TAV) are excellent in terms of survival and freedom from valve-related adverse outcomes including valve reoperation. These results continue to support the use of valve sparing root replacement using the reimplantation technique (VSRR) in patients with aortic aneurysm, irrespective of whether they have preoperative AR or not. VSRR is safe, durable, and reproducible, but further follow-up, well into the second decade, is still necessary. abstract_id: PUBMED:37304700 Reimplantation of the aortic valve in patients with tricuspid aortic valve: the Toronto General Hospital experience. Background: Aortic valve sparing operations were introduced three decades ago but controversy remains regarding their appropriateness, reproducibility and durability. This article describes the long-term outcomes of patients who had reimplantation of the aortic valve. Methods: All patients who had reimplantation of a tricuspid aortic valve at Toronto General Hospital from 1989 through 2019 were selected for this study. Patients were followed prospectively with periodical clinical assessments and imaging of the heart and aorta. Results: Four hundred and four patients were identified. The median age was 48.0 [interquartile range (IQR), 35.0-59.0] years and 310 (76.7%) were men. There were 150 patients with Marfan syndrome, 20 with Loeys-Dietz syndrome and 33 with acute or chronic aortic dissections. The median follow-up was 11.7 (IQR, 6.8-17.1) years. There were 55 patients alive and without reoperation at 20 years. The cumulative mortality at 20 years was 26.7% [95% confidence interval (CI): 20.6-34.2%], the cumulative incidence of reoperation on the aortic valve was 7.0% (95% CI: 4.0-12.2%) and the development of moderate or severe aortic insufficiency was 11.8% (95% CI: 8.5-16.5%). We could not identify variables associated with reoperation on the aortic valve or with the development of aortic insufficiency. New distal aortic dissections were common in patients with associated genetic syndromes. Conclusions: Reimplantation of the aortic valve in patients with tricuspid aortic valve provides excellent aortic valve function during the first two decades of follow-up. Distal aortic dissections are relatively common in patients with associated genetic syndromes. abstract_id: PUBMED:33778863 Aortic valve-sparing root replacement with Tirone E. David's reimplantation technique: single-centre 25-year experience. Objectives: Tirone E. David described aortic valve-sparing root reimplantation (AVSRR) almost 30 years ago. At our centre, we adopted this operation in 1993, and since then, we have performed >700 operations over a time period of >25 years. In this report, we present our single-centre experience.
Methods: Between 1993 and 2019, a total of 732 patients underwent AVSRR at our centre. The mean age was 53 ± 15 years and 522 (71%) were male. Marfan syndrome was present in 117 (16%) patients and bicuspid aortic valve in 81 (11%). The indication for surgery was aortic root aneurysm in 588 (80%) patients and acute aortic dissection in 144 (20%) patients. Results: Mini-sternotomy was performed in 74 (10%) patients. A straight tube graft (David I) was used in 677 (92%) and a Valsalva-graft in 55 (8%) patients. Cusp plasty was done in 83 (11%) patients. Concomitant cardiac procedures were performed in 438 (60%) patients. Overall in-hospital mortality was 3.8% (n = 29), and 1.9% (n = 11) in elective cases. Postoperative echocardiography was available for 671 patients and showed aortic insufficiency (AI) ≤I° in 647 (96%) patients. The mean follow-up time was 10 ± 6.7 years and comprised a total of 7307 patient-years. The 1-, 5-, 10- and 15-year survival rates were 93%, 88%, 77% and 65%, respectively. The rates for freedom from valve-related reoperation at 1, 5, 10 and 15 years after initial surgery were 97%, 93%, 88% and 85%, respectively. Cox regression analysis identified age [odds ratio (OR) = 0.975, 95% CI = 0.955-0.995, P = 0.016], hyperlipidaemia (OR = 1.980, 95% CI = 1.175-3.335, P = 0.010), preoperative level of left ventricular ejection function (OR = 1.749, 95% CI = 1.024-2.987, P = 0.041) and postoperative degree of AI (OR = 1.880, 95% CI = 1.532-2.308, P ≤ 0.001) as risk factors for future AI or reoperation. Conclusions: The David procedure can be performed extremely safely, with low risks for perioperative morbidity and mortality, both in elective situations via minimally invasive access and in emergent settings for acute aortic type A dissection. Regarding long-term outcome, David's AVSRR seems to provide excellent clinical results and sustainable function of the aortic valve in the majority of patients almost 3 decades after its introduction. abstract_id: PUBMED:28838101 Valve-sparing David I procedure in acute aortic type A dissection: a 20-year experience with more than 100 patients. Objectives: The aortic valve-sparing David procedure has been applied to the elective treatment of patients with aortic aneurysms with excellent results. The use of this technique in patients with acute aortic dissection type A (AADA) is still a matter of debate. We present our long-term experience with 109 patients with AADA who had the valve-sparing David I procedure. Methods: Between July 1993 and October 2015, 109 patients with AADA had the valve-sparing David I procedure at our centre. We conducted a retrospective review with follow-up. Results: The mean age was 54 ± 12 years; 78 (72%) patients were men. Marfan syndrome was present in 6 (5%) patients and bicuspid aortic valve in 3 (3%). Only 4 (4%) patients received the isolated David procedure; 50 (46%) underwent additional proximal, 13 (12%) subtotal and 42 (39%) total aortic arch replacement. The in-hospital mortality rate was 11% (n = 12). Intraoperative/discharge echocardiography showed aortic insufficiency ≤ I° in 93 of 97 patients (96%). Mean follow-up time was 8.3 ± 5.7 years. The survival rate after discharge at 1, 5 and 10 years was 94%, 90% and 78%, respectively. Thirteen percent (n = 13) of patients underwent valve-related reoperation. Freedom from valve-related reoperation at 1, 5 and 10 years was 96%, 88% and 85%, respectively.
Compared to patients who underwent the David I procedure for any reason other than AADA, there were no significant differences in long-term survival rates (P = 0.29) and freedom from a valve-related reoperation (P = 0.39). Conclusions: The valve-sparing David I procedure has acceptable long-term results even in emergent operations for AADA and is not inferior when performed in elective settings. abstract_id: PUBMED:35213703 Long-term durability of a reimplantation valve-sparing aortic root replacement can be expected in both Marfan syndrome and Loeys-Dietz syndrome. Objectives: The goal of this study was to evaluate the surgical outcomes of a valve-sparing root replacement using the reimplantation technique for annuloaortic ectasia in patients with Marfan syndrome (MFS) and in those with Loeys-Dietz syndrome (LDS). Methods: We reviewed 103 patients with MFS with mutations in the fibrillin-1 gene and 28 patients with LDS with mutations in the transforming growth factor-beta receptor 1 and 2, SMAD3 and transforming growth factor beta-2 genes from 1988 to 2020. Results: Forty-four (42.7%) patients with MFS [26 men, 31 (7.6) years] and 10 (35.7%) patients with Loeys-Dietz syndrome (LDS) [7 men, 22 (standard deviation: 8.6) years] who had no aortic dissection and underwent valve-sparing root replacement were included. The preoperative sinus diameter [46 (45-50.5) mm in those with MFS vs 48 (47-50) mm in those with LDS, p = 0.420] and the percentage of aortic insufficiency > grade 2+ [31.8% (10/44) in patients with MFS vs 10.0% (1/10) in those with LDS, p = 0.667] revealed no significant differences between the 2 groups. The cumulative incidences of aortic insufficiency greater than grade 1 (p = 0.588) and aortic valve reoperation (p = 0.310) were comparable between the 2 groups. Patients with LDS had a higher tendency towards aortic dissection after the initial operation (p = 0.061) and a significantly higher cumulative incidence of aortic reoperation (p = 0.003) versus those with MFS. Conclusions: Patients with MFS and those with LDS showed similar cumulative incidences of recurrent aortic valve insufficiency and aortic valve reoperation. Those with LDS revealed a higher cumulative incidence of aortic reoperation and a greater tendency towards aortic dissection after the initial operation compared with those with MFS. abstract_id: PUBMED:28918516 Valve-sparing aortic root surgery. CON: remodeling. The two major valve-sparing root replacement procedures, aortic valve reimplantation (reimplantation) and aortic root remodeling (remodeling), have advantages and disadvantages, which are reviewed herein. The main advantage of reimplantation is the resulting annular support, and the disadvantages are the unfavorable hemodynamics and relatively long procedure time. The main advantages of remodeling are the physiological hemodynamics and decreased procedure time, and the disadvantage is the lack of annular support. With technical advances and modifications, however, the differences between these two procedures have narrowed. Application of a graft with sinuses for reimplantation improves the hemodynamics, and addition of annuloplasty to remodeling provides the necessary annular support. Nevertheless, remodeling has some advantages because less root dissection is required, the procedure time is shorter, and the hemodynamics are favorable.
Thus, remodeling may be the procedure of choice for high-risk patients (such as those with acute aortic dissection, of advanced age, with reduced ventricular function, or undergoing a concomitant operation). Remodeling may also be best for young athletes because of the hemodynamic advantage. Regardless of the advantages and disadvantages, both procedures provide excellent clinical results in terms of late valve durability. Surgeons should be familiar with both techniques and properly match patients to the appropriate treatment. abstract_id: PUBMED:26795638 Early and Late Outcomes Following Valve Sparing Aortic Root Reconstruction: The ANZSCTS Database. Background: Valve sparing aortic root reconstruction (VSARR) has become an alternative to traditional aortic root replacement with a valved conduit. There have been various modifications but the two broad types are aortic root reimplantation and the aortic root remodelling procedure. We present the early and late outcomes following valve sparing aortic root reconstruction surgery in Australia. Methods: We reviewed the ANZSCTS database for patients undergoing these procedures. Preoperative, intraoperative and postoperative variables were analysed. Multivariable regression was performed to determine independent predictors of 30-day mortality. We also obtained five- and 10-year survival estimates by cross-linking the ANZSCTS database with the Australian Institute of Health and Welfare's National Death Index. Results: Between January 2001 and January 2012, 169 consecutive patients underwent VSARR procedures. The mean age of the study population was 54.4 years with 31.4% being females. Overall, nine patients (5.9%) died within 30 days post procedure and five patients (3%) had permanent strokes. However, out of 132 elective cases, only five patients died (3.8%). Independent predictors of 30-day mortality were female gender [OR 5.65 (1.24-25.80), p=0.025], preoperative atrial arrhythmia [OR 6.07 (1.14-32.35), p=0.035] and acute type A aortic dissection [OR 7.71 (1.63-36.54), p=0.01]. Long-term survival was estimated as 85.3% and 72.7% at five and 10 years, respectively. Conclusions: Along with an acceptable rate of early mortality and stroke, VSARR procedures provide good long-term survival according to the ANZSCTS database. As promising procedures for pathologies that impair aortic root integrity, they can be adopted more widely, especially in Australian and New Zealand centres with experienced aortic units. Future studies are planned to assess freedom from valve deterioration and repeat surgery. abstract_id: PUBMED:17257921 Remodeling or reimplantation for valve-sparing aortic root surgery? Background: Valve-sparing operations are gaining increasing acceptance; however, there is an ongoing discussion about the technique-specific indications. We present our experience with a follow-up of 123 months. Methods: Between July 1993 and July 2005, 164 consecutive patients were operated on using the remodeling (group A, n = 96) or reimplantation technique (group B, n = 68). Fifty-seven patients presented with acute type A dissection. Aortic regurgitation was present in 84%. Follow-up was 54.7 +/- 28 months in group A and 48.4 +/- 37.3 months in group B. Results: After urgent operations, 4 patients died in each group, but none died after elective surgery. Late mortality was 8% in group A and 4% in group B. Seven patients of group A and 1 in group B required reoperation.
Echocardiographic follow-up of reoperation-free survivors showed that 3 patients (all group A, 1.3%) had aortic regurgitation of more than grade II. Root diameter, valve pressure gradient, and valve orifice area were comparable. No gross thromboembolic or bleeding events occurred. Conclusions: Aortic valve-sparing operations can provide acceptable long-term results in both techniques. Particular care to the annulus in the remodeling technique and different prosthesis designs in the reimplantation technique may overcome the intrinsic problems of each procedure. abstract_id: PUBMED:16928499 Early results of valve-sparing reimplantation procedure using the Valsalva conduit: a multicenter study. Background: This study evaluates the midterm clinical results of valve-preserving aortic root reconstruction by means of a modified conduit incorporating sinuses of Valsalva. Methods: During a 5-year period, 151 patients with aneurysm of the aortic root underwent a reimplantation type of valve-sparing procedure using the Gelweave Valsalva prosthesis that incorporates sinuses of Valsalva. There were 121 males (80.1%), and the mean age was 56.4 +/- 14.4 years (range, 14 to 83). Fourteen percent of the patients had Marfan syndrome and 8.6% had bicuspid aortic valve. Seven patients (4.6%) suffered from acute aortic dissection. Aortic replacement was extended to the arch in 14 patients (9.3%). Sixteen patients (10.6%) had associated cusp repair. Results: In-hospital mortality was 3.3%, and it was significantly higher among patients operated on for acute dissection (p = 0.001) and in symptomatic patients (III-IV New York Heart Association class; p = 0.021). Follow-up (mean, 18 months; range, 1 to 60) was 100% complete. There were 2 late deaths. Ten patients (6.8%) had 3 to 4+ aortic regurgitation, and 8 of these required late aortic valve replacement. Cusp repair was associated with a high incidence of late aortic valve replacement (p = 0.005). At 5 years, freedom from aortic valve replacement and freedom from grade 3 to 4 aortic insufficiency was 90.8% +/- 3.3% and 88.7% +/- 3.6%, respectively. Conclusions: The reimplantation valve-sparing procedure with the Gelweave Valsalva prosthesis provides satisfactory results for patients with aortic root aneurysm. Aortic cusp repair may lead to late aortic insufficiency. Proper leaflet evaluation is of paramount importance in preventing residual valve regurgitation. Answer: The question of whether a reimplantation valve sparing procedure should be done systematically in type A aortic dissection (TAAD) is addressed in several studies, each providing insights into the risks, benefits, and long-term outcomes of the procedure. One study suggests that a reimplantation valve sparing procedure in TAAD seems reliable and should be proposed systematically without emphasizing perioperative morbidity (PUBMED:17049875). This is supported by the fact that postoperative echographic and CT-scan control did not show any residual aortic insufficiency, and the actuarial survival rate at 2 years was 93.3%. Another study reports excellent long-term results in patients with tricuspid aortic valve (TAV) undergoing valve-sparing reimplantation, with high survival rates and freedom from valve-related adverse outcomes, including valve reoperation (PUBMED:33061187). This study also emphasizes the need for further follow-up into the second decade to confirm durability. 
The Toronto General Hospital experience with reimplantation of the aortic valve in patients with TAV indicates excellent aortic valve function during the first two decades of follow-up, although new distal aortic dissections were common in patients with associated genetic syndromes (PUBMED:37304700). A single-centre 25-year experience with Tirone E. David's reimplantation technique shows that the procedure can be performed safely with low risks for perioperative morbidity and mortality, and provides excellent clinical results and sustainable function of the aortic valve in the majority of patients (PUBMED:33778863). The David I procedure in acute aortic type A dissection has been found to have acceptable long-term results even in emergent operations, and is not inferior when performed in elective settings (PUBMED:28838101). In patients with Marfan syndrome (MFS) and Loeys-Dietz syndrome (LDS), valve-sparing root replacement using the reimplantation technique showed similar cumulative incidences of recurrent aortic valve insufficiency and aortic valve reoperation, although patients with LDS had a higher cumulative incidence of aortic reoperation and a greater tendency towards aortic dissection after the initial operation compared with those with MFS (PUBMED:35213703). While the evidence generally supports the use of reimplantation valve sparing procedures in TAAD, it is important to note that individual patient factors and surgical considerations may influence the decision to use this technique systematically. For instance, the study discussing the remodeling versus reimplantation for valve-sparing aortic root surgery highlights that while both procedures provide excellent clinical results, surgeons should be familiar with both techniques and match patients to the appropriate treatment based on individual circumstances (PUBMED:17257921). Additionally, the ANZSCTS database review of valve sparing aortic root reconstruction (VSARR) procedures, including both reimplantation and remodeling, indicates that VSARR offers good long-term survival with an acceptable rate of early mortality and stroke. This suggests that VSARR, which includes reimplantation, can be adopted more widely, especially in centers with experienced aortic units (PUBMED:26795638). Finally, a multicenter study evaluating the midterm clinical results of valve-preserving aortic root reconstruction using a modified conduit incorporating sinuses of Valsalva (reimplantation type) found satisfactory results for patients with aortic root aneurysm. However, it also noted that aortic cusp repair may lead to late aortic insufficiency, emphasizing the importance of proper leaflet evaluation to prevent residual valve regurgitation (PUBMED:16928499). In conclusion, while there is strong support for the systematic use of reimplantation valve sparing procedures in TAAD, the decision should be individualized based on patient-specific factors, surgical expertise, and the presence of any concomitant conditions that may affect the outcome.
Instruction: Is grip strength a useful single marker of frailty? Abstracts: abstract_id: PUBMED:14600007 Is grip strength a useful single marker of frailty? Background: chronological age is widely used as a marker of frailty in clinical practice. However there can be wide variation in frailty between individuals of a similar age. Grip strength is a powerful predictor of disability, morbidity and mortality which has been used in a number of frailty scores but not as a single marker of frailty. Objective: to investigate the potential of grip strength as a single marker of frailty in older people of similar chronological age. Design: cross-sectional study with prospective collection of mortality data. Setting: North Hertfordshire, UK. Subjects: 717 men and women, aged 64-74, born and still living in North Hertfordshire, who took part in a previous study to investigate the relationship between size at birth and ageing processes in later life. Methods: the number of significant associations between grip strength and the ageing markers was compared with numbers between chronological age and the ageing markers. Results: in men, lower grip strength correlated significantly with ten ageing markers, compared to chronological age, which was significantly associated with seven. In women, there were six significant relationships for grip compared to three for age. The greater number of relationships between grip strength and ageing markers was not explained by the association between grip strength and age, and remained after adjustment for adult size. Conclusions: grip strength was associated with more markers of frailty than chronological age within the narrow age range studied. Grip strength may prove a more useful single marker of frailty for older people of similar age than chronological age alone. Its validity in a clinical setting needs to be tested. abstract_id: PUBMED:36211090 The Utility of Grip Strength as a Simplified Measure of Frailty in the Older Adult in the Preoperative Clinic. Objective The aim of this study was to compare the measure of grip strength against other validated methods of measuring frailty. Materials and methods This was a single-center, cross-sectional study that took place at the Westchester Medical Center Pre-Procedural Testing Clinic. The patient population included n = 73 patients ≥65 years of age evaluated for elective surgery. During the study, patients' grip strength, CFS-I (Clinical Frailty Score of Investigator), CFS-P (Clinical Frailty Score of Participant), and FRAIL (Fatigue, Resistance, Aerobic capacity, Illnesses, and Loss of weight) scores were measured. Results Grip strength correlated negatively with the CFS-I, CFS-P, and FRAIL scores for females. Reduced grip strength in females correlated with higher frailty scores and vice versa. Male grip strength showed no significant relationship with the frailty scales. In addition, multivariate linear regression analysis revealed that the independent measure that demonstrated a significant inverse association with grip strength was age (β = -0.43, p < 0.001). Conclusions There exists a difference in the utility of grip strength as a measure of frailty between males and females. abstract_id: PUBMED:37493378 Relationships among Grip Strength Measurement, Response Time, and Frailty Criteria. Background: Muscle response in older adults is believed to decrease with maximal muscle strength, although it has not been adequately assessed; further, the relationship between frailty and muscle response remains unexamined.
Objectives: This study aimed to develop a practical method for measuring muscle response using grip strength in older adults and to clarify the relationship between frailty and grip strength response. Design, Setting, And Participants: We performed a cross-sectional, clinical, observational study. A total of 248 patients (94 men and 154 women, mean age: 78.2 years) who visited the outpatient unit in the Integrated Healthy Aging Clinic of our Hospital for the first time were enrolled. Measurements: Using a grip strength measuring device originally developed by us, we measured grip strength response indices, such as reaction time, time constant, rate of force development (response speed), and maximum grip strength. Grip strength response indices were compared among three groups (robust, pre-frail, and frail) according to the Fried and Kihon checklist assessments for frailty. Results: Based on Fried's assessment, marked differences were found between groups not only in maximal grip strength but also in response time and response speed. Based on the Kihon checklist assessment, there was no significant difference in response time; however, a considerable difference in response speed for the left hand was observed. Moreover, according to the Kihon checklist assessment, some cases showed differences in muscle response although not in maximal muscle strength. Conclusions: The response speed of grip strength was suggested to decrease with frailty. The results suggest that measurement of grip strength response in both hands is useful to examine the relationship between frailty and grip strength response. abstract_id: PUBMED:31912668 Comparison of multidimensional frailty score, grip strength, and gait speed in older surgical patients. Background: Frail older adults are at increased risk of post-operative morbidity compared with robust counterparts. Simple methods testing frailty such as grip strength or gait speed have shown promising results for predicting post-operative outcome, but there is a debate regarding the most appropriate and precise frailty assessment method. We compared the predictive value of the multidimensional frailty score (MFS) with grip strength, gait speed, or a conventional risk stratification tool for predicting post-operative complications in older surgical patients. Methods: From January 2016 to June 2017, 648 older surgical patients (age ≥ 65 years) were included for analysis. MFS was calculated based on the preoperative comprehensive geriatric assessment. Grip strength and gait speed were measured before surgery. The primary outcome was a composite of post-operative complications (e.g. pneumonia, urinary tract infection, delirium, acute pulmonary thromboembolism, and unplanned intensive care unit admission). The secondary outcome was the 6 month all-cause mortality. Results: Among 648 patients (mean age 76.6 ± 5.4 years, 52.8% female), 66 (10.2%) patients experienced post-operative complications, and the 6 month mortality was 3.9% (n = 25). Grip strength, gait speed, MFS, and American Society of Anesthesiologists (ASA) classification could predict post-operative complications, but only MFS (hazard ratio = 1.581, 95% confidence interval 1.276-1.959, P < 0.001) could predict 6 month mortality after adjustment. MFS (C-index = 0.750) had a superior prognostic utility compared with age (0.638, P = 0.008), grip strength (0.566, P < 0.001), and ASA classification (0.649, P = 0.004). MFS improved the predictive value of age [C-index of 0.638 (age) vs.
0.758 (age + MFS), P < 0.001] and ASA classification [C-index of 0.649 (ASA) vs. 0.765 (ASA + MFS), P < 0.001] for post-operative complications; however, gait speed or grip strength did not provide additional prognostic value when added to either age or ASA classification. Conclusions: The multidimensional frailty score based on preoperative comprehensive geriatric assessment showed better utility than age, grip strength, gait speed, or ASA classification for predicting post-operative complications and 6 month mortality. MFS also showed incremental predictive ability for post-operative complications with the addition of age and ASA classification. Accordingly, MFS is superior to grip strength or gait speed for predicting complications among older surgical patients. abstract_id: PUBMED:37847794 The phenotypic and genotypic association of grip strength with frailty, physical performance and functional limitations over time in older adults. Objectives: To replicate the phenotypic associations of grip strength with frailty, physical performance and functional limitations in older adults for longer follow-up periods and to examine whether these associations are due to shared genetic factors. Methods: In total, 2,262 participants aged 55 years and older with follow-up data up to 23 years (N observations = 8,262) from the Longitudinal Aging Study Amsterdam were included. Weighted polygenic risk scores for grip strength (PRS-GS) were built using the genome-wide meta-analysis results from UK Biobank as reference. Grip strength was measured two times on each hand using a dynamometer. Frailty index (FI) and frailty phenotype were operationalised following standard procedures. Performance tests included a timed walk test, a repeated chair stands test and a put on-take off cardigan test. Functional limitations were assessed using a questionnaire with six items. Results: Higher grip strength was phenotypically associated with lower FI (b = -0.013, 95% CI (-0.016, -0.009)), better physical performance (b = 0.040, 95% CI (0.026, 0.054)) and fewer functional limitations (OR = 0.965, 95% CI (0.954, 0.977)) over time for follow-up periods up to 23 years. However, PRS-GS was not associated with any of the traits. Conclusion: The phenotypic associations between grip strength, frailty, physical performance and functional limitations were replicated for follow-up periods up to 23 years. However, the associations between the traits could not be explained by shared genetics, potentially indicating a more relevant involvement of non-genetic factors. abstract_id: PUBMED:31876400 Grip strength mediates the relationship between muscle mass and frailty. Background: Although sarcopenia and frailty are important diseases in geriatrics, few studies have investigated the association between the two diseases. Thus, this study aimed to examine the relationship between two components of sarcopenia (muscle mass and muscle function) and frailty. Methods: In total, 997 Korean older adults (456 men and 541 women) were included in this cross-sectional observational study. We used a polynomial linear regression analysis to obtain standardized sex, age, and height-adjusted appendicular skeletal muscle mass (zASM), as well as standardized sex, age, and height-adjusted grip strength (zGS). We then performed a causal mediation analysis to confirm the relationship between zASM and frailty.
Results: In both men and women, zGS mediated the relationship between zASM and frailty (average causal mediation effect in men: -0.096 {-0.159 to -0.050}; in women: -0.053 {-0.098 to -0.010}). For every one-point increase in zGS score, the relative risk of a one-point increase in frailty was reduced by 21% in men (e^-0.238 = 0.788) and by 11% in women (e^-0.113 = 0.893). Conclusions: In this study on Korean older adults, muscle mass did not have a direct effect on frailty but had an indirect effect through altered muscle function. abstract_id: PUBMED:38247984 The Contributions of Knee Extension Strength and Hand Grip Strength to Factors Relevant to Physical Frailty: The Tanno-Sobetsu Study. Sarcopenia is the core factor of frailty. This study specifically focused on lower limb muscle strength and examined muscle indices that indicate the risk of frailty or pre-frailty in older adults. The study included 327 community-dwelling individuals aged ≥65 years (43.7% male) who participated in the cohort. Frailty was defined based on five symptoms: weight loss, low activity level, exhaustion, weakness and slowness. Participants were classified into frail (three or more applicable), pre-frail (one to two applicable) and non-frail groups. Muscle strength (knee extension strength, toe grip strength and hand grip strength) was assessed, and appendicular muscle mass was assessed via a bioelectrical impedance analysis. The adjusted odds ratios (ORs) of the muscle indices for frailty (frail group vs. pre-frail group) or pre-frailty (pre-frail group vs. non-frail group) were calculated. The prevalence of frailty and pre-frailty was 7% and 40%, respectively. Adjusted for age, sex, albumin and medical history, knee extension strength was significantly associated with frailty (odds ratio 0.95, 95% CI 0.92-0.98), while hand grip strength was associated with pre-frailty (odds ratio 0.92, 95% CI 0.88-0.97) but not with other muscle indices. This study is significant for identifying knee extension strength as a factor relevant to frailty in older adults considered pre-frail, emphasizing the importance of this specific muscle measure in predicting and managing frailty. abstract_id: PUBMED:27118256 Hand grip strength as a screening tool for frailty in older patients with haematological malignancies. Objectives: Frailty is a geriatric syndrome characterized by decreased physiological reserves and an age-related vulnerability to stressors, with a higher risk of adverse health outcomes. Comprehensive geriatric assessment (CGA) might detect frailty but is time-consuming, implying the need for initial frailty screening. Most frailty screening tools do not include functional measures. Hand grip strength (HGS) is a reliable surrogate for overall muscle strength and predicts functional decline, morbidity and mortality. No studies are available in cancer patients on HGS as a screening tool for frailty. We aimed to assess whether HGS can be used as a screening tool to predict an abnormal CGA and therefore frailty. Methods: Single centre cohort study in 59 patients aged 70 years or more with a haematological malignancy. HGS was measured using a vigorimeter. A patient was considered frail if any of the CGA elements were impaired. Results: Mean HGS before start of therapy in women was 37.0 ± 14.3 kPa and in men 66.1 ± 13.1 kPa. An abnormal CGA was present in 52 subjects (88%). HGS was associated with concurrent abnormal CGA (p = 0.058 in women, p = 0.009 in men). AUC was 0.800 (SE = 0.130) in women and 0.847 (SE = 0.118) in men.
Optimal HGS cut-off points for likelihood of abnormal CGA were ≤52 kPa in women and ≤80 kPa in men. Discussion: In older patients with haematological malignancies, impairment in muscle function is present at diagnosis. HGS seems to be a promising screening tool to identify patients with abnormal CGA. abstract_id: PUBMED:37834800 Back Extensor Strength as a Potential Marker of Frailty Using Propensity Score Matching and Machine Learning. This study assessed the potential of back extensor strength as an alternative marker of frailty. A total of 560 farmers were included. Computed tomography scans measured fat and muscle mass volumes at the mid-L4 vertebral level. Back extensor strength was measured in a seated posture. Multivariate linear regression was used to analyze the associations between back extensor strength and trunk muscle/fat compositions. The participants were divided into two groups based on back extensor strength. Propensity score matching, multivariate logistic regression, and Extreme Gradient Boosting (XGBoost) were employed to evaluate the relationship between Fried's frailty criteria and back extensor strength. Back extensor strength exhibited positive associations with abdominal muscle volume (r = 1.12) as well as back muscle volume (r = 0.89) (p < 0.05). Back extensor strength was linked to a more frail status, such as reduced grip strength, reduced walking speed, and frequent self-reported exhaustion. Multivariate logistic regression indicated that back extensor strength was associated with higher frail status (OR = 0.990), and XGBoost analysis identified back extensor strength as the most important predictor (gain = 0.502) for frailty. The prediction models using grip strength produced similar results (OR = 0.869, gain = 0.482). These findings suggested the potential of back extensor strength as an alternative frailty marker. abstract_id: PUBMED:36166355 Analysis of individual components of frailty: Pre-transplant grip strength is the strongest predictor of post kidney transplant outcomes. Frailty is commonly assessed during kidney transplant recipient (KTR) evaluation. However, individual frailty components may have varying impact on post-transplant outcomes. In this single-center study of 825 KTRs, we determined the association between the individual components of a modified Fried frailty score and delayed graft function (DGF), early hospital readmission (EHR), cardiovascular (CV) events, acute rejection (AR), death censored graft failure (DCGF), and death. Sum frailty ≥3 was significantly associated with EHR (aOR = 3.62; 95% CI: 1.21-10.80). Among individual components, only grip strength was significantly associated with EHR (aOR = 1.54; 95% CI: 1.03-2.31). The addition of grip strength to a model with the other four components resulted in Net Reclassification Improvement (NRI) of 20.51% (p = .01). Similarly, only grip strength was significantly associated with CV events (aOR = 1.79; 95% CI: 1.12-2.86). The addition of grip strength to a model with the other four components resulted in NRI of 27.37% (p = .006). No other frailty components were associated with the outcomes of interest. Based on our findings, handgrip strength may be an important tool when assessing frailty, mainly predicting early readmission and cardiovascular events post-transplant. Answer: Yes, grip strength is considered a useful single marker of frailty.
Several studies have demonstrated its utility and association with various aspects of frailty and related outcomes: - Grip strength has been shown to be associated with more markers of frailty than chronological age within a narrow age range, suggesting it may be a more useful single marker of frailty for older people of similar age than chronological age alone (PUBMED:14600007). - Grip strength correlated negatively with other validated measures of frailty, such as the Clinical Frailty Score and FRAIL scores, particularly in females, indicating its potential as a measure of frailty (PUBMED:36211090). - The response speed of grip strength was found to decrease with frailty, suggesting that measurement of grip strength response is useful to examine the relationship between frailty and grip strength response (PUBMED:37493378). - Grip strength, along with gait speed, could predict post-operative complications, but a multidimensional frailty score (MFS) based on preoperative comprehensive geriatric assessment showed better utility for predicting post-operative complication and 6-month mortality (PUBMED:31912668). - Higher grip strength was phenotypically associated with lower frailty index, better physical performance, and fewer functional limitations over time, although these associations could not be explained by shared genetics (PUBMED:37847794). - Grip strength mediated the relationship between muscle mass and frailty, indicating that muscle function as measured by grip strength has an indirect effect on frailty (PUBMED:31876400). - Hand grip strength was associated with pre-frailty, and knee extension strength was associated with frailty, emphasizing the importance of muscle strength measures in predicting and managing frailty (PUBMED:38247984). - Hand grip strength was associated with an abnormal comprehensive geriatric assessment (CGA) in older patients with haematological malignancies, suggesting its potential as a screening tool for frailty (PUBMED:27118256). - Back extensor strength was also considered as an alternative marker of frailty, but grip strength was identified as a strong predictor for frailty in the same study (PUBMED:37834800). - Pre-transplant grip strength was the strongest predictor of post kidney transplant outcomes such as early hospital readmission and cardiovascular events, indicating its importance in assessing frailty in this context (PUBMED:36166355). In summary, grip strength is a useful and practical single marker of frailty that correlates with various health outcomes and can be used in different clinical settings to assess frailty in older adults.
Instruction: Cognitive behavioural therapy combined with the relapse-prevention medication acamprosate: are short-term treatment outcomes for alcohol dependence improved? Abstracts: abstract_id: PUBMED:12225445 Cognitive behavioural therapy combined with the relapse-prevention medication acamprosate: are short-term treatment outcomes for alcohol dependence improved? Objective: The relapse prevention medication acamprosate has been recently introduced to the Australian Pharmaceutical Benefits Scheme (PBS) for the treatment of alcohol dependence. Overseas clinical trials have demonstrated the efficacy of using acamprosate as an adjunct to existing psychotherapeutic approaches. Research has not examined treatment outcomes using a standardized clinical approach. The objective of this study is to investigate the impact of adding acamprosate to an established abstinence-based outpatient alcohol rehabilitation programme in an Australian population. Methods: Fifty patients participated in an established 12-week, outpatient, "contract"-based Cognitive Behavioural Therapy (CBT) alcohol abstinence programme and received acamprosate (CBT + acamprosate). Patients weighing ≥ 60 kg were prescribed acamprosate calcium 333 mg tablets, two tablets three times daily (1998 mg/day) and those weighing < 60 kg received four tablets (1332 mg/day) daily. Outcomes were compared with 50 historical, matched controls, all of whom participated in the same program without a relapse prevention medication (CBT alone). All patients met DSM-IV criteria for alcohol dependence and the majority were socially disadvantaged. Results: Programme attendance across the eight treatment sessions was similar in both the CBT + acamprosate and the CBT alone conditions (P = 0.268). Relapse to alcohol use occurred sooner and more frequently in the CBT alone group (P < 0.0005). Rehabilitation programme completion at 12 weeks was 42% (CBT + acamprosate) compared with 32% (CBT alone) (P = 0.204). Alcohol abstinence at 12 weeks was 38% (CBT + acamprosate) compared with 14% (CBT alone) (P = 0.006). Conclusion: Even within an alcohol dependent population characterized by poor prognostic indices, the addition of acamprosate to an established CBT outpatient programme significantly improved abstinence rates over a 12-week period. The use of acamprosate as an adjunctive treatment for alcohol dependence should be encouraged in Australia. abstract_id: PUBMED:29016275 Effectiveness of Psychoanalytic-Interactional Group Therapy vs. Behavioral Group Therapy in Routine Outpatient Treatment of Alcohol-Dependent Patients. Background: Cognitive behavioral therapy has been extensively investigated to assess relapse prevention rates in patients with alcohol dependence. In contrast, little is known regarding the effectiveness of psychoanalytical psychotherapy in relapse prevention, although this treatment is widely used, especially in Germany. The aim of this quasi-randomized study was to compare the effectiveness of these two group treatment approaches under conditions of routine outpatient treatment in a non-university hospital. Methods: After inpatient detoxification, patients with alcohol dependence were allocated either to combined behavioral intervention (CBI) or to psychoanalytic-interactional therapy (PIT). The group treatment was carried out weekly over a period of six months. Also, the clinical care package included both individual treatment sessions (e.g.
every 4-6 weeks) and abstinence supporting medication. The main outcome criteria included retention rates and frequency of alcohol relapse. Results: Some 215 patients (mean age 49.6 years [standard deviation, 10], 56.7% males, with a mean duration of alcohol dependence of 16.5 years [range: 1-50 years]) were included in the study. Overall, CBI clients showed a retention rate of 66.7%, compared to 81.8% for PIT clients (p =.008). An intention-to-treat analysis of alcohol relapses showed a significant difference between PIT and CBI groups (PIT: 33.6%; CBI: 49.5%; p =.018). There were no statistically significant differences between the 2 groups in terms of prescription rates of disulfiram, naltrexone or acamprosate. Conclusions: Notwithstanding the study limitations, PIT seemed here to be at least as effective as CBI in terms of retention and relapse prevention rates' levels. abstract_id: PUBMED:17088225 Is acamprosate use in alcohol dependence treatment reflected in improved subjective health status outcomes beyond cognitive behavioural therapy alone?. Objective: To examine whether the addition of acamprosate to Cognitive Behavioural Therapy (CBT) outpatient alcohol dependence treatment impacted on subjective health status. Method: Among 268 patients consecutively treated for alcohol dependence, 149 chose CBT alone. A matched design was used. From a possible pool of 119 Acamprosate + CBT and 149 CBT-only patients, 86 Acamprosate + CBT subjects were individually matched with 86 CBT-only patients on parameters of gender, age, prior detoxification and alcohol dependence severity. Health Status (SF-36) and Psychological Well-Being (GHQ-28) was assessed pre- and post-treatment. Results: Pre-treatment, both self-reported health status and psychological well-being was markedly below normative (community) ranges. Program completers significantly improved across both measures over 12 weeks of treatment and some health domains approximated community levels. No treatment group differences were observed. Conclusions: Participants who completed the CBT-based treatment showed significant improvement in self-reported health status. The use of acamprosate did not register additional improvement on either SF-36 or GHQ-28, beyond CBT alone. abstract_id: PUBMED:34622734 Relapse prevention for alcohol use disorders: combined acamprosate and cue exposure therapy as aftercare. Aims: Many patients with alcohol use disorders are challenged by cravings leading to repeated relapses. Both cue exposure therapy (CET) and acamprosate target alcohol cravings and are often combined (CET + acamprosate). The main aim of this study was to investigate whether aftercare treatment consisting of CET combined with acamprosate is equivalent to (A) CET as monotherapy, (B) aftercare as usual (AAU) as monotherapy or (C) AAU combined with acamprosate. Methods: Patients were randomized to receive either CET with urge-specific coping skills (USCS) as aftercare or AAU. Acamprosate prescription data were extracted from patient case records. Alcohol consumption, cravings, and USCS were assessed at pre-aftercare, post-aftercare, and 6-month follow-up. Results: Overall, patients increased their alcohol consumption during and following aftercare treatment, thereby relapsing despite any treatment. However, CET + acamprosate achieved greater abstinence compared to AAU + acamprosate at follow-up (p=.047). 
CET + acamprosate also reduced number of drinking days (p=.020) and number of days with excessive drinking (p=.020) at post-aftercare, when compared to AAU monotherapy. CET monotherapy increased sensible drinking at post-aftercare compared to AAU monotherapy (p=.045) and AAU + acamprosate (p=.047). Only CET monotherapy showed improvement in cravings, when compared to AAU at follow-up (mean urge level: p=.032; peak urge level: p=.014). Conclusion: The study showed that CET both as monotherapy and combined with acamprosate was superior to AAU monotherapy and AAU + acamprosate in reducing alcohol consumption. Only CET + acamprosate was capable of reducing alcohol consumption in the longer term, indicating that anti-craving medication may not impede CET from exerting an effect on alcohol consumption. Trial registration: ClinicalTrials.gov ID: NCT02298751 (24/11-2014). abstract_id: PUBMED:16467406 Combined acamprosate and naltrexone, with cognitive behavioural therapy is superior to either medication alone for alcohol abstinence: a single centres' experience with pharmacotherapy. Aims: To compare treatment outcomes amongst patients offered pharmacotherapy with either naltrexone or acamprosate used singly or in combination, in a 12-week outpatient cognitive behavioural therapy (CBT) programme for alcohol dependence. Methods: We matched 236 patients across gender, age group, prior alcohol detoxification, and dependence severity and conducted a cohort comparison study of three medication groups (CBT + acamprosate, CBT + naltrexone, CBT + combined medication) which included 59 patients per group. Outcome measures included programme attendance, programme abstinence and for those who relapsed, cumulative abstinence duration (CAD) and days to first breach (DFB). Secondary analyses compared the remaining matched 59 subjects who declined medication with the pharmacotherapy groups. Results: Across medication groups, CBT + combined medication produced the greatest improvement across all outcome measures. Although a trend favoured the CBT + combined group, differences did not reach statistical significance. Programme attendance: CBT + Acamprosate group (66.1%), CBT + Naltrexone group (79.7%), and in the CBT + Combined group (83.1%). Abstinence rates were 50.8, 66.1, and 67.8%, respectively. For those that did not complete the programme abstinent, the average number of days abstinent (CAD) were 45.07, 49.95, and 53.58 days, respectively. The average numbers of days to first breach (DFB) was 26.79, 26.7, and 37.32 days. When the focal group (CBT + combined) was compared with patients who declined medication (CBT-alone), significant differences were observed across all outcome indices. Withdrawal due to adverse medication effects was minimal. Conclusions: The addition of both medications (naltrexone and acamprosate) resulted in measurable benefit and was well tolerated. In this patient population naltrexone with CBT is as effective as combined medication with CBT, but the trend favours combination medication. abstract_id: PUBMED:22892944 Relapse prevention in alcohol dependence: acamprosate and naltrexone as a combined pharmacological strategy Acamprosate and naltrexone are established strategies for pharmacologic relapse prevention in patients with alcohol dependence. Regarding pharmacodynamic and pharmacokinetic considerations the combination of both agents for this indication is a reasonable treatment option that has been described to be safe and effective in clinical studies. 
However, this combination is uncommon in clinical practice. We report the case of a patient with severe alcohol and benzodiazepine dependence who, under combined treatment with acamprosate and naltrexone, achieved the longest interval of abstinence since the development of addiction. In addition, the currently available evidence regarding efficacy, safety and tolerability of both agents is discussed. In summary, combined treatment with both agents should be considered in patients who did not achieve abstinence under monotherapy, unless contraindications are present. abstract_id: PUBMED:9411660 Alcoholism: relapse prevention. Prevention of alcoholism relapse implies an early detection of the problem (abuse or dependence). This can be achieved through clinical interview, biological evaluation, psychological assessment (i.e. "The Drinking Habits"--I. Pelc) or according to the "Alcoholism Decision Tree" (The Plinius Maior Society). In order to prevent alcoholism relapse, strategies to control the clinical symptoms related to the "Post-Detoxification Syndrome" have recently been highlighted. The benefits of various psychotherapeutic modalities as well as psychopharmacological interventions are further discussed. With better knowledge of the neurochemical and neurobiological basis of alcoholism and craving, the contributions of specific psychopharmacological agents active on the dopaminergic, serotoninergic and glutamatergic systems, as well as blockade of opiate receptors, are discussed. Acamprosate, as a specific modulator of the glutamatergic system, recently marketed in Belgium, appears to be one of the most promising new pharmacological agents for relapse prevention when the patient benefits from psychosocial support. abstract_id: PUBMED:18852481 Effectiveness of sequential combined treatment in comparison with treatment as usual in preventing relapse in alcohol dependence. Aim: The aim of this study was to compare the effectiveness of the sequential combined treatment (SCT) and treatment as usual (TU) in relapse prevention in a sample of alcohol-dependent patients, during 180 days of outpatient treatment. Method: 209 alcohol-dependent patients who could attend with an informant adult were randomized to either TU or SCT. The primary outcome measure was time to first relapse, defined as the consumption of any amount of alcohol during the 180 days of follow-up. Secondary outcome measures included maximum duration of continuous abstinence (MDCA), cumulative abstinence duration (CAD), quality of life (ARPQ) and blood test markers of alcohol consumption. Results: The SCT approach was more effective than TU. The Kaplan-Meier abstinent proportion at the end of the 180 days was 78% for the SCT group and 59% for the TU group (P < 0.01). The mean time to first relapse was 150 days for SCT and 123 days for TU (P < 0.01). The relative risk reduction of relapse was 62% for SCT after adjustment in multiple Cox regression (P < 0.01). SCT had more MDCA (P < 0.05) and more CAD (P < 0.05). Therapy sessions lasted slightly longer for SCT than TU (mean 13 min versus 10 min). Conclusions: SCT can result in better outcomes than TU in the outpatient treatment of alcohol dependence.
Aims: To understand service users' views and experiences of alcohol relapse prevention medication, views of a telephone behavioural modification intervention delivered by pharmacists and the use of Contingency Management (CM) to support acamprosate adherence following assisted alcohol withdrawal. Methods: Four focus groups were conducted within four alcohol treatment and recovery groups across England (UK), with service users with lived experience of alcohol dependence (26 participants). A semi-structured topic guide was used to explore participants' views and experiences of alcohol relapse prevention medication, a telephone behavioural modification medication intervention delivered by pharmacists, and the use of CM to support acamprosate adherence. These were audio-recorded, transcribed verbatim and thematically analysed inductively and deductively. Results: Four themes were identified: concerns about support and availability of alcohol relapse prevention medication; lack of knowledge and understanding about acamprosate treatment; positive perceptions of acamprosate adherence telephone support from pharmacists; and negative perceptions of CM to support acamprosate adherence. There were misunderstandings about acamprosate's mode of action and strong negative beliefs about CM. However, most were positive about pharmacists' new role to support acamprosate adherence. Conclusion: This study highlighted challenges service users face in commencing alcohol relapse prevention medication. It appears service users could benefit from a pharmacist-led telephone intervention to improve understanding about acamprosate medication, particularly if delivered in an engaging and motivating way.

abstract_id: PUBMED:32558914 Combined Pharmacotherapy and Cognitive Behavioral Therapy for Adults With Alcohol or Substance Use Disorders: A Systematic Review and Meta-analysis. Importance: Substance use disorders (SUDs) represent a pressing public health concern. Combined behavioral and pharmacological interventions are considered best practices for addiction. Cognitive behavioral therapy (CBT) is a first-line intervention, yet the superiority of CBT compared with other behavioral treatments when combined with pharmacotherapy remains unclear. An understanding of the effects of combined CBT and pharmacotherapy will inform best-practice guidelines for treatment of SUD. Objective: To conduct a meta-analysis of the published literature on combined CBT and pharmacotherapy for adult alcohol use disorder (AUD) or other SUDs. Data Sources: PubMed, Cochrane Register, MEDLINE, PsychINFO, and Embase databases from January 1, 1990, through July 31, 2019, were searched. Keywords were specified in 3 categories: treatment type, outcome type, and study design. Collected data were analyzed through September 30, 2019. Study Selection: Two independent raters reviewed abstracts and full-text articles. English language articles describing randomized clinical trials examining CBT in combination with pharmacotherapy for AUD and SUD were included. Data Extraction And Synthesis: Inverse-variance weighted, random-effects estimates of effect size were pooled into 3 clinically informative subgroups: (1) CBT plus pharmacotherapy compared with usual care plus pharmacotherapy, (2) CBT plus pharmacotherapy compared with another specific therapy plus pharmacotherapy, and (3) CBT added to usual care and pharmacotherapy compared with usual care and pharmacotherapy alone.
Sensitivity analyses included assessment of study quality, pooled effect size heterogeneity, publication bias, and primary substance moderator effects. Main Outcomes And Measures: Substance use frequency and quantity outcomes after treatment and during follow-up were examined. Results: The sample included 62 effect sizes from 30 unique randomized clinical trials that examined CBT in combination with some form of pharmacotherapy for AUD and SUD. The primary substances targeted in the clinical trial sample were alcohol (15 [50%]), followed by cocaine (7 [23%]) and opioids (6 [20%]). The mean (SD) age of the patient sample was 39 (6) years, with a mean (SD) of 28% (12%) female participants per study. The following pharmacotherapies were used: naltrexone hydrochloride and/or acamprosate calcium (26 of 62 effect sizes [42%]), methadone hydrochloride or combined buprenorphine hydrochloride and naltrexone (11 of 62 [18%]), disulfiram (5 of 62 [8%]), and another pharmacotherapy or mixture of pharmacotherapies (20 of 62 [32%]). Random-effects pooled estimates showed a benefit associated with combined CBT and pharmacotherapy over usual care (g range, 0.18-0.28; k = 9). However, CBT did not perform better than another specific therapy, and evidence for the addition of CBT as an add-on to combined usual care and pharmacotherapy was mixed. Moderator analysis showed variability in effect direction and magnitude by primary drug target. Conclusions And Relevance: The present study supports the efficacy of combined CBT and pharmacotherapy compared with usual care and pharmacotherapy. Cognitive behavioral therapy did not perform better than another evidence-based modality (eg, motivational enhancement therapy, contingency management) in this context or as an add-on to combined usual care and pharmacotherapy. These findings suggest that best practices in addiction treatment should include pharmacotherapy plus CBT or another evidence-based therapy, rather than usual clinical management or nonspecific counseling services. Answer: Yes, short-term treatment outcomes for alcohol dependence are improved when cognitive behavioural therapy (CBT) is combined with the relapse prevention medication acamprosate. A study conducted in Australia found that the addition of acamprosate to an established CBT outpatient program significantly improved abstinence rates over a 12-week period compared to CBT alone. Specifically, the study reported that relapse to alcohol use occurred sooner and more frequently in the group that received only CBT, while the group that received both CBT and acamprosate had higher program completion rates (42% vs. 32%) and higher alcohol abstinence rates at 12 weeks (38% vs. 14%) (PUBMED:12225445). Furthermore, another study showed that combined acamprosate and CBT was superior to CBT alone in terms of abstinence rates, with the combined treatment group achieving greater abstinence compared to the group that received only CBT (PUBMED:16467406). Additionally, a study comparing combined acamprosate and cue exposure therapy (CET) as aftercare to CET alone and aftercare as usual (AAU) found that CET combined with acamprosate was superior in reducing alcohol consumption and achieving greater abstinence at follow-up (PUBMED:34622734). These findings suggest that the use of acamprosate as an adjunctive treatment to CBT can lead to improved short-term outcomes for individuals with alcohol dependence.
Instruction: Multiple repeat caesarean deliveries: do they increase maternal and neonatal morbidity?
Abstracts:
abstract_id: PUBMED:27125601 Multiple repeat caesarean deliveries: do they increase maternal and neonatal morbidity? Objective: The aim of the present study is to evaluate the effects of the increased number of caesarean deliveries (CDs) in cases of multiple repeat caesarean deliveries (MRCDs) on maternal and neonatal morbidity. Methods: MRCDs admitted to our hospital between January 2013 and September 2014 were analysed retrospectively. A total of 1133 women were included in the study and were divided into 4 groups. Group 1: second CDs (n = 329); Group 2: third CDs (n = 225); Group 3: fourth CDs (n = 447); Group 4: fifth CDs (n = 132). The clinical, demographic, intraoperative and postoperative data of the patients were registered upon the review of patient files. Results: The differences among the groups were found to be statistically significant in terms of mean maternal age, gravida, APGAR (Activity, Pulse, Grimace, Appearance, Respiration) scores, hospital stay and operation time. In addition, the difference was also statistically significant for severe adhesion, bladder injury and premature birth. No statistically significant difference was observed among the groups with respect to placenta previa, placenta accreta, caesarean hysterectomy, or uterine scar rupture. Conclusions: According to our findings, MRCDs seem to increase maternal and neonatal morbidity even though they are not life-threatening.

abstract_id: PUBMED:20230324 Timing of elective repeat caesarean section: maternal and neonatal morbidity and mortality. Timing of elective repeat caesarean section should take into account both fetal and maternal considerations. The percentage of caesarean deliveries has dramatically increased during the last decades. It undoubtedly leads to an increase in the number of women having multiple caesarean sections. While maternal morbidity increases with increased number of caesarean sections, late pre-term infants face increased morbidity when compared with their term counterparts. Establishing the optimal time of delivery for both mother and child is a major challenge faced by clinicians. The aim of this review is to better understand neonatal and maternal morbidity and mortality that are associated with elective repeat caesarean section, and to provide an educated decision regarding the optimal timing for elective repeat caesarean section.

abstract_id: PUBMED:30739400 Impact of timing of delivery on maternal and neonatal outcomes for women after three previous caesarean deliveries; a secondary analysis of the caesarean section registry. Background: As more women are presenting with three previous caesarean deliveries (CD), providers may suggest early term delivery as a means to avoid the risk of spontaneous labour and associated maternal morbidity. Objective: To determine whether early term delivery is associated with lower maternal and neonatal morbidity for women with three previous CD. Study Design: Secondary analysis of a prospective registry of CD at 19 US academic centres from 1999 to 2002. Population: Women with three previous CD undergoing scheduled or emergent delivery with live, singleton gestations 37-41+ weeks of gestation were included. Women with non-low transverse incisions, antepartum stillbirth, previous myomectomy, fetal anomalies, more or fewer than three previous CD or attempting trial of labour after caesarean section were excluded.
Methods: Gestational age was categorised by week. We fitted logistic regression models to adjust for clinically relevant or statistically significant confounders. Main Outcome Measures: The primary and secondary outcomes were composites, respectively, of maternal and neonatal morbidity. Results: In all, 821 women met the inclusion criteria; the maternal morbidity composite occurred in 9.86% and neonatal morbidity occurred in 10.5%. After adjusting for confounding variables, maternal and neonatal morbidity occurred least frequently at 39 weeks. Conclusions: In women with three previous CDs, adverse maternal outcomes do not increase with increasing gestational age beyond 37 weeks but early term elective repeat CDs are associated with higher neonatal morbidity. Elective delivery of women with three previous CD at 39 weeks of gestation is safe in the absence of maternal or fetal indications for early term delivery. Tweetable Abstract: Delivery of women with three previous caesarean deliveries at 39 weeks, in the absence of maternal or fetal indications for early term delivery, is associated with decreased maternal morbidity.

abstract_id: PUBMED:26605232 Maternal and neonatal morbidity and mortality rate in caesarean section and vaginal delivery. Background: The cesarean section is one of the most common procedures to prevent health-threatening risks to the mother and infant. The increasing rate of cesarean section has attracted the attention of professionals, and the overall objective of this study was to determine the frequency of maternal and neonatal morbidity and mortality rates in the two methods of delivery. Materials And Methods: In a comparative cohort study, 300 cases undergoing caesarean section and 300 cases with vaginal delivery were selected in two main hospitals of Isfahan, Iran during 2013 and 2014. Demographic characteristics and factors related to mortality and morbidity of mothers and infants were studied. Mothers were also recruited 6 weeks after delivery to ask for complications. Mothers' and infants' mortality and morbidity were studied and analyzed by SPSS 22 software. Results: Follow-up of deliveries up to 1 month after delivery suggested 2 cases of infant death (0.7%) in the vaginal delivery group, while no case of infant death was reported in the cesarean delivery group (P = 0.5). Incidence of fever was observed in the first 10 days after delivery in 7 cases in the vaginal delivery group and 11 cases in the cesarean delivery group (2.3% vs. 3.7%, P = 0.4). Conclusion: Despite all the benefits of vaginal delivery compared with cesarean section, in many cases, especially in emergencies, cesarean section delivery can substantially reduce maternal and neonatal mortality and morbidity. It is recommended to assess the complications of each method in all pregnant women about to give birth, and then decide on the method of delivery.

abstract_id: PUBMED:38287196 Maternal and neonatal morbidity associated with Fetal Pillow® use at full dilatation caesarean: A retrospective cohort. Objective: To investigate associations of the Fetal Pillow® with maternal and neonatal morbidity. Design: Retrospective cohort. Setting: Two tertiary maternity units, New Zealand.
Population Or Sample: Full dilatation singleton, term, cephalic caesarean section, with three comparisons: at Unit A (1) before versus after introduction of the Fetal Pillow® (1 January 2016-31 October 2021); (2) with versus without the Fetal Pillow® after introduction (27 July 2017-31 October 2021); and (3) between Unit A and Unit B during the same time period (1 January 2019-31 October 2021). The Fetal Pillow® is unavailable at Unit B. Methods: Cases were ascertained and clinical data were extracted from electronic clinical databases and records. Outcome data were adjusted and presented as adjusted odds ratios (aOR) with 95% CI. Main Outcome Measures: Primary outcome "any" uterine incision extension; secondary outcomes included major extension (into adjacent structures), and a composite neonatal outcome. Results: In all, 1703 caesareans were included; 375 with the device and 1328 without. Uterine incision extension rates were: at Unit A before versus after introduction: 26.8% versus 24.8% (aOR 0.88, 95% CI 0.65-1.19); at Unit A with the Fetal Pillow® versus without: 26.1% versus 23.8% (aOR 1.14, 95% CI 0.83-1.57); and at Unit A versus Unit B: 24.2% versus 29.2% (aOR 0.73, 95% CI 0.54-0.99). No differences were found in major extensions, or neonatal composite outcome. Conclusions: Despite the relatively large size of this study, it could not rule out either a positive or a negative association between use of the Fetal Pillow® and uterine extensions, major uterine incision extensions, and neonatal morbidity. Randomised controlled trial evidence is required to assess efficacy.

abstract_id: PUBMED:31478414 Comparison of maternal and perinatal morbidity between elective and emergency caesarean section in singleton-term breech presentation. The objective of the study was to compare maternal and neonatal adverse outcomes between elective caesarean section (ElCS) and emergency caesarean section (EmCS) for singleton-term breech presentation. This study included women with singleton breech presentation who underwent ElCS or EmCS at term during 2007-2015 at Siriraj Hospital (Thailand). Complete data were collected for 2178 pregnant women. Of those, 1322 (60.7%) women underwent EmCS, and 856 (39.3%) delivered by ElCS. Maternal and perinatal morbidity were compared. There was no maternal or perinatal death in either group. Maternal morbidity was comparable between groups, except for longer hospital stay in the EmCS group (p = .047). One-minute Apgar score was significantly lower in the EmCS group (p = .040). There was no significant difference in 5-min Apgar score between groups. No significant difference was observed for serious maternal and neonatal morbidity between women who underwent ElCS versus those who underwent EmCS for singleton-term breech presentation. IMPACT STATEMENT: What is already known on this subject? Emergency caesarean section (EmCS) is generally known to be associated with a higher risk of maternal and neonatal complications than elective caesarean section (ElCS). What do the results of this study add? In singleton-term breech presentation, EmCS in a tertiary care setting was not associated with an increase in serious maternal and neonatal morbidity compared with ElCS.
Cord prolapse as an indication for emergency caesarean section was significantly associated with adverse outcomes, while advanced cervical dilation ≥7 cm or low foetal station ≥+1 did not have an impact on maternal and neonatal complications. What are the implications of these findings for clinical practice and/or further research? Mean gestational age in both the ElCS and EmCS groups was approximately 38 weeks and 5 d; there were no neonatal cases with respiratory distress syndrome. Our findings suggest a further prospective study of planned caesarean section scheduled for 38-39 weeks in patients with term breech presentation. The results of such a study could yield lower rates of both EmCS and potential adverse outcomes.

abstract_id: PUBMED:32783334 Maternal characteristics and neonatal outcomes of emergency repeat caesarean deliveries due to early-term spontaneous labour onset. Background: The optimal timing of elective repeat caesarean delivery has yet to be determined. One of the reasons to schedule an elective repeat caesarean delivery before 39 weeks gestation is to avoid emergency caesarean delivery due to spontaneous onset of labour. Aims: By ascertaining maternal characteristics and neonatal outcomes associated with early-term onset of spontaneous labour, we aim to determine the optimal timing for each individual repeat caesarean delivery. Materials And Methods: We performed a retrospective analysis of women with repeat caesarean deliveries planned at 38 weeks gestation between 2005 and 2019 at a tertiary referral hospital in Japan. A multivariate logistic regression analysis was adopted to identify independent contributing factors for early-term spontaneous labour onset. We also compared the rate of neonatal adverse events between women who underwent emergency repeat caesarean deliveries due to the onset of early-term labour and those who underwent elective repeat caesarean deliveries at 38 weeks. Results: We included 1152 women. History of vaginal deliveries (adjusted odds ratio (AOR), 2.12; 95% confidence interval (95% CI), 1.21-3.74), history of preterm deliveries (AOR, 2.28; 95% CI, 1.38-3.77), and inadequate maternal weight gain during pregnancy (AOR, 1.78; 95% CI, 1.15-2.75) significantly increased the risk of early-term spontaneous labour onset. In terms of occurrence rate of neonatal complications, we found no significant difference between the groups. Conclusion: These maternal factors are significant predictors for early-term labour onset of repeat caesarean deliveries. The onset of early-term labour did not increase the likelihood of neonatal complications.

abstract_id: PUBMED:25356739 Timing of elective repeat caesarean does matter: Importance of avoiding early-term delivery especially in diabetic patients. Five hundred and three elective repeat caesarean sections were analysed to examine the impact of timing of delivery between 37 and 40 weeks' gestation on foetal and maternal short-term outcome. Foetal outcome was based on gestational age, Apgar scores and admission to the neonatal intensive care unit (NICU). Maternal complications were comparatively evaluated. Due to the increasing incidence of gestational diabetes, a subgroup analysis on this issue was performed. Neonates born by elective repeat caesarean in early term had a 3.2 times increased risk of being admitted to the NICU due to foetal adaptation disorders in the early post-natal phase. Overall maternal peri-operative risks were low and did not differ significantly between 37 and 41 weeks' gestation.
Maternal gestational diabetes constituted an additional independent risk factor in early term. In summary, elective early-term caesarean delivery appears to negatively impact immediate neonatal outcome. Waiting at least until 38 completed weeks' gestation improves foetal outcome, especially in diabetic patients.

abstract_id: PUBMED:26364513 Case series of multiple repeat caesarean sections: operative, maternal, and neonatal outcome. Objective: The objective of this study is to review the maternal and neonatal morbidity and mortality associated with six or more caesarean sections (CS). Methods: We conducted a retrospective chart review, at King Abdulaziz University Hospital (KAUH) in Jeddah, for all patients admitted from 2000 through 2010 and identified five patients having more than six CS deliveries. Results: Deliveries occurred in the range of 31-38 weeks, of which four cases required emergency CS. There were two cases in the series with a placenta previa. There was a single case of uterine dehiscence. Only one case required a blood transfusion and was complicated with a placenta accreta, bladder injury, urinary tract infection, and prolonged maternal hospital stay with neonatal intensive care unit (NICU) admission. All cases had moderate to severe adhesion intra-operatively. Operative time was long in all cases, with a range of 55-106 min. One of the five cases had a postoperative wound infection. Finally, none of the cases in the series showed fetal or maternal mortalities. Conclusions: The long-term complications associated with CS should be discussed with patients in the first and subsequent pregnancies. This case series highlighted the outcomes in these unique cases of higher order caesareans.

abstract_id: PUBMED:24506582 Indications for caesarean sections at ≥34 weeks among nulliparous women and differential composite maternal and neonatal morbidity. Objective: To compare composite maternal and neonatal morbidities (CMM, CNM) among nulliparous women with primary indications for caesarean section (CS) as acute clinical emergency (group I; ACE), non-reassuring fetal heart rate (group II) and arrest disorder (group III). Design: A multicentre prospective study. Setting: Nineteen academic centres in the USA, with deliveries in 1999-2002. Population: Nulliparous women (n = 9829) that had CS. Methods: Nulliparous women undergoing CS for three categories of indications were compared using a logistic regression model, adjusted for five variables. Main Outcome Measures: CMM was defined as the presence of any of the following: intrapartum or postpartum transfusion, uterine rupture, hysterectomy, cystotomy, ureteral or bowel injury or death; CNM was defined as the presence of any of the following: umbilical arterial pH <7.00, neonatal seizure, cardiac, hepatic, renal dysfunction, hypoxic ischaemic encephalopathy or neonatal death. Results: The primary reasons for CS were ACE in 1% (group I, n = 114), non-reassuring FHR in 29% (group II; n = 2822) and failed induction/dystocia in the remaining 70% (group III; n = 6893). The overall risks of CMM and CNM were 2.5% (95% confidence intervals, CI, 2.2-2.8%) and 1.9% (95% CI 1.7-2.2), respectively. The risk of CMM was higher in group I than in group II (RR 4.1, 95% CI 3.1, 5.3), and group III (RR 3.2, 95% CI 2.7, 3.7). The risk of CNM was also higher in group I than in group II (RR 2.8, 95% CI 2.3, 3.4) and group III (RR 14.1, 95% CI 10.7, 18.7).
Conclusions: Nulliparous women who have acute clinically emergent caesarean sections are at the highest risks of both composite maternal and neonatal morbidity and mortality.

Answer: Yes, multiple repeat caesarean deliveries (MRCDs) appear to increase maternal and neonatal morbidity. A study that evaluated the effects of the increased number of caesarean deliveries (CDs) on maternal and neonatal morbidity found that as the number of CDs increased, there were statistically significant differences in terms of mean maternal age, gravida, APGAR scores, hospital stay, and operation time. Additionally, there were significant differences for severe adhesion, bladder injury, and premature birth. However, no significant difference was observed with respect to placenta previa, placenta accreta, caesarean hysterectomy, or uterine scar rupture among the groups with varying numbers of CDs (PUBMED:27125601). Furthermore, the timing of elective repeat caesarean section is crucial to avoid increased morbidity and mortality for both mother and child. A review aimed to understand the morbidity and mortality associated with elective repeat caesarean section and emphasized the challenge of determining the optimal timing for such procedures (PUBMED:20230324). Another study focusing on women with three previous CDs found that adverse maternal outcomes did not increase with gestational age beyond 37 weeks, but early term elective repeat CDs were associated with higher neonatal morbidity. It concluded that elective delivery at 39 weeks of gestation is safe in the absence of maternal or fetal indications for early term delivery (PUBMED:30739400). In summary, the evidence suggests that MRCDs are associated with increased maternal and neonatal morbidity, and careful consideration of the timing for elective repeat caesarean sections is important to minimize these risks.
Instruction: Urothelial and incidental prostate carcinoma in prostates from cystoprostatectomies for bladder cancer: is there a relationship between urothelial and prostate cancer?
Abstracts:
abstract_id: PUBMED:25586692 Atypical urothelial tissue fragments in noninstrumented voided urine specimens are associated with low but significantly higher rates of urothelial neoplasia than benign-appearing urothelial tissue fragments. Background: The interpretation of urothelial tissue fragments (UTF) in voided urine (VU) specimens is controversial. If UTF contain cytomorphologically atypical cells, the diagnosis often becomes more challenging. The authors previously analyzed the outcome of patients with benign-appearing UTF in 274 noninstrumented VU specimens. In this retrospective study, noninstrumented VU specimens containing UTF with atypical cytomorphological features (AUTF) were evaluated and compared with the previous results. Methods: The Johns Hopkins Hospital electronic pathology database was searched for VU cases containing UTF over a 5-year period. Results: A total of 170 noninstrumented VU specimens containing AUTF were identified. Twenty-four specimens had subsequent or coincidental surgical pathology specimens with the following overall rates of neoplasia: high-grade urothelial carcinoma: 8.8% (15 specimens), low-grade urothelial neoplasia: 1.2% (2 specimens), and prostate carcinoma invading the bladder: 0.6% (1 specimen). A total of 49 specimens (28.8%) were diagnosed with urolithiasis on follow-up. Twenty-five cases had follow-up cytology specimens, all of which were negative for malignancy. Of 72 cases without histopathologic, radiologic, or cytopathologic follow-up, 62 (86.1%) had a mean clinical follow-up of 22.5 months and 10 cases did not have clinical follow-up. Conclusions: The presence of AUTF in noninstrumented VU is associated with low rates of urothelial neoplasia but a statistically higher risk of urothelial neoplasia than the presence of BUTF (10.0% vs 4.4%; P<.05). In particular, the rate of high-grade urothelial carcinoma is significantly higher in noninstrumented VU specimens containing AUTF than those containing BUTF (8.8% vs 0.7%; P<.0001). In addition, urolithiasis is associated with AUTF in a substantial percentage of noninstrumented VU specimens.

abstract_id: PUBMED:23707451 Incidental prostate cancer in patients with bladder urothelial carcinoma: comprehensive analysis of 1,476 radical cystoprostatectomy specimens. Purpose: We identified risk factors and determined the incidence and prognosis of incidental, clinically significant prostatic adenocarcinoma, prostatic urothelial carcinoma and HGPIN in patients treated with radical cystoprostatectomy for urothelial carcinoma of the bladder. Materials And Methods: We analyzed the records of 1,476 patients without a history of prostatic adenocarcinoma. We determined the incidence of clinically significant prostatic adenocarcinoma, prostatic urothelial carcinoma and HGPIN in the total cohort and in select patient subgroups. Prostatic urothelial carcinoma was stratified as prostatic stromal and prostatic urethral/duct involvement. Univariate and multivariate analyses were performed with multiple variables. Recurrence-free and overall survival rates were calculated. Median followup was 13.2 years. Results: Of the 1,476 patients 753 (51.0%) had cancer involving the prostate.
Prostatic adenocarcinoma, clinically significant prostatic adenocarcinoma, prostatic urothelial carcinoma and HGPIN were present in 37.9%, 8.3%, 21.1% and 51.2% of patients, respectively. Of the 312 patients (21.1%) with prostatic urothelial carcinoma 163 (11.0%) had prostatic urethral/duct involvement only and 149 (10.1%) had prostatic stromal involvement. We identified risk factors for clinically significant prostatic adenocarcinoma, prostatic urothelial carcinoma and HGPIN but the absence of these risk factors did not rule out their presence. Ten-year overall survival in patients with no prostatic urothelial carcinoma, and prostatic urethral/duct and prostatic stromal involvement was 47.1%, 43.3% and 21.7%, respectively (p<0.001). No patient with clinically significant prostatic adenocarcinoma died of prostatic cancer. Conclusions: More than half of the patients undergoing radical cystoprostatectomy had cancer involving the prostate. Prostatic urothelial carcinoma, particularly with prostatic stromal involvement, was associated with a worse prognosis, while clinically significant prostatic adenocarcinoma did not alter survival. Preoperative clinical and histopathological risk factors are not reliable enough to accurately predict clinically significant prostatic adenocarcinoma and/or prostatic urothelial carcinoma.

abstract_id: PUBMED:34755067 The secondary tumor of the prostate derived from upper tract urothelial carcinoma: An autopsy case. Introduction: Metastatic urothelial carcinomas are common in lung, liver, and lymph nodes. We present rare secondary tumor of the prostate metastasized from upper tract urothelial carcinoma. Case Presentation: An 87-year-old man was diagnosed as urothelial carcinoma of left upper tract and bladder. Only transurethral resection of bladder tumor was performed as palliative therapy to control hematuria. Thereafter, the tumor of left upper tract showed aggressive progression with multiple metastases involving lymph nodes and bilateral lungs. Finally, autopsy revealed swelling of left kidney due to tumor growth and systemic cancer disseminations involving bilateral lungs and renal hilar lymph nodes. In addition, prostate tumor was found incidentally. Histological examination including immunohistochemistry revealed the prostate tumor as metastatic tumor from urothelial carcinoma of left renal pelvis. Conclusion: We reported rare secondary tumor of the prostate, derived from upper tract urothelial carcinoma. Further consideration would be required to provide better knowledge of the disease.

abstract_id: PUBMED:37228532 Correlation of HER2 With p53 and p63 in Urothelial Bladder Carcinoma. Urothelial carcinomas account for the majority of all primary bladder cancers, making bladder cancer the second most frequent genitourinary malignancy after prostate cancer. Bladder cancer risk rises with age and most of them return after resection due to their multifocal distribution, and they often develop in superficial locations. Like many other cancers, bladder carcinoma is associated with a few tumor markers that have been evaluated in the past. They include p53, p63, and HER2. This study was conducted on 88 patients suspected of urinary bladder carcinoma. This prospective study was done at the Department of Pathology, Osmania General Hospital, Hyderabad from August 2017 to July 2019. Of the 88 patients, 76 were diagnosed with bladder carcinoma and the remaining 12 were non-neoplastic.
The primary neoplastic lesions of the urinary bladder were predominantly seen in patients older than 40 years and were found to be statistically significant (p<0.01). Of the 34 cases of high-grade papillary urothelial carcinoma (PUC), 26 (76.47%) were males, eight cases (23.53%) were females, while among the 25 cases of low-grade PUC, 20 cases (80%) were males, and five cases (20%) were females. In seven cases of squamous cell carcinoma, six cases (85.71%) were males and only one case (14.29%) was female. Of the two cases of adenocarcinoma, male and female gender accounted for one case each (50%). The two cases of papillary urothelial neoplasm of low malignant potential were males in the study. On the whole, the primary urinary bladder lesions are more predominant in the males (77.63%) than the females (22.37%). Overexpression of p53 is negatively connected to p63 expression, and HER2 and p53 were strongly associated with high tumor grade in urothelial carcinoma.

abstract_id: PUBMED:30595755 ASCO 2018: highlights of urothelial cancer and prostate cancer. Prostate cancer and urothelial carcinoma are the two most common urological cancers. The aim of this short review is to highlight abstracts from this year's ASCO Annual Meeting. The phase III SPCG-13 trial showed no difference in biochemical disease-free survival by the addition of docetaxel after primary radiation therapy of localized high-risk prostate cancer. In bone dominant metastatic castration resistant prostate cancer, the phase II radium-223 dose escalation study concluded that the currently used dose with 6 cycles of 55 kBq/kg remains the standard of care. The PARP inhibitor olaparib plus abiraterone provided a significant benefit in radiological progression-free survival compared with abiraterone alone, independent of homologous recombination repair (HRR) mutation status. In localized muscle-invasive urothelial carcinoma, two phase II trials (ABACUS and PURE-01) exploring the pathological complete remission rate of atezolizumab and pembrolizumab prior to cystectomy in cisplatin-unfit or cisplatin-fit patients are presented. Novel targeted therapies such as fibroblast growth factor receptor (FGFR) inhibitors or monoclonal antibodies against nectin-4 confirmed astonishing objective response rates in heavily pretreated metastatic urothelial carcinoma (mUC) patients, resulting in a median overall survival (OS) up to 13.8 months. Finally, updated 1-year and 2-year OS survival rates of pembrolizumab and atezolizumab in the first line setting of mUC are presented.

abstract_id: PUBMED:19076141 Urothelial and incidental prostate carcinoma in prostates from cystoprostatectomies for bladder cancer: is there a relationship between urothelial and prostate cancer? Objective: To determine the incidence and features of urothelial carcinoma (UC) involving the prostate (UCP) and of prostate adenocarcinoma (PA) in radical cystoprostatectomy (RCP) for bladder cancer. Patients And Methods: The whole-mount prostate sections of 248 RCP consecutively examined from 1995 to 2007 were reviewed to determine the incidence and features of UCP and PA. UCP was separately evaluated for UC originating from the urethra and peri-urethral ducts (PUC) and for direct extension of bladder UC. Results: There was UCP in 94 (37.9%) of 248 patients, whereas PUC was present in 78 (31.5%). UC in situ and noninvasive papillary PUC was present in 42 (53.9%) of the 78, whereas stromal invasion was present in 36 (46.1%).
Direct extension of UC from the bladder only was present in 16 (6.5%) patients. PA was present in 123 (49.6%) of 248 patients; 96 (78.1%) were in the peripheral zone. In 107 patients (87%) the Gleason score was ≤6; 96 (78.1%) were pT2a, with a lower frequency in other pT categories. The margins were negative in 96.7% of cases. All patients were pN0 for PA; in 116 (95.1%) the volume was <0.5 mL. Of the 123 patients with incidental prostate cancer, 100 cancers (81.3%) were considered clinically insignificant. UCP coexisted with PA in 43 (17.3%) of 248 patients, whereas PUC and PA occurred together in the same prostate in 32 (12.9%). Direct extension of UC from the bladder and PA occurred together in the same prostate in 11 (4.4%) patients. Conclusions: UC involving the prostate and PA are present in most RCP specimens. UC can arise from extension of trigonal or bladder-neck tumours, proximal prostate ducts/urethra, or from cell implantation from manipulation of vesical neoplasms. The frequent high coincidence of prostate and bladder cancer can be explained by a common pathway of carcinogenesis.

abstract_id: PUBMED:33156041 Incidental Detection of Urothelial Carcinoma on 18F-Fluciclovine PET/CT. Abstract: 18F-Fluciclovine PET/CT has become a common diagnostic imaging study used in the evaluation of biochemical recurrence in prostate cancer since its approval in 2016. We present a case report of an 82-year-old man with history of both prostate and bladder cancer who presented for a fluciclovine study due to rising PSA levels. There was incidental detection of focal penile activity, and a subsequent urethral biopsy performed showed urothelial carcinoma, which was also seen on a subsequent MRI study.

abstract_id: PUBMED:28947891 Co-existence of mucin-producing urothelial-type adenocarcinoma of the prostate and inverted papilloma of the bladder. Adenocarcinoma of prostate with mucinous differentiation arising in the male urethra is extremely rare, with only 21 cases reported in the previous literature. A diagnosis of mucin-producing urothelial carcinoma of the prostate is based on the pathology, immunohistochemistry, and clinical examination by excluding the secondary adenocarcinoma of the prostate. We present a case of unexpected mucinous urothelial carcinoma of prostate with co-existing inverted papilloma of bladder in a 57-year-old man. The patient underwent transurethral resection of the prostate (TURP) and transurethral resection of a bladder tumour (TUR-Bt), and the pathologic result showed mucinous prostate carcinoma and bladder inverted papilloma. Immunohistological stain was negative for prostate-specific antigen (PSA), prostate-specific acid phosphatase (PSAP), and P63, but positive for cytokeratin 7 (CK 7), CK 20, clone 34βE12 and P504S. A complete endoscopic examination was performed to exclude the secondary adenocarcinoma of prostate. This case illustrates the clinical and pathological features of a rare and unexpected mucin-producing urothelial carcinoma of prostate in a bladder neoplasm patient.

abstract_id: PUBMED:27069958 Papillary urothelial carcinoma with squamous differentiation in association with human papilloma virus: case report and literature review. Background: The human papilloma virus (HPV) is a carcinogen known for its strong association with cervical cancers and cervical lesions. It is also known to be associated with a variety of squamous cell carcinomas in other areas, such as the penis, vulva, anus and head and neck.
However, the association with urothelial carcinoma remains controversial. Here, we report a case of urothelial carcinoma with squamous differentiation associated with HPV-6/HPV-11. Case Presentation: This is a case of a 70-year-old man who presented with nocturia and pressure during urination. During the TURP procedure for what was clinically thought to be benign prostate hyperplasia with pathologic diagnosis as prostate carcinoma, a 2 cm papillary mass was found in the distal penile urethra. The papillary mass was found to be a high grade urothelial carcinoma positive for GATA 3 expression, with focal areas of squamous differentiation. The areas with squamous differentiation demonstrated koilocytic differentiation, which were positive for strong p16 expression. The tumor was found to harbor low risk HPV 6/11 by in situ hybridization. Conclusions: This case demonstrates HPV infection with a low risk subtype (HPV 6/11) associated with a urothelial carcinoma with squamous differentiation and condylomatous features.

abstract_id: PUBMED:34898072 Primary urothelial carcinoma of the prostate: A case report and review of the literature. Objective: To investigate the clinical diagnosis, treatment and prognosis of primary urothelial carcinoma of the prostate. Methods: We retrospectively analyzed the clinical data on one case of primary urothelial carcinoma of the prostate and reviewed the relevant published literature. Results: The patient, aged 83 years and diagnosed with primary urothelial carcinoma of the prostate, was treated by neoadjuvant chemotherapy with gemcitabine plus cisplatin preoperatively, injected with 5-fluorouracil into the cutting edge of the tumor intraoperatively, and pathologically confirmed with high-grade invasive primary urothelial carcinoma of the prostate, followed by adjuvant chemotherapy with gemcitabine plus cisplatin. No recurrence or metastasis was observed during 1-year follow-up. Conclusions: Primary urothelial carcinoma of the prostate is rare and highly malignant. If the tumor is localized, the patient needs to be treated by neoadjuvant chemotherapy preoperatively, injection with anti-tumor drugs into the cutting edge of the tumor intraoperatively, and adjuvant chemotherapy and regular follow-up postoperatively.

Answer: The relationship between urothelial and prostate cancer in the context of cystoprostatectomies for bladder cancer has been explored in various studies. The frequent coexistence of prostate and bladder cancer has been observed, suggesting a possible common pathway of carcinogenesis (PUBMED:19076141). More than half of the patients undergoing radical cystoprostatectomy for urothelial carcinoma of the bladder were found to have cancer involving the prostate (PUBMED:23707451). This included a significant incidence of incidental, clinically significant prostatic adenocarcinoma, prostatic urothelial carcinoma, and high-grade prostatic intraepithelial neoplasia (HGPIN). Prostatic urothelial carcinoma, particularly with prostatic stromal involvement, was associated with a worse prognosis, while clinically significant prostatic adenocarcinoma did not alter survival (PUBMED:23707451). Urothelial carcinoma involving the prostate (UCP) and prostate adenocarcinoma (PA) were present in most radical cystoprostatectomy specimens, with UC arising from extension of trigonal or bladder-neck tumors, proximal prostate ducts/urethra, or from cell implantation from manipulation of vesical neoplasms (PUBMED:19076141).
Additionally, incidental detection of urothelial carcinoma on 18F-Fluciclovine PET/CT, which is commonly used in the evaluation of biochemical recurrence in prostate cancer, has been reported, indicating that diagnostic imaging for prostate cancer can incidentally detect urothelial carcinoma (PUBMED:33156041). In summary, there is a significant relationship between urothelial and prostate cancer in patients undergoing cystoprostatectomy for bladder cancer, with a high incidence of cancer involving the prostate. The coexistence of these cancers may be due to a shared carcinogenic pathway, and the presence of prostatic urothelial carcinoma can have prognostic implications.
Instruction: Is the change in body mass index among children newly diagnosed with type 1 diabetes mellitus associated with obesity at transition from pediatric to adult care?
Abstracts:
abstract_id: PUBMED:26748829 Is the change in body mass index among children newly diagnosed with type 1 diabetes mellitus associated with obesity at transition from pediatric to adult care? Background: Insulin therapy is lifesaving treatment for individuals with type 1 diabetes (T1D). Its initiation may be associated with significant weight gain because of change from a catabolic to an anabolic state. Excessive weight gain increases the risk of obesity and is associated with chronic disease. Objective: To examine if change in body mass index (BMI) among children in the 6 months after diagnosis with type 1 diabetes mellitus is associated with long-term obesity. Methods: This was a population-based retrospective study of 377 children (aged 2-18 yr) with type 1 diabetes. Measured heights and weights were used to calculate BMI z-scores based on Centers for Disease Control and Prevention (CDC) cut-points. Generalized Linear Models using BMI group, age group at diagnosis, postdiagnosis weight change, and sex were applied to assess associations between postdiagnosis weight change and BMI z-score at transition to adult care. Results: Mean BMI z-score increased from 0.28 at diagnosis, to 0.53 at 6 months and 0.66 at transition to adult care. Change in BMI z-scores differed by initial BMI group and magnitude of postdiagnosis weight change. Younger children (<11 yr) had higher (p = 0.004) BMI z-scores at diagnosis but not at last visit (p = 0.1) than older (≥11 yr) children at diagnosis. BMI z-score at diagnosis, postdiagnosis weight change, female sex, and longer duration with T1D were associated with higher BMI z-score at time of transition. Conclusions: BMI z-score at diagnosis was the strongest predictor of BMI z-score at time of transition to adult care; however, its effect was mediated by magnitude of weight change 6 months after diagnosis, sex, and age group at diagnosis.

abstract_id: PUBMED:31990164 Increased Incidence of Type 1 Diabetes in Children and No Change in the Age of Diagnosis and BMI-SDS at the Onset - is the Accelerator Hypothesis not Working? Objective: One of the hypothesized reasons for the observed increase in type 1 diabetes incidence in children is weight gain, causing accelerated disease development in predisposed individuals. This so-called accelerator hypothesis is, however, controversial. The aim was to analyze whether, in the ethnically homogeneous population of Lesser Poland, an increase in the number of cases of diabetes among children was associated with younger age and higher body mass index-standard deviation score (BMI-SDS) at the time of diagnosis. Methods: Retrospective data analysis from medical records of all patients <14 years (n=559; 50.6% male), with newly diagnosed type 1 diabetes, in Lesser Poland between 1st January 2006 and 31st December 2017 (11 years). Results: The incidence ratio ranged significantly (p<0.001) from the lowest in 2006 (11.2/100,000/year) to the highest in 2012 (21.9/100,000/year). The mean age of diagnosis was 8.2±3.5 years. There was no trend in decreasing diagnosis age (p=0.43). The mean BMI-SDS was -0.4±1.2. Almost all children (91.6%) presented with BMI-SDS within the normal range at the time of diagnosis, with only 2.7% of cases being obese and 5.7% underweight at the moment of diagnosis.
There was no clear trend at all in BMI-SDS over the study period. Conclusion: These results do not corroborate the hypothesis that the increase in type 1 diabetes incidence in the paediatric population is associated with younger age at diagnosis and higher BMI-SDS. This implies that the accelerator hypothesis does not hold true in the study population.

abstract_id: PUBMED:33981216 Analysis of Time Course and Dose Effect From Metformin on Body Mass Index in Children and Adolescents. The purpose of this study was to analyze the time course and dose effect from metformin on body mass index (BMI) in children and adolescents by model-based meta-analysis (MBMA). Randomized controlled trial (RCT) studies of metformin on BMI in children and adolescents were searched. The change rates of BMI from baseline values were used as the indicator for evaluating metformin efficacy. A total of 18 RCT studies, 1,228 children and adolescents, were included for analysis, including patients with obesity, patients with type 1 diabetes mellitus, patients with nonalcoholic fatty liver, and patients with precocity. In order to achieve better effect of metformin on BMI in children and adolescents, the present study recommended that for patients with obesity, 1,000 mg/day metformin was required for at least 15.2 weeks and 60.8 weeks to achieve the plateau of metformin effect; for patients with type 1 diabetes mellitus, 1,000 mg/day metformin was required for at least 25.2 weeks and 100.8 weeks to achieve the plateau of metformin effect; for patients with nonalcoholic fatty liver, 1,000 mg/day metformin was required for at least 6.57 weeks and 26.28 weeks to achieve the plateau of metformin effect; for patients with precocity, 425 mg/day metformin was required for at least 12.4 weeks and 49.6 weeks to achieve the plateau of metformin effect. This was the first analysis of the time course and dose effect of metformin on BMI, and the first to recommend the dosage and duration of metformin treatment for children and adolescents with different disease types.

abstract_id: PUBMED:35011031 An Attempt to Assess the Impact of Pandemic Restrictions on the Lifestyle, Diet, and Body Mass Index of Children with Endocrine Diseases-Preliminary Results. Background: Home isolation during the coronavirus 2019 (COVID-19) pandemic lockdown strongly impacted everyday life, affecting, in particular, eating habits and everyday activity. The aim of this study was to analyze the impact of the pandemic on behaviors and subsequent changes in body mass index (BMI) in children from Southern Poland. Methods: The study included 206 participants (104 females and 102 males) with a complete analysis of 177 participants (96 females and 81 males) with a mean age of 12.8 ± 2.6 years admitted to three pediatric endocrinology clinics (Rzeszów, Kraków, and Katowice) due to simple obesity, type 1 diabetes mellitus, somatotropin pituitary deficiency on growth hormone replacement therapy, and other endocrine and metabolic disorders between June and September 2020. The study used a self-prepared questionnaire regarding eating habits, physical activity, screen time, and sleep before and during the lockdown. Anthropometric measurements were performed under clinical settings twice (before the pandemic in January-March 2020, and in June-September 2020). Results: During the lockdown, BMI z-scores increased over the whole group, especially in obese children (0.073 ± 0.18, p = 0.002).
The number of children who declared low and high physical activity of more than 60 min per day declined from 41.2% and 18.6% to 31.1% and 6.2% (p = 0.03 and p < 0.001), respectively; sleep times over 8 h increased (46.9% vs. 60.4%, p = 0.007); screen times over 5 h daily increased (14.7% to 46.9%, p < 0.001). Eating habits did not change significantly. Conclusions: Daily physical activity and sleep levels were affected by the pandemic, leading to an increase in BMI, especially in obese patients with endocrine disorders. During the COVID-19 pandemic, forward-thinking strategies must be developed to prevent childhood obesity.

abstract_id: PUBMED:31498940 Differences in sleep architecture according to body mass index in children with type 1 diabetes. Slow wave sleep (SWS), or deep sleep, is thought to be the most restorative stage of sleep and may be of particular interest in the pathophysiology of obesity. The aim of this study was to investigate differences in sleep architecture based on body mass index (BMI) among a pediatric population with type 1 diabetes mellitus (T1DM). We hypothesized that children with T1DM who are obese would have less SWS than those who are not obese. Of 105 children with T1DM (mean age 13.54 years, 49.5% females) in this study, 19% were obese, 22% were overweight, and 59% had a normal BMI (81% non-obese). The overall SWS% among the participants was 13.2%. In contrast to our hypothesis, there was no significant difference in SWS% between obese and non-obese participants. However, the percent of time spent in rapid eye movement (REM) sleep among obese participants was significantly lower than those who were not obese (P = .022), which remained after adjusting the result for multiple covariates. While we found no significant association between the SWS time and BMI, obese adolescents with T1DM spent less time in REM sleep than those who were not obese. This study adds to the growing body of evidence supporting the importance of addressing sleep in clinical care of youth with T1DM.

abstract_id: PUBMED:20455418 Blood pressure in children and adolescents with type 1 diabetes mellitus--the influence of body mass index and fat mass. Introduction: Prevalence of arterial hypertension and prehypertension in children and adolescents with type 1 diabetes mellitus is higher than in healthy people, even when diabetic nephropathy symptoms (microalbuminuria) in these patients are not detected. Aim Of The Study: To estimate the influence of body mass and body composition on blood pressure in children and adolescents with type 1 diabetes mellitus. Material And Methods: In 164 patients (M-92, F-72), aged 14.8 ± 2.4 years, with diabetes duration 6.0 ± 4.2 years, body weight, height, casual blood pressure, and HbA1c level were measured. Body composition (fat mass - %FAT) was determined by bioimpedance analysis. Body mass index (BMI) and daily dose of insulin were calculated. Results: Based on BMI value, obesity was diagnosed in 8 (4.9%) and overweight in 37 (22.6%) patients. In multivariate analysis we found a positive correlation between systolic blood pressure (SBP) and both BMI and %FAT, and between diastolic blood pressure (DBP) and both BMI and %FAT. Patients with BMI within normal range had the lowest value of SBP. DBP in patients with normal BMI value was lower than in overweight subjects. No differences of SBP and DBP between overweight and obese patients were noted. DBP value increased from the lower to the upper quartile of %FAT.
No significant correlation was found between SBP and the quartile value of %FAT. Conclusion: Body mass index and body fat mass are significantly associated with blood pressure levels in children and adolescents with type 1 diabetes mellitus.

abstract_id: PUBMED:9625360 Serum leptin levels in children and adolescents with insulin-dependent diabetes mellitus in relation to metabolic control and body mass index. The ob protein, termed leptin, is produced by adipocytes and is thought to act as an afferent satiety signal regulating weight through suppressing appetite and stimulating energy expenditure in humans and/or rodents. Insulin has been found to be a potent stimulator of leptin expression in rodents. It is unclear at present whether this insulin action is a direct or an indirect effect. To investigate whether leptin concentrations in children and adolescents with type 1 diabetes (IDDM) were related to metabolic status, body weight, body mass index and insulin treatment, we have measured leptin concentrations in serum from 13 newly diagnosed IDDM patients before the beginning of insulin treatment (8 girls, 5 boys, aged 4.7-17.5 years) and in 134 patients with IDDM during treatment (64 girls, 70 boys, aged 2.6-20.1 years) using a specific radioimmunoassay. The data from patients with diabetes were compared with normative data that were derived from a large cohort of healthy children and adolescents. Serum from children with newly diagnosed diabetes had significantly lower levels of leptin (mean 1.28±1.60 ng/ml, range 0.14-6.13 ng/ml) compared with healthy children (n=710) (mean 2.2 ng/ml, range 0.26-14.4 ng/ml) and compared with insulin-treated children and adolescents (mean 5.18±5.48 ng/ml, range 0.26-29.77 ng/ml) (P<0.0001) even after adjustment for gender and body mass index (BMI). Serum leptin levels in patients with IDDM were significantly correlated with BMI (r=0.42, P<0.0001). Multiple regression analysis showed that age and BMI were significantly correlated with leptin levels, while duration of diabetes, mean HbA1c levels, insulin dose and plasma glucose, triglyceride and cholesterol levels were not. Females had higher serum leptin concentrations than males even when adjusted for BMI (P<0.0001). Surprisingly and most importantly, leptin levels in insulin-treated young adult (Tanner stage 5) patients were significantly higher than values found in the healthy nondiabetic reference population when adjusted for sex, Tanner stage and BMI. These findings suggest that leptin levels in IDDM patients show a similar dependency on adipose tissue and age as in healthy, normal children. The data provide evidence that insulin may be of importance as a regulator of serum leptin levels in vivo not only in rodents but also in humans. It is hypothesized that the elevated BMI-adjusted leptin levels in adolescents with IDDM could indicate either that these patients may be oversubstituted by the intensified insulin therapy that they are receiving or that their body composition and body fat content may differ from that of healthy adolescents in the sense that they have a relative increase in fat mass.

abstract_id: PUBMED:25612637 Analysis of body composition with the use of bioimpedance in children with type 1 diabetes. Introduction: Patients with diagnosed type 1 diabetes are a group particularly exposed to cardiovascular complications related to obesity.
According to some previous data, one of the useful methods to analyze body composition may be a method of bioelectrical impedance. Aim Of The Study: The aim of this research was to attempt to find a correlation between anthropometric indicators and results of lipid profiles and data obtained from bioelectrical impedance tests in children with diabetes. Material And Methods: The research sample comprised 104 children with diabetes type 1 and 313 children without diabetes (a reference group). Anthropometric measurements were made in both groups. Furthermore, lipid profile data were collected, as well as data from bioelectrical impedance analysis, such as: percentage and quantity (in kilograms) content of adipose tissue, fat-free body mass and total water content in the body. Results: The tested and the reference groups were not different in respect of percentage content of adipose tissue, fat mass, fat-free body mass and total water content in the body. Substantially lower fat mass was noticed among children with overweight or obesity and diabetes than in the control children with overweight/obesity. In the whole sample, a connection between percentage fat content and fat mass and standardized body mass index and waist circumference was noticed. The connection was stronger in the reference group than among children with diabetes. Correlations between percentage content of adipose tissue and the HDL-cholesterol and LDL-cholesterol fractions and values of blood pressure were observed in the whole sample. Conclusions: The results of our research confirm the correlation between anthropometric parameters and the data collected from bioelectrical impedance. Those connections are, however, stronger in the group of children without diabetes than with diabetes, which questions the usefulness of this method in evaluation of adipose tissue among children treated with insulin. abstract_id: PUBMED:15658614 Age at diagnosis, body mass index and physical morbidity in children and adults with the Prader-Willi syndrome. The medical findings of a population of 54 Prader-Willi patients with a molecularly confirmed diagnosis are discussed. In the age group aged 18 or younger, a reasonably good control of weight as measured by Body Mass Index (BMI) is found. This is probably due to the fact that diagnosis was made at an early age and intensive diet management was started early. Despite their relatively low BMI, these children remain at high risk for developing scoliosis requiring active treatment (28% of the children). Adults (older than 18) diagnosed at the age of 10 or later have a high risk for developing obesity and obesity-related health problems such as hypertension (38%), non-insulin-dependent diabetes mellitus (11%) and cardiorespiratory failure (16%). abstract_id: PUBMED:34272437 Trajectories in glycated hemoglobin and body mass index in children and adolescents with diabetes using the common data model. We evaluated trajectories of glycated hemoglobin (HbA1c) levels and body mass index z-scores (BMIz) for 5 years after diagnosis among Korean children and adolescents with type 1 diabetes (T1D) or type 2 diabetes (T2D) using the common data model. From the de-identified database of three hospitals, 889 patients &lt; 15 years of age diagnosed with T1D or T2D (393 boys, 664 T1D patients) were enrolled.
Diagnosis was defined as first exposure to antidiabetic drug at each center. Compared with T2D patients, T1D patients had lower BMIz at diagnosis (- 0.4 ± 1.2 vs. 1.5 ± 1.4, p &lt; 0.001) and 3 months (- 0.1 ± 1.0 vs. 1.5 ± 1.5, p &lt; 0.001), and higher HbA1c levels at diagnosis (10.0 ± 2.6% vs. 9.5 ± 2.7%, p &lt; 0.01). After 3 months, HbA1c levels reached a nadir of 7.6% and 6.5% in T1D and T2D patients, respectively, followed by progressive increases; only 10.4% of T1D and 29.7% of T2D patients achieved the recommended HbA1c target (&lt; 7.0%) at 60 months. T1D patients showed consistent increases in BMIz; T2D patients showed no significant change in BMIz during follow-up. Peri-pubertal girls with T1D had higher HbA1c and BMIz values. Achieving optimal glycemic control and preventing obesity should be emphasized in pediatric diabetes care. Answer: Yes, the change in body mass index (BMI) among children newly diagnosed with type 1 diabetes mellitus (T1D) is associated with obesity at the transition from pediatric to adult care. A population-based retrospective study of 377 children with T1D found that the mean BMI z-score increased from 0.28 at diagnosis to 0.66 at transition to adult care. The change in BMI z-scores was influenced by the initial BMI group and the magnitude of postdiagnosis weight change. Factors such as BMI z-score at diagnosis, postdiagnosis weight change, female sex, and longer duration with T1D were associated with higher BMI z-score at the time of transition to adult care (PUBMED:26748829). This suggests that children who have a higher BMI at diagnosis or experience significant weight gain after diagnosis are more likely to be obese when transitioning to adult care.
Instruction: Diabetes guidelines: easier to preach than to practise? Abstracts: abstract_id: PUBMED:16999670 Diabetes guidelines: easier to preach than to practise? Objective: To review the management of glycaemia, blood pressure and serum lipids in a hospital outpatient diabetes clinic, the director of which co-authored the current national diabetes management guidelines. Design: Retrospective audit. Setting: Outpatient diabetes clinic in a tertiary referral teaching hospital, Sydney, NSW. Study Population: 96 patients with type 1 diabetes (mean age, 44.4 [SD, 12.8] years) and 509 patients with type 2 diabetes (mean age, 64.4 [SD, 12.0] years) attending the clinic in 2003, who had undergone formal review of complications. Main Outcome Measures: Weight, height, control and treatment of glycaemia, blood pressure and serum lipids, and prevalence of diabetic microvascular complications. Results: Glycated haemoglobin (HbA(1c)) was &lt; 7% in 13% of type 1 and 30% of type 2 diabetes patients, and &gt; 8% in 47% and 34%, respectively. 35% of patients with type 1 diabetes and 71% of patients with type 2 diabetes were treated with antihypertensive agents. Of these patients, 29% and 24%, respectively, had blood pressure readings &lt;= 130/80 mmHg. Among patients not treated with antihypertensive agents, blood pressure readings were &lt;= 130/80 mmHg in 60% of type 1 and 38% of type 2 diabetes patients. About 30% of patients with type 1 diabetes and 50% of those with type 2 diabetes were being treated with lipid-lowering agents; of these, about 60% had low-density lipoprotein (LDL) cholesterol levels &lt; 2.6 mmol/L. Among patients not treated with lipid-lowering agents, about 40% had LDL cholesterol levels &lt; 2.6 mmol/L. Retinopathy was documented in 52% and 18%, and nephropathy in 9% and 36% of type 1 and type 2 diabetes patients, respectively. Conclusions: Despite the demonstrated benefits of tight glucose, blood pressure and lipid control in reducing the risk of macrovascular and microvascular complications in type 1 and type 2 diabetes, our results suggest that treatment targets are not being met in a large proportion of patients attending a tertiary referral hospital. Responsible practice suggests that treatment targets and the current means to achieve them should both be examined. abstract_id: PUBMED:28694977 Learning to practise the Guided Self-Determination approach in type 2 diabetes in primary care: A qualitative pilot study. Aim: To describe how diabetes nurses in primary care experience the process of learning to practise the person-centred counselling approach Guided Self-Determination among adults with type 2 diabetes. Design: A descriptive qualitative design. Method: Data were collected in 2014-2015 by means of individual interviews with four diabetes nurses at two points in time. The data were analysed using qualitative content analysis. Results: Three themes that reflect nurses' processes in learning to use the Guided Self-Determination approach were identified: (1) from an unfamiliar interaction to "cracking the code"; (2) from an unspecific approach to a structured, reflective, but demanding approach; and (3) from a nurse-centred to a patient-centred approach. The overall findings indicate that the process of learning to practise Guided Self-Determination increased the nurses' counselling competence. Moreover, the nurses perceived the approach to be generally helpful, as it stimulated reflections about diabetes management and about their own counselling practices.
abstract_id: PUBMED:27404687 Prevalence and outcomes of trans-radial access for percutaneous coronary intervention in contemporary practise. Background: Trans-radial access for percutaneous coronary intervention (PCI) has been associated with lower vascular complication rates and improved outcomes. We assessed the current uptake of trans-radial PCI in Victoria, Australia, and evaluated if patients were selected according to baseline bleeding risk in contemporary clinical practise, and compared selected clinical outcomes. Methods: PCI data of all patients between 1st January 2013 and 31st December 2014 were analysed using The Victorian Cardiac Outcomes Registry (VCOR). Propensity-matched analysis was performed to compare the clinical outcomes. Results: 11,711 procedures were analysed. The femoral route was the predominant access site (66%). Patients undergoing trans-radial access PCI were younger (63.9±11.6 vs. 67.2±11.8; p&lt;0.001), had a higher BMI (28.9±5.5 vs. 28.5±5.2; p&lt;0.001), were more likely to be male (80.0 vs. 74.9%; p&lt;0.001), and were less likely to have presented with cardiogenic shock (0.9 vs. 2.8%; p&lt;0.001) or have the following comorbidities: diabetes (19.8 vs. 23.1%; p&lt;0.001), peripheral vascular disease (2.9 vs. 4.3%; p=0.005) or renal impairment (13.6 vs. 22.1%; p&lt;0.001). The radial group had fewer bleeding events (3.2 vs. 4.6%; p&lt;0.001) and shorter hospital length of stay (3.1±4.7 vs. 3.3±3.9; p=0.006). There was no significant difference in mortality (1.0 vs. 1.4%; p=0.095). Conclusions: Trans-femoral approach remains the dominant access site for PCI in Victoria. The choice of route does not appear to be selected by consideration of bleeding risk. The radial route is associated with improved clinical outcomes of reduced bleeding and length of stay consistent with previous findings, and this supports the efficacy and safety of trans-radial PCI in real-world clinical practise. abstract_id: PUBMED:35066115 Population median imputation was noninferior to complex approaches for imputing missing values in cardiovascular prediction models in clinical practice. Objectives: To compare the validity and robustness of five methods for handling missing characteristics when using cardiovascular disease risk prediction models for individual patients in a real-world clinical setting. Study Design And Setting: The performance of the missing data methods was assessed using data from the Swedish National Diabetes Registry (n = 419,533) with external validation using the Scottish Care Information - diabetes database (n = 226,953). Five methods for handling missing data were compared: two methods using submodels for each combination of available data, two imputation methods (conditional imputation and median imputation), and one alternative modeling method, called the naïve approach, based on hazard ratios and population statistics of known risk factors only. The validity was compared using calibration plots and c-statistics. Results: C-statistics were similar across methods in both development and validation data sets, that is, 0.82 (95% CI 0.82-0.83) in the Swedish National Diabetes Registry and 0.74 (95% CI 0.74-0.75) in the Scottish Care Information-diabetes database. Differences were only observed after random introduction of missing data in the most important predictor variable (i.e., age).
Conclusion: Validity and robustness of median imputation was not dissimilar to more complex methods for handling missing values, provided that the most important predictor variables, such as age, are not missing. abstract_id: PUBMED:23790179 Interdental cleaning among persons with diabetes: relationships with individual characteristics. Objectives: Given the existence of many potential oral health complications for adults with diabetes (especially for those who do not practise regular oral self-care), and the specific importance of regular interdental cleaning, the research determined the proportion of U.S. adults with diabetes who practise daily interdental cleaning and their socio-demographic, economic and oral health characteristics related to this practice. Methods: Analyses were conducted using data collected from 573 dentulous adults with diabetes ≥ 30 years who participated in the U.S. 2009-2010 National Health and Nutrition Examination survey. Using complex sample survey software, findings were extrapolated to &gt;15 million U.S. adults. Descriptive statistics were used to determine the frequency of interdental cleaning, and chi-square tests were used to identify salient individual characteristics related to this practice. Results: 41.2% reported that they never used any interdental device, while 24.8% indicated that they practised interdental cleaning daily. Statistically significant relationships (P &lt; 0.05) with daily interdental cleaning included female sex, ever having had treatment for gum disease and using mouthwash daily for a dental problem. Conclusions: Because so many adults with diabetes do not practise regular interdental cleaning, and in view of the important role that dental hygienists fulfil as oral healthcare educators, there is a great need for dental hygienists to teach and motivate adults with diabetes to practise regular interdental cleaning. This need is especially great for subgroups of these adults who are men, have not been treated for periodontitis and do not regularly use a mouthwash for dental problems. abstract_id: PUBMED:27900895 Diabetes education should cover foot self-care. Patients with diabetes provide 99% of their own care but do not always practise foot care, especially if they are older people with functional or cognitive problems. Incidence of diabetes has increased exponentially and it is now the most common chronic condition affecting older people. abstract_id: PUBMED:20171948 Reviewing statin therapy in diabetes--towards the best practise. Statin therapy is considered critical both in primary and secondary prevention of cardiovascular disease in diabetes. Cholesterol Treatment Trialists' Collaborators meta-analysis of 14 randomised trials of statins in 18686 people with diabetes provides the latest and largest evidence showing a significant 21% reduction in major vascular events per mmol/l reduction in LDL cholesterol. Importantly, the risk reduction was similar in both types of diabetes. Growing evidence supports the view that statin therapy reduces microvascular complications as well. This review updates the current knowledge of statin therapy in preventing micro- and macrovascular complications in both type 1 and type 2 diabetes. abstract_id: PUBMED:33681058 The insulin sharps disposal study: Evaluation of a structured patient education initiative in an urban community health centre in India. Context: India has witnessed an increase in people suffering from diabetes mellitus and also those on insulin. 
The issue of handling used sharps in the community is an area of concern. Aims: Our study aimed to assess the change in knowledge and practise concerning the disposal of used insulin sharps in patients with diabetes, pre- and post-evaluation of a structured, health provider-initiated, patient-centred insulin-use health education (PIHE). Settings And Design: A pre- and post-evaluation study of PIHE set in an urban health centre in south India. Methods And Materials: A pre-evaluation study was done on a cross-sectional sample of type II diabetes patients, who visited the urban health centre and were on insulin for at least 1 year. A semi-structured pilot-tested questionnaire was used for the study. For 18 months, PIHE was provided to all insulin users at outpatient contact, after which a post-evaluation study among another cross-sectional sample of patients in the same centre was done to assess change in knowledge and practise. Statistical Analysis Used: Univariate analysis followed by logistic regression. Results: The participants in the pre- and post-evaluation studies had a mean (SD) age of 54.9 (11.33) years and 57.2 (11.01) years, respectively. The baseline characteristics of gender, socioeconomic status and diabetes duration were similar. There was a statistically significant (P &lt; 0.001) difference in knowledge (19 [19%] to 81 [81%]) and practise (7 [7%] to 73 [73%]) of participants regarding the correct disposal method of used insulin syringes and sharps before and after PIHE. Conclusions: PIHE over an 18-month period for diabetic patients on insulin significantly improved knowledge and practises relating to insulin use and correct disposal of syringes and sharps. abstract_id: PUBMED:27489372 Timeliness of referral of children with new onset type 1 diabetes. Introduction: Type 1 diabetes (T1D) is characterised by autoimmune destruction of pancreatic β cells leading to insulin deficiency. Prompt referral to a specialist paediatric diabetes team (PDT) for insulin initiation and further management is important to prevent diabetic ketoacidosis (DKA), which remains the most common cause of death in this condition. Objective: The aim was to study the timeliness of referrals from general practitioners (GPs) to the PDT of children suspected of having T1D. Method: We carried out a survey of the practice of GPs when they suspect T1D in a young person, to investigate whether they recognised the need for urgent referral to the PDT. In addition, we carried out a retrospective case notes review of children diagnosed with diabetes mellitus between January 2005 and December 2014. Results: 111/300 (37%) of GPs replied to the survey. Of these, 73/111 (65.8%) would have referred promptly to the PDT in accordance with National Guidelines. However, 34.2% would have taken an action that would have led to delay in referral to the PDT. 96 children were diagnosed with T1D during the audit period. There was a delay in referral in 35 (36.5%) children. 19/35 (54.3%) of these children presented with DKA. Mean duration of delay in presentation to the PDT was 1.8 days. In both the survey and audit, the most common reason for delayed referral was the GP attempting to confirm the diagnosis by undertaking further diagnostic tests. Conclusions: Our study has identified a modifiable reason for delayed referrals of children with T1D. abstract_id: PUBMED:21295071 Aspirin resistance: effect of clinical, biochemical and genetic factors. Aspirin is one of the cornerstones of treatment for cardiovascular disease.
However, some patients may be 'resistant' to its effect: this is associated with adverse cardiovascular outcomes and increased mortality. Measuring response to aspirin is often difficult and there is no accepted definition of aspirin resistance. Many assays are available to test aspirin sensitivity but most are not specific to aspirin and the degree of agreement between different assays is poor. Each assay has its own advantages and disadvantages, and there is currently no one assay that can be recommended for routine clinical practise. There are also many potential modifiers of aspirin response including aspirin dose, non-compliance, disease severity, genetic factors, inflammation, diabetes mellitus, hyperlipidaemia, smoking and interacting drugs. Treating the underlying cause may improve aspirin sensitivity but current data are contradictory with no large clinical trials that have addressed this. Further work is required in this area to determine whether and how aspirin resistance is important clinically, what the best measurement is phenotypically and how this should be used in clinical practise, and whether there are any genetic predisposing factors. This will require well designed prospective studies which take into account the numerous confounding factors that can modify aspirin resistance. Answer: The question "Diabetes guidelines: easier to preach than to practise?" reflects a common concern in the management of diabetes, where clinical guidelines may not always translate into practice. The retrospective audit conducted in a tertiary referral teaching hospital in Sydney, NSW, revealed that treatment targets for glycaemia, blood pressure, and serum lipids were not being met in a large proportion of patients attending the clinic, despite the director of the clinic co-authoring the national diabetes management guidelines (PUBMED:16999670). This suggests a gap between the ideal scenarios outlined in guidelines and the realities of clinical practice. Additionally, a qualitative pilot study on the Guided Self-Determination approach in primary care indicated that diabetes nurses found the person-centred counselling approach to be helpful in stimulating reflections about diabetes management and their own counselling practices (PUBMED:28694977). This suggests that while guidelines can provide a framework, the individualized approach to patient care is crucial and requires specific training and adaptation by healthcare providers. Moreover, the study on trans-radial access for percutaneous coronary intervention (PCI) in Victoria, Australia, showed that despite the benefits of the radial route, the femoral approach remained dominant and the choice of route did not appear to be selected by consideration of bleeding risk (PUBMED:27404687). This indicates that even when evidence supports a particular practice, it may not be widely adopted. The study on the disposal of used insulin sharps in India demonstrated that a structured patient education initiative significantly improved knowledge and practices relating to insulin use and correct disposal of syringes and sharps (PUBMED:33681058). This highlights the importance of patient education in bridging the gap between guidelines and practice. In summary, while diabetes guidelines provide evidence-based recommendations for optimal care, the implementation of these guidelines in clinical practice can be challenging. 
Factors such as individual patient needs, healthcare provider education, and system-level barriers can influence the extent to which guidelines are followed in real-world settings.
Instruction: Does reduced movement restrictions and use of assistive devices affect rehabilitation outcome after total hip replacement? Abstracts: abstract_id: PUBMED:24476806 Does reduced movement restrictions and use of assistive devices affect rehabilitation outcome after total hip replacement? A non-randomized, controlled study. Background: Improvements in surgical techniques and increase of femoral head size might have changed the rationale for movement restrictions after total hip replacement (THR). Aim: To evaluate the influence of movement restrictions and assistive devices on rehabilitation after fast track THR. Design: Non-randomized, controlled study. Setting: Inpatient. Population: 365 consecutively included THR patients. Methods: Patients included in the initial 3 months of the study underwent rehabilitation with restrictions in hip movement and a standard package of assistive devices (restricted group). This group was compared to patients included in the following 3 months with less restricted hip movement and use of assistive devices according to individual needs (unrestricted group). Questionnaires on function, pain, quality of life (HOOS), anxiety (HADS), working status and patient satisfaction were completed before THR, 3 and 6 weeks after. Results: The HOOS function score at the 3 measurement times was (mean ± SD): unrestricted group: 46 ± 17 - 76 ± 9 - 83 ± 14 compared to restricted group: 43 ± 16 - 81 ± 14 - 83 ± 13. Changes over time were significantly greater in the restricted group (P=0.004). Return to work 6 weeks after THR for the unrestricted group compared to restricted group was: 53% versus 32% (P=0.045). No significant differences were found between groups in pain, symptoms, quality of life, anxiety/depression, hip dislocations and patient satisfaction. Conclusion: This study showed slightly slower recovery in patient-reported function after reduction in movement restrictions and use of assistive devices, but the difference was eliminated after 6 weeks. Reduced movement restrictions did not affect the other patient-reported outcomes and led to earlier return to work. Clinical Rehabilitation Impact: It is possible to reduce movement restrictions and use of assistive devices considerably. More research on safety issues is needed to elucidate the effect of unrestricted rehabilitation on hip dislocation. abstract_id: PUBMED:24572057 Rehabilitation following total hip arthroplasty. Rehabilitation professionals play an important role in the comprehensive postoperative management of the patient who has undergone a total hip replacement. Understanding the general surgical considerations that eventually impact the rehabilitation process is essential. Coordination of physicians, physical and occupational therapists, social services, and family members results in better quality of care. The technology and design of hip prostheses and fixation methods impact the functional outcome of total hip arthroplasty. Professionals involved in total hip arthroplasty rehabilitation should also understand the potential complications following total hip arthroplasty that oftentimes cause delays or revisions in the rehabilitation program. When these are combined with appropriate preoperative patient selection and education, as well as postoperative physical and occupational therapy programs, most patients are able to achieve a satisfactory functional outcome, including independence in basic activities of daily living and independent ambulation with an assistive device.
abstract_id: PUBMED:32045541 Hip precautions after total hip replacement and their discontinuation from practice: patient perceptions and experiences. Background: Hip precautions are routinely provided to reduce the risk of dislocation following total hip replacement despite evidence suggesting they provide no additional benefit and may, actually, impede recovery. Our aim was to report the views of patients who had been recruited into a trial comparing outcomes in participants who were prescribed hip precautions with those who were not. Methods: Semi-structured interviews were conducted. Topics explored included experiences and opinions of the service (either hip precautions or no hip precautions), information offered, and equipment provided. Thematic analysis was used to identify and report themes. Results: Six themes were identified: perceived justification, and advantages and disadvantages for the postoperative recovery regime prescribed, perceived risk, and fear of dislocation, adherence to the postoperative regime prescribed, and experiences of adaptive equipment. Conclusions: Participants who received hip precautions had mixed views about their use: some felt they were restrictive whilst others believed they provided reassurance. Participants who did not receive hip precautions were less anxious about dislocating their hip but were unsure how to progress their rehabilitation. The discontinuation of precautions may decrease patients' fears about dislocation but more guidance from rehabilitation staff about how to move safely during recovery is required. Implications for rehabilitation: Hip precautions may unnecessarily exacerbate patients' anxieties and fear about dislocation following total hip replacement. Hip precautions impact on patients' recovery both physically and psychologically. Patients should be advised about moving and returning to activities following total hip replacement, whether they receive hip precautions or not. abstract_id: PUBMED:21052634 Total hip replacement using the Staffelstein score: outcome of inpatient rehabilitation. Background: The aim of the study was to evaluate the effectivity of rehabilitation procedures performed for several weeks after total hip arthroplasty and to analyze the factors influencing rehabilitation results. Methods: The data of 141 patients following total hip replacement were analyzed comparatively, using the Staffelstein score, both during admission and discharge. Of the 141 patients, 93 (66%) were female and 48 (34%) male, with an average age of 74.9 years. The score according to Staffelstein was taken into consideration in the questionnaire. Results: The average Staffelstein score on admission in patients undergoing total hip replacement was 73.1 points. At discharge an average of 103.2 points was reached after rehabilitation treatment. This represents a significant difference (&lt;0.001). Conclusion: With this study and the use of the Staffelstein score we were able to establish the effectivity of rehabilitation procedures performed for several weeks after total hip arthroplasty. The rehabilitation results were independent of gender, age, and weight bearing. abstract_id: PUBMED:31191937 A review of wearable motion tracking systems used in rehabilitation following hip and knee replacement. Clinical teams are under increasing pressure to facilitate early hospital discharge for total hip replacement and total knee replacement patients following surgery.
A wide variety of wearable devices are being marketed to assist with rehabilitation following surgery. A review of wearable devices was undertaken to assess the evidence supporting their efficacy in assisting rehabilitation following total hip replacement and total knee replacement. A search for studies from January 2000 to October 2017 was conducted using electronic databases including Medline, CINAHL, Cochrane, PsycARTICLES, and PubMed. Five studies met the eligibility criteria, and all used an accelerometer and a gyroscope for their technology. A review of the studies found very little evidence to support the efficacy of the technology, although they show that the use of the technology is feasible. Future work should establish which wearable technology is most valuable to patients, which ones improve patient outcomes, and the most economical model for deploying the technology. abstract_id: PUBMED:34574900 Rehabilitation Process Issues and Functional Performance after Total Hip and Knee Replacement. (1) Background: to ensure satisfactory outcomes in patients who have undergone total hip or knee replacement, it is crucial to prioritize the postoperative rehabilitation process and its management rather than the successful surgery alone. The goal of our study was to investigate the outcomes of the rehabilitation process after total hip or knee replacement, including local orthopedic follow-up, communication with local health authorities, patients' satisfaction regarding rehabilitation, and their functional performance after the surgery. (2) Methods: the study included 523 patients who underwent total hip replacement, and 650 patients who underwent total knee replacement. All patients were surveyed via formalized phone interviews containing questions related to postoperative rehabilitation parameters and outcomes. (3) Results: in the postsurgical period, nearly 70% of patients had regular local orthopedic follow-up. Lack of the latter at the local level was indicated by approximately 10% of patients, and the rest of the respondents noted the practice of sporadic follow-up. Half of the patients reported pain of varying severity. Good physical activity (e.g., ability to walk unassisted within their residential district) was acknowledged by about a quarter of patients. Ability to walk unassisted was reported by just 54.5% of patients. (4) Conclusions: the disproportion between generally high patient satisfaction with rehabilitation management (&gt;80%) and low feedback level on the part of local health authorities (9.4%) demonstrated a lack of communication between the key parties involved in the rehabilitation process. abstract_id: PUBMED:37990882 Effectiveness of tele-rehabilitation after total hip replacement: a systematic review and meta-analysis of randomized controlled trials. Purpose: To identify articles that evaluated the efficacy of tele-rehabilitation after total hip replacement and to provide a reference for clinical decision-making and continuous improvement of rehabilitation exercise intervention strategies for clinical medical staff. Methods: Embase, PubMed, Web of Science, Medline, China National Knowledge Network and Google Scholar databases were searched for randomized controlled trials of tele-rehabilitation after total hip replacement from inception to March 4, 2023. Two researchers used the PEDro scale to assess the methodological quality of the included studies.
Results: Ten studies with 632 participants were analysed. Compared to face-to-face rehabilitation, Internet-based telerehabilitation showed better outcomes in the Get-up and Go test (SMD -0.54, 95% CI -0.79 to -0.29). Telerehabilitation showed no significant difference in outcomes of other functional tests and functional questionnaires. Conclusion: In this systematic review and meta-analysis, internet-based tele-rehabilitation and face-to-face rehabilitation had the same effect on patients after total hip replacement, and compliance was higher in the tele-rehabilitation group, but attention should be devoted to standardizing this form of rehabilitation to avoid adverse events. In addition, the number and quality of included studies in this study are limited, and the outcome indicators and intervention means have not been unified. More high-quality studies are needed to verify these conclusions to better evaluate the effectiveness and advantages of telerehabilitation. Implications for rehabilitation: With the development of science and technology, remote rehabilitation technology will be applied to various fields of rehabilitation, providing personalized and extensive rehabilitation services. The application of tele-rehabilitation technology to postoperative rehabilitation after total hip arthroplasty is feasible and can reduce the workload of healthcare professionals to a certain extent. This review evaluated the randomized controlled trials of telerehabilitation after total hip arthroplasty, and the results showed that the clinical efficacy of telerehabilitation after total hip arthroplasty was not inferior to that of traditional rehabilitation. abstract_id: PUBMED:28330380 Rehabilitation after total joint replacement: a scoping study. Purpose: The evidence supporting rehabilitation after joint replacement, while vast, is of variable quality, making it difficult for clinicians to apply the best evidence to their practice. We aimed to map key issues for rehabilitation following joint replacement, highlighting potential avenues for new research. Materials And Methods: We conducted a scoping study including research published between January 2013 and December 2016, evaluating effectiveness of rehabilitation following hip and knee total joint replacement. We reviewed this work in the context of outcomes described from previously published research. Results: Thirty individual studies and seven systematic reviews were included, with most research examining the effectiveness of physiotherapy-based exercise rehabilitation after total knee replacement using randomized control trial methods. Rehabilitation after hip and knee replacement, whether carried out at the clinic or monitored at home, appears beneficial but type, intensity and duration of interventions were not consistently associated with outcomes. The burden of comorbidities rather than specific rehabilitation approach may better predict rehabilitation outcome. Monitoring of recovery and therapeutic attention appear important but little is known about optimal levels and methods required to maximize outcomes. Conclusions: More work exploring the role of comorbidities and key components of therapeutic attention and the therapy relationship, using a wider range of study methods may help to advance the field. Implications for Rehabilitation: Physiotherapy-based exercise rehabilitation after total hip replacement and total knee replacement, whether carried out at the clinic or monitored at home, appears beneficial.
Type, intensity, and duration of interventions do not appear consistently associated with outcomes. Monitoring a patient's recovery appears to be an important component. The available research provides limited guidance regarding optimal levels of monitoring needed to achieve gains following hip and knee replacement, and more work is required to clarify these aspects. The burden of comorbidities appears to better predict outcomes regardless of rehabilitation approach. abstract_id: PUBMED:31035696 Application of NiTi in Assistive and Rehabilitation Devices: A Review. Shape memory alloys (SMAs) have found widespread applications as biomedical devices. Biocompatibility, corrosion resistance, and ductility make these alloys attractive for medical devices such as stents and filters. For these implants, the superelastic property is the primary function of SMAs. Additionally, these alloys, such as NiTi as the prime example, can be used for actuation. Several modes of actuation such as displacement control, force control, and compliance control have been used as harnesses with SMA devices. These two unique properties have opened another application in the form of neurosurgery and robot-assisted surgery devices, as well as controlled assistive and rehabilitation devices. This paper reviews the state of the art of application of SMAs in the latter category, where control is applied to harness innovative medical devices. To this end, two major subsets of these devices are studied: prostheses and orthoses that take advantage of SMAs in assistive and rehabilitation devices. These devices are further categorized into hand prosthetics and elbow, knee and ankle orthotics. In most of these designs, SMA wires act as artificial muscles to mimic the motion of limbs in the target joints. The evolution of each category is explained, and their specific results are reported. The paper also reviews the SMA applications for neurological and neuromuscular rehabilitation. To this end, different categories of rehabilitation devices for passive and aided exercise of the ankle, knee, and elbow are highlighted. The SMA actuator in these devices can be EMG-controlled to improve patient outcomes. In addition to providing a comprehensive overview of the biomedical devices, this paper identifies several possible future directions of SMA-related research in the area of assistive and rehabilitation devices. abstract_id: PUBMED:31228391 Example of physical training health activities after total hip replacement with Wright implant. The article presents the results of post-clinical observation of F., who had total replacement of two hip joints and goes to the gym, following an individual programme with an elective part on training machines with a preset load vector that excludes a sharp dynamic effect on the musculoskeletal system. In 2014, based on the diagnosis of bilateral dysplastic coxarthrosis of the 3rd stage (function impairment 2), F. had total replacement of a right hip joint, followed by total replacement of a left hip joint in 2015; in both cases, the postsurgical period included symptomatic therapy, rehabilitation exercises, joint movement training, and a physiotherapeutic treatment course. During the rehabilitation period, the patient's condition significantly improved, and after 3 months and the second operation, the orthopedist and physician allowed the patient to take part in health activities in a gym with some restrictions - to exclude inertial loads on a hip joint (running, jumps, free-weight exercises).
The obtained results indicate that an individual programme with the use of training machines allows to solve medical and social tasks. Answer: Reduced movement restrictions and the use of assistive devices after total hip replacement (THR) have been evaluated in various studies with mixed results. A non-randomized, controlled study found that patients with less restricted hip movement and use of assistive devices according to individual needs (unrestricted group) showed slightly slower recovery in patient-reported function compared to those with more restrictions (restricted group). However, this difference was eliminated after 6 weeks. The unrestricted group also had a higher rate of return to work 6 weeks after THR. There were no significant differences between the groups in pain, symptoms, quality of life, anxiety/depression, hip dislocations, and patient satisfaction. This suggests that it is possible to reduce movement restrictions and use of assistive devices considerably without negatively affecting the overall rehabilitation outcome (PUBMED:24476806). Patients' perceptions and experiences also play a role in the rehabilitation process. Some patients who received hip precautions felt they were restrictive, while others believed they provided reassurance. Those who did not receive hip precautions were less anxious about dislocating their hip but desired more guidance on how to move safely during recovery. This indicates that while discontinuation of precautions may decrease patients' fears about dislocation, clear guidance from rehabilitation staff is necessary (PUBMED:32045541). The effectiveness of rehabilitation procedures after THR has been shown to be significant, with improvements in functional scores after rehabilitation treatment. These results were independent of gender, age, and weight bearing, suggesting that rehabilitation is effective across different patient demographics (PUBMED:21052634). In summary, reduced movement restrictions and tailored use of assistive devices do not seem to negatively affect the rehabilitation outcome after THR and may lead to earlier return to work. However, patient education and guidance are crucial to ensure safety and confidence during the recovery process. More research on safety issues is needed to fully understand the impact of unrestricted rehabilitation on outcomes such as hip dislocation (PUBMED:24476806).
Instruction: Multiple sclerosis in Iraq: does it have the same features encountered in Western countries? Abstracts: abstract_id: PUBMED:15121436 Changing patterns of adult (45-74 years) neurological deaths in the major Western world countries 1979-1997. Objectives: To compare changes in 'adult' (45-74 years) 'all-cause deaths' (ACDs) with all neurological death categories by age and gender in the 10 major Western countries between the 1970s (1979-1981) and the 1990s (1995-1997). Method: World Health Organization standardized mortality data for age and gender (1979/97) were used to examine changes in adult mortality rates per million based upon ICD-9 categories for ACDs, 'neurological deaths' and the special neurological categories of 'other neurological deaths' (ONDs) and 'mental disorder deaths' (MDDs), which include the dementias. Ratios of ratios were calculated to demonstrate how each individual country's pattern changed over the period by age and gender, resolving the problem of cross-national comparisons. Rates of change across the endpoints and between age groups (45-54, 55-64, 65-74 and 75+ years) were examined using analysis of variance, stepwise regression analysis and cross-tabulation analyses. Results: Meningitis deaths fell substantially, but there was little change in multiple sclerosis or epilepsy deaths. OND rates for the 1990s increased compared with the 1970s rates for males and female, in actual terms and relative to ACDs for almost all countries. Many of the relative rates of increase were substantially higher than 20%. There were significant statistical differences with respect to relative rates of ONDs between the 1970s and the 1990s data, even when the 75+ years age group was excluded. Significant differences were also found between age groups, but only in the 1990s data. MDD rates showed similar trends. Analyses of actual rates of increase in these causes of death showed that males outnumber females in all ages below 74 years. The extent of this difference remained constant across the endpoints. However, in those aged 75 years and over, females outnumbered males at both endpoints, but this disparity widened significantly in the 1990s data. Conclusions: The 1990s data indicate substantial increases compared with the 1970s data for ONDs (especially amongst 65-74 year olds), and rises in MDDs in 55-64 year olds in five countries, including England and Wales and Germany, and in 65-74 year olds in most countries, suggesting earlier onsets of the underlying conditions. Further country-specific research is required to explain the emerging morbidity and mortality. abstract_id: PUBMED:29377214 Western lifestyle and immunopathology of multiple sclerosis. There is increasing evidence for a sudden and unprecedented rise in the incidence of multiple sclerosis (MS) in Westernized countries over the past decades, emphasizing the role of environmental factors. Among many candidates, rapid changes in dietary habits seem to play a role in the pathogenesis of MS. Here, we summarize and discuss the available evidence for the role of dietary nutrients, such as table salt, fatty acids, and flavonoids, in the development and pathogenesis of MS. We also discuss new and emerging risk factors accompanying Western lifestyle, such as shift work, sleep, and circadian disruption. abstract_id: PUBMED:30151023 A Comparison of Optic Neuritis in Asian and in Western Countries. Acute demyelinating optic neuritis is a common optic neuropathy in young adults. 
There is usually satisfactory visual recovery. However, some patients convert to multiple sclerosis (MS) with potential sequelae of neurological disability. The Optic Neuritis Treatment Trial in the United States was conducted prospectively for 15 years and provided valuable data about clinical course, efficacy of steroid treatment, and risk of conversion to MS. Compared to the many studies in Western countries, research concerning optic neuritis in Asia has so far not been extensive. However, cumulative evidence shows that various features of this disorder differ between patients of Caucasian and Oriental descent. In this article we review up-to-date studies on optic neuritis in Asia and compare the results with prior literature. Prospective and multi-centre studies are currently underway in these regions to increase our understanding of optic neuritis in Asia. abstract_id: PUBMED:15946690 Multiple sclerosis in Iraq: does it have the same features encountered in Western countries? Objectives: To characterize the clinical, demographic and epidemiological features of MS patients from the only specialised MS centre in Iraq. Methods: Data for consecutive Iraqi MS patients attending the Baghdad Multidiscipline MS Clinic between 2000 and 2002 who fulfilled Poser et al. criteria for clinically definite (CD) and clinically probable (CP) MS were reviewed. Results: We identified a total of 300 MS patients (164 females, 54.7%; 136 males, 45.3%) with a mean age of onset being 29.2+/-7.8 years and the duration being 8.6+/-5.9 years. According to the year of clinical onset of MS, a progressive increase in cases in the last two decades and a trend towards more females was noted. Initial symptom was reported as motor in 31.7%, sensory in 28.3%, optic nerve in 24% and brainstem or cerebellar in 22.3% of patients. The course was relapsing-remitting in 199 (66.3%) patients, secondary progressive in 56 (18.7%) and primary progressive (PP) in 45 (15%) patients. Conclusions: MS is not rare in Iraq; its demographic and clinical data were, in general, similar to those reported in Caucasian populations. There was some evidence for North-South gradient and a possible increasing incidence characterized by an increase in female preponderance during the last 2 decades. abstract_id: PUBMED:16696904 MS in Asian countries. This paper reviews all available English language literature on MS from Asian countries published between 1970 and 2005. Although limited data are available, the review reveals that western Asia--including the Middle East--has the highest prevalence of MS across the continent, and that MS in Asia largely resembles conventional MS in western countries. Opticospinal MS (a distinct clinical entity from conventional MS) is more common in eastern Asian regions. Larger epidemiological and genetic studies, with more complete ascertainment in various Asian populations, are needed so that we can understand the diversity of Asian MS. abstract_id: PUBMED:23834430 Increased disease severity in non-Western immigrants with multiple sclerosis in Oslo, Norway. Background And Purpose: Non-Western immigrants to Norway acquire an increased risk of multiple sclerosis (MS) after migration. Ethnicity and the presence of oligoclonal bands (OCBs) in the cerebrospinal fluid (CSF) might influence the disease course. The aim of this study was to investigate differences in disease severity and in the presence of OCBs in ethnic Norwegian and immigrant MS patients. 
Methods: Clinical data and CSF findings from 47 non-Western immigrants with MS were compared with those from 447 Norwegian and 48 immigrant patients from Western countries. Results: The non-Western immigrants had a higher mean Multiple Sclerosis Severity Score (MSSS) than the Norwegian patients (5.68 vs. 4.13, P = 0.001). Age at onset was 4 years lower amongst the non-Western immigrants (P = 0.001). After adjusting for year of birth, age at onset, gender and disease course, the mean difference in MSSS between the groups was 2.17 (P &lt; 0.001). Amongst the non-Western immigrants, 70% received disease-modifying drugs, compared with 48% of the Norwegian patients (P = 0.005). In both groups, 88% were OCB-positive. Conclusions: Non-Western immigrants with MS had an increased disease severity compared with native Norwegians and immigrants from Western countries. The presence of OCBs in the CSF was not different between the groups. abstract_id: PUBMED:36573566 Current trends and prospects of multiple sclerosis research in Arab countries. Background: Multiple sclerosis is a complex and intractable neurological disease associated with substantial morbidity, healthcare utilization, management cost, and loss of productivity. There has been an alarming increase in the number of multiple sclerosis cases in Arab countries in recent years, which has spurred an increase in local research. Aims: To analyse the multiple sclerosis research profile in Arab countries. Methods: A total of 781 publications focusing on multiple sclerosis research in Arab countries from 1983 to 2021 were extracted from the Web of Science database and analysed using bibliometric techniques. Results: Publication on multiple sclerosis research increased sharply in the last decade, globally and in the Arab countries. However, Arab countries have only contributed 0.8% of the overall number of publications. Keyword pattern analysis showed that magnetic resonance imaging, optical coherence tomography, expanded disability status, demyelination, and epidemiology were the major themes of multiple sclerosis research in Arab countries. Case-control, cohort, and descriptive studies were the most prevalent study designs. However, there was a notable paucity of meta-analyses, randomized controlled trials, and clinical trials. Conclusion: Arab countries can improve their regional expertise and add a wealth of knowledge to global multiple sclerosis resources by diversifying their current research initiatives, and tracking recent advances in pathogenesis, diagnosis, and management of multiple sclerosis. abstract_id: PUBMED:29627005 Natural history of benign multiple sclerosis: Clinical and HLA correlates in a Western Australian cohort. Background: Benign multiple sclerosis (BMS) is a controversial term that has been used for MS patients with minimal disability decades after disease onset. Herein, we evaluated disease status after 20 years in a Western Australian cohort defined as BMS based on an Expanded Disability Status Scale (EDSS) score ≤ 3.0 at 10 years from onset. Methods: MS patients with an EDSS score ≤ 3.0 at 10 years from onset and minimum of 20 years follow up were included in the study. The 20-year EDSS score was considered the primary outcome. Associations with demographic and clinical characteristics and HLA-DRB1 genotype were investigated. 
Results: Among 120 patients with a benign course at 10 years, 78 (65%) remained benign at the 20-year follow up, but patients with an EDSS ≥ 2.5 were more likely to go on to develop more severe disability in the next decade. When considering factors associated with an increase in EDSS score ≤ 1 from 10 to 20 years, indicating limited progression, apart from the EDSS score at 10 years, poly-symptomatic presentation (p = 0.004) and cerebellar/brainstem mono-symptomatic presentation (p = 0.016) were independently associated with more rapid progression compared with other mono-symptomatic presentations. Carriage of the high risk HLA-DRB1*1501 allele was marginally associated with slower progression. Conclusions: In this geographically isolated MS cohort of predominantly Anglo-Celtic origin clinical progression in the benign MS group was similar to that in other published series from Western countries. These results are in keeping with the view that patients labeled as benign MS are part of a heterogeneous continuum of disease progression and do not possess unique clinical characteristics. Possible genetic determinants of a benign course warrant further investigation. abstract_id: PUBMED:37059571 Incidence and prevalence of multiple sclerosis in China and other Asian countries. The prevalence of multiple sclerosis (MS) in Asian countries is thought to be lower than in Western countries, with Asian populations presenting 80% less risk of MS than white populations. Incidence and prevalence rates in Asian countries are therefore not well defined and their association with rates in neighboring countries, as well as with ethnic, environmental, and socioeconomic factors, are not well understood. We performed a comprehensive literature review of epidemiological data from China and neighbouring countries to study the frequency of the disease, focusing on prevalence, and the progression over time and the influence of sex-related, environmental, dietary, and sociocultural factors. Prevalence rates in China range between 0.88 cases/100,000 population in 1986 and 5.2 cases/100,000 population in 2013, with a non-significant upwards trend (p = .08). The increase observed in Japan, where figures ranged between 8.1 and 18.6 cases/100,000 population was highly significant (p &lt; .001). Prevalence rates in countries with predominantly white populations are considerably higher and have increased over time, reaching 115 cases/100,000 population in 2015 (r2 = 0.79, p &lt; .0001). In conclusion, the prevalence of MS in China appears to have risen in recent years, although Asian populations (including Chinese and Japanese populations, among others) appear to present less risk than other populations. Within Asia, geographical latitude appears not to be a determining factor for developing MS. abstract_id: PUBMED:33069449 Incidence and prevalence of multiple sclerosis in China and other Asian countries. The prevalence of multiple sclerosis (MS) in Asian countries is thought to be lower than in Western countries, with Asian populations presenting 80% less risk of MS than white populations. Incidence and prevalence rates in Asian countries are therefore not well defined and their association with rates in neighboring countries, as well as with ethnic, environmental, and socioeconomic factors, are not well understood. 
We performed a comprehensive literature review of epidemiological data from China and neighbouring countries to study the frequency of the disease, focusing on prevalence, and the progression over time and the influence of sex-related, environmental, dietary, and sociocultural factors. Prevalence rates in China range between 0.88 cases/100,000 population in 1986 and 5.2 cases/100,000 population in 2013, with a non-significant upwards trend (p = .08). The increase observed in Japan, where figures ranged between 8.1 and 18.6 cases/100,000 population was highly significant (p &lt; .001). Prevalence rates in countries with predominantly white populations are considerably higher and have increased over time, reaching 115 cases/100,000 population in 2015 (r2 = 0.79, p &lt; .0001). In conclusion, the prevalence of MS in China appears to have risen in recent years, although Asian populations (including Chinese and Japanese populations, among others) appear to present less risk than other populations. Within Asia, geographical latitude appears not to be a determining factor for developing MS. Answer: The clinical, demographic, and epidemiological features of multiple sclerosis (MS) in Iraq generally resemble those reported in Caucasian populations, as indicated by a study conducted at the Baghdad Multidiscipline MS Clinic. This study reviewed data from 300 MS patients and found that the mean age of onset was 29.2 years, with a duration of 8.6 years. The initial symptoms and disease course in Iraqi patients were similar to those observed in Western countries, with a progressive increase in cases over the last two decades and a trend towards more females being affected. The course of the disease was relapsing-remitting in 66.3% of patients, secondary progressive in 18.7%, and primary progressive in 15% of patients. These findings suggest that MS in Iraq shares common features with MS in Western countries (PUBMED:15946690). Additionally, a review of English language literature on MS from Asian countries, including western Asia (the Middle East), indicates that MS in Asia largely resembles conventional MS in Western countries. The review also notes that opticospinal MS, which is a distinct clinical entity from conventional MS, is more common in eastern Asian regions. However, the prevalence of MS in Asian countries is thought to be lower than in Western countries, with Asian populations presenting 80% less risk of MS than white populations (PUBMED:16696904; PUBMED:33069449). In summary, the features of MS in Iraq appear to be similar to those encountered in Western countries, with some evidence of a North-South gradient and a possible increasing incidence characterized by an increase in female preponderance during the last two decades (PUBMED:15946690).
Instruction: A biomechanical comparison of youth baseball pitches: is the curveball potentially harmful? Abstracts: abstract_id: PUBMED:18055920 A biomechanical comparison of youth baseball pitches: is the curveball potentially harmful? Background: The curveball has been anecdotally considered as a dangerous pitch among youth pitchers, especially for their ulnar collateral ligaments. No biomechanical studies have been conducted among youth pitchers comparing different types of pitches. Hypothesis: The kinetics of the baseball throw varies significantly between the fastball, curveball, and change-up for youth pitchers. Kinematic and temporal differences are also expected. Study Design: Controlled laboratory study. Methods: Twenty-nine youth baseball pitchers (age, 12.5 +/- 1.7 years) pitched 5 fastballs, 5 curveballs, and 5 change-ups with maximum effort in an indoor laboratory setting. Data were collected with a 3-dimensional motion analysis system. Kinetic, kinematic, and temporal parameters were compared among the 3 pitches. Results: For elbow varus torque, shoulder internal rotation torque, elbow proximal force, and shoulder proximal force, the fastball produced the greatest values, followed by the curveball and then the change-up. The fastball also produced the greatest elbow flexion torque. Shoulder horizontal adduction torque and shoulder adduction torque were the least for the change-up. Several differences in body segment position, velocity, and timing were also found. Conclusions: In general, elbow and shoulder loads were the greatest in the fastball and least in the change-up. Kinematic and temporal differences were also found among the 3 pitch types. Clinical Relevance: The curveball may not be more potentially harmful than the fastball for youth pitchers. This finding is consistent with recent epidemiologic research indicating that amount of pitching is a stronger risk factor than type of pitches thrown. abstract_id: PUBMED:29090988 Youth Baseball Pitching Mechanics: A Systematic Review. Context: Pitching injuries in youth baseball are increasing in incidence. Poor pitching mechanics in young throwers have not been sufficiently evaluated due to the lack of a basic biomechanical understanding of the "normal" youth pitching motion. Objective: To provide a greater understanding of the kinetics and kinematics of the youth baseball pitching motion. Data Sources: PubMed, MEDLINE, and SPORTDiscus databases were searched from database inception through February 2017. Study Selection: A total of 10 biomechanical studies describing youth pitching mechanics were included. Study Design: Systematic review. Level Of Evidence: Level 3. Data Extraction: Manual extraction and compilation of demographic, methodology, kinetic, and kinematic variables from the included studies were completed. Results: In studies of healthy youth baseball pitchers, progressive external rotation of the shoulder occurs throughout the start of the pitching motion, reaching a maximum of 166° to 178.2°, before internally rotating throughout the remainder of the cycle, reaching a minimum of 13.2° to 17°. Elbow valgus torque reaches the highest level (18 ± 4 N·m) just prior to maximum shoulder external rotation and decreases throughout the remainder of the pitch cycle. Stride length is 66% to 85% of pitcher height. In comparison with a fastball, a curveball demonstrates less elbow varus torque (31.6 ± 15.3 vs 34.8 ± 15.4 N·m). 
Conclusion: Multiple studies show that maximum elbow valgus torque occurs just prior to maximum shoulder external rotation. Forces on the elbow and shoulder are greater for the fastball than the curveball. abstract_id: PUBMED:31711366 Impact of shoe and cleat type on youth baseball pitching biomechanics. Shoulder and elbow injuries among baseball pitchers of various ages and abilities continue to rise despite exhaustive efforts analysing pitch type and count; however, it has yet to be determined whether footwear plays a role in altering pitching mechanics and, subsequently, injury susceptibility. Therefore, the purpose of the study was to investigate the effect footwear might have on youth baseball pitching mechanics. Data were collected on eleven youth baseball pitchers wearing two different shoes on two different surface inclinations. A repeated measures ANOVA was utilised to determine differences between upper and lower extremity joint kinematics and kinetics (p < 0.05). Results indicate wearing moulded cleats elicited significantly greater amounts of shoulder internal rotation torque, angle and velocity as well as elbow varus torque. Turf shoes caused an increased plantarflexion joint angle in the stride leg ankle and shoulder external rotation torque, angle and velocity. The findings of this study suggest that the footwear worn by a youth baseball pitcher does alter the shoulder and elbow dynamics in the dominant throwing arm as well as the amount of ankle plantarflexion in the stride leg. abstract_id: PUBMED:25367016 Arm pain in youth baseball players: a survey of healthy players. Background: Overuse injury in youth baseball players is increasing in prevalence, and these injuries have been correlated with pitching mechanics and pitch counts/types. Prior studies regarding arm pain in these athletes have focused simply on presence or absence of pain during the season rather than on detailed factors related to arm pain with respect to frequency, severity, and associated performance and psychosocial effect. Hypothesis/Purpose: The goal of this study was to investigate frequency, quality, and effect of arm pain in healthy youth baseball players. The hypothesis was that arm pain will affect a majority of healthy baseball players and will be associated with adverse psychosocial effects. Study Design: Descriptive epidemiological study. Methods: A novel survey focusing on arm pain in youth baseball players was developed for the purpose of this study. Survey questions were formulated by a consortium of trainers, clinicians, and coaches. Surveys were administered to healthy youth baseball players throughout the states of New Jersey and New York. Results: A total of 203 healthy players completed the survey; 23% of players (n=47) reported a prior overuse injury. Only 26% and 20% of players reported that their arm never hurt when throwing or the day after throwing, respectively; 30% of players reported that arm pain at least sometimes caused them to have less fun playing; and 46% of players reported at least once being encouraged to keep playing despite having arm pain. Pitchers were more likely to report arm pain while throwing and the day after throwing and to indicate that arm pain held them back from being a better player (all P < .05). Those with prior overuse injury were more likely to have arm pain while throwing, to have arm fatigue during a game or practice, and to be encouraged to keep playing despite having pain (all P < .05).
Conclusion: A majority of healthy (actively competing) youth baseball players report at least some baseline arm pain and fatigue, and many players suffer adverse psychosocial effects from this pain. abstract_id: PUBMED:38251298 Assessing Biomechanical and Clinical Outcomes of an Elbow Orthosis Intervention in Youth Baseball: Preliminary Results. Proper orthosis design may help youth baseball players develop safer pitching mechanics to prevent elbow injuries. This study evaluated the impact of a custom elbow orthosis on pitching biomechanics and adverse events. Ten 11-12-year-old players (mean age 11.5 years) from a regional league team were recruited. The inclusion criterion was at least two years of baseball experience. Six players were randomly assigned to the orthosis group, with four in the control group. Anthropometric data and baseline characteristics were recorded. A video analysis assessed elbow flexion angle during pitching at baseline and at 2 months. The frequency of orthosis wear was also tracked. Adverse events during twice-weekly practices were documented. Post-study surveys evaluated orthosis comfort, stability, and safety perceptions. In the orthosis group (n = 6), four participants showed improved elbow flexion angle, and two of the six participants showed almost no change. The overall median difference was 23.5°. In the control group (n = 4), three participants showed improvement, with a median improvement of 5.5°. Twelve adverse events, including pain, were reported by players not wearing orthoses, whereas no events occurred with orthosis use. Individual players in the control group or who did not wear the orthosis correctly experienced multiple episodes of pain from pitching over the study period. This preliminary study indicates that a custom elbow orthosis can optimize pitching biomechanics and prevent adverse events in youth baseball players over the course of two months. abstract_id: PUBMED:32232067 Youth Single-Sport Specialization in Professional Baseball Players. Background: An increasing number of youth baseball athletes are specializing in playing baseball at younger ages. Purpose: The purpose of our study was to describe the age and prevalence of single-sport specialization in a cohort of current professional baseball athletes. In addition, we sought to understand the trends surrounding single-sport specialization in professional baseball players raised within and outside the United States (US). Study Design: Cross-sectional study; Level of evidence, 3. Methods: A survey was distributed to male professional baseball athletes via individual team athletic trainers. Athletes were asked if and at what age they had chosen to specialize in playing baseball to the exclusion of other sports, and data were then collected pertaining to this decision. We analyzed the rate and age of specialization, the reasons for specialization, and the athlete's perception of injuries related to specialization. Results: A total of 1673 professional baseball athletes completed the survey, representing 26 of the 30 Major League Baseball (MLB) organizations. Less than half (44.5%) of professional athletes specialized in playing a single sport during their childhood/adolescence. Those who reported specializing in their youth did so at a mean age of 14.09 ± 2.79 years. MLB players who grew up outside the US specialized at a significantly earlier age than MLB players native to the US (12.30 ± 3.07 vs 14.89 ± 2.24 years, respectively; P < .001).
Additionally, MLB players raised in the US recalled a significantly higher incidence of sustaining an injury attributed to specializing in baseball than MLB athletes raised outside the US (27.7% vs 20.6%, respectively; P = .05). Conclusion: This study challenges the current trends toward early youth sport specialization, finding that the majority of professional baseball athletes studied did not specialize as youth and that those who did specialize did so at a mean age of 14 years. With the potential cumulative effects of pitching and overhead throwing on an athlete's arm, the trend identified in this study toward earlier specialization within baseball is concerning. abstract_id: PUBMED:29353376 Injury Prevention in Baseball: from Youth to the Pros. Purpose Of Review: This review examines recent literature on injury epidemiology and prevention in the sport of baseball from youth to professional levels. Recent Findings: Overuse is the predominant mechanism of injury in youth baseball players. Newer stretching and exercise protocols may help prevent these injuries. At the professional level, however, overuse has not been proven to correlate with increased injury incidence, but pitch counts are still monitored. There continues to be a rise in operations performed for baseball-related injuries, including ulnar collateral ligament (UCL) reconstruction, particularly in younger athletes. As the level of play increases, there is significant loss of participation due to injuries, such as the UCL in the upper extremity and the hamstrings in the lower extremity. Baseball is a widely popular sport, which has led to a focus on injury epidemiology and prevention. While the majority of research regarding baseball injuries focuses on pitchers, fielders and catchers are also at risk. In an attempt to decrease non-contact injuries in the upper and lower extremities, stretching and strengthening exercises are vitally important. Because injury profiles demonstrate significant variability from youth to professional baseball, unique prevention strategies are likely necessary at each level. More research is needed to develop and validate appropriately targeted injury prevention programs. abstract_id: PUBMED:34778475 Pitching Behaviors in Youth Baseball: Comparison With the Pitch Smart Guidelines. Background: The Pitch Smart guidelines aim to limit youth baseball pitching behaviors associated with overuse injuries. Despite many youth baseball leagues being compliant with the guidelines, during tournaments, pitch count restrictions or guidelines are often not followed. Purpose: To perform a quantitative analysis of pitch counts in youth baseball players and evaluate compliance with regard to the Pitch Smart guidelines in the tournament setting. Study Design: Cross-sectional study; Level of evidence, 3. Methods: Included in the analysis were 100 youth baseball teams that competed in the 8-and-under to 14-and-under age divisions during the 2019 tournament season. Pitching data were compared with the Pitch Smart guidelines. Violations were identified as (1) exceeding maximum daily pitch count, (2) inadequate rest between pitching events, and (3) pitching more than 1 event on the same day. Pitcher and game factors were analyzed for possible relationships to guideline violations using mixed-effects negative binomial regression models, with comparisons of violations using rate ratios (RRs). Results: Analysis included 1046 pitchers and 2439 games. 
There were 1866 total Pitch Smart guideline violations, with 48.6% of pitchers having at least 1 violation. Inadequate rest was the most common reason for violation, with noncompliance occurring in 43.3% of pitchers. The highest rate of any violation (0.32 per appearance) occurred in the 8-and-under age division. High-volume pitchers had increased violation rates in each category compared with low-volume pitchers (P < .001). Violation rates more than doubled when pitchers participated in ≥5 consecutive games without a rest day compared with a single game (RR, 2.48; P < .001). Conclusion: Noncompliance with Pitch Smart guidelines in tournament settings occurred in more than 90% of teams and almost half of all pitchers. Factors associated with noncompliance included younger pitcher age, high-volume pitching, and pitching in multiple consecutive games. Education of tournament directors, coaches, parents, and athletes regarding pitching guidelines is warranted in order to limit the risk of injury. abstract_id: PUBMED:27864504 Youth Baseball Pitching Stride Length: Normal Values and Correlation With Field Testing. Background: Pitching biomechanical analysis has been recommended as an important component of performance, injury prevention, and rehabilitation. Normal values for youth pitching stride length have not been established, leading to application of normative values found among professional pitchers to youth pitchers. Hypotheses: The average youth pitching stride length will be significantly less than that of college and professional pitchers. There will be a positive correlation between stride length, lower extremity power, balance, and pitching experience. Study Design: Prospective cohort study. Level Of Evidence: Level 3. Methods: Ninety-two youth baseball pitchers (aged 9-14 years) met the inclusion/exclusion criteria and completed the study. Stride length was recorded using a Dartfish video system over 3 maximal-effort pitches. Both intra- and interrater reliability were calculated for the assessment of stride length. Double-leg vertical jump, single-leg stance time, leg length, weight, age, and pitching experience were also recorded. Results: Mean (SD) stride length was 66.0% (7.1%) of height. Stride length was correlated (P < 0.01) with vertical jump (0.38), pitching experience (0.36), and single-leg balance (0.28), with excellent intra- and interrater reliability (0.985 or higher). No significant correlations between stride length and body weight, leg length, or age existed. Conclusions: There was a significant difference between youth pitching stride length and the current published norms for older and more elite throwers. There was a positive correlation between stride length and lower extremity power, pitching experience, and single-leg balance. Clinical Relevance: Two-dimensional analysis of stride length allows for the assessment of pitching biomechanics in a practical manner. These values can be used as return-to-pitching parameters after an injury and for designing injury prevention and performance programs. abstract_id: PUBMED:28968146 Changes in Youth Baseball Pitching Biomechanics: A 7-Year Longitudinal Study. Background: Pitching biomechanics are associated with performance and risk of injury in baseball. Previous studies have identified biomechanical differences between youth and adult pitchers but have not investigated changes within individual young pitchers as they mature.
Hypothesis: Pitching kinematics and kinetics will change significantly during a youth pitcher's career. Study Design: Descriptive laboratory study. Methods: Pitching biomechanics were captured in an indoor laboratory with a 12-camera, 240-Hz motion analysis system for 51 youth pitchers who were in their first season of organized baseball with pitching. Each participant was retested annually for the next 6 years or until he was no longer pitching. Thirty kinematic and kinetic parameters were computed and averaged for 10 fastballs thrown by each player. Data were statistically analyzed for the 35 participants who were tested at least 3 times. Within-participant changes for each kinematic and kinetic parameter were tested by use of a mixed linear model with random effects (P < .05). Least squares means for sequential ages were compared via Tukey's honestly significant difference test (P < .05). Results: Three kinematic parameters measured at the instant of foot contact (stride length, lead foot placement to the closed side, and trunk separation) increased with age. With age, shoulder external rotation at foot contact decreased while maximum shoulder external rotation increased. Shoulder and elbow forces and torques increased significantly with age. Year-to-year changes were most significant between 9 and 13 years of age for kinematics and between 13 and 15 years for normalized kinetics (i.e., scaled by body weight and height). Conclusion: During their first few years, youth pitchers improve their kinematics. Elbow and shoulder kinetics increase with time, particularly after age 13. Thus, prepubescent pitchers may work with their coaches to improve the motions and flexibility of the players' bodies and the paths of their arms. Once proper mechanics are developed, adolescent pitchers can focus more on improving strength and power. Answer: Based on the biomechanical comparison of youth baseball pitches, the curveball may not be more potentially harmful than the fastball for youth pitchers. A controlled laboratory study involving twenty-nine youth baseball pitchers (aged approximately 12.5 years) compared the kinetics of the baseball throw among the fastball, curveball, and change-up. The results indicated that for elbow varus torque, shoulder internal rotation torque, elbow proximal force, and shoulder proximal force, the fastball produced the greatest values, followed by the curveball and then the change-up. The fastball also produced the greatest elbow flexion torque. Kinematic and temporal differences were found among the three pitch types, but in general, elbow and shoulder loads were greatest in the fastball and least in the change-up. This suggests that the curveball may not be more harmful than the fastball for youth pitchers, and the amount of pitching is a stronger risk factor than the type of pitches thrown (PUBMED:18055920). Additionally, a systematic review of youth baseball pitching mechanics found that forces on the elbow and shoulder are greater for the fastball than the curveball. The review included 10 biomechanical studies and showed that elbow valgus torque, which is a key indicator of stress on the elbow, was higher for the fastball compared to the curveball (PUBMED:29090988). These findings are consistent with recent epidemiologic research indicating that the type of pitch may not be as significant a risk factor for injury as the amount of pitching.
Therefore, while the curveball has been anecdotally considered dangerous, the scientific evidence does not necessarily support the notion that it is more harmful than other pitches like the fastball when thrown by youth pitchers.
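Several of the studies above compare joint kinetics within the same pitchers across pitch types or test conditions using repeated-measures designs. The sketch below shows one way such a within-pitcher comparison could be set up; the torque values are simulated and the column names are assumptions for illustration, not the authors' datasets.

```python
# Illustrative within-subject comparison of elbow varus torque across pitch
# types (fastball, curveball, change-up). All values below are simulated.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
pitch_means = {"fastball": 35.0, "curveball": 32.0, "changeup": 28.0}  # N*m, invented

rows = []
for pitcher in range(1, 30):                 # 29 hypothetical youth pitchers
    for pitch, mean_torque in pitch_means.items():
        rows.append({"pitcher": pitcher,
                     "pitch": pitch,
                     "varus_torque": rng.normal(mean_torque, 4.0)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: does torque differ across pitch types within pitchers?
print(AnovaRM(data=df, depvar="varus_torque", subject="pitcher", within=["pitch"]).fit())
```

A paired design like this is what lets a relatively small sample (here 29 simulated pitchers) detect systematic differences among pitch types.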
Instruction: Is there a role for enhanced recovery after laparoscopic bariatric surgery? Abstracts: abstract_id: PUBMED:31372891 Implementation of an enhanced recovery program for bariatric surgery. Introduction: Enhanced recovery after surgery (ERAS) programs have been successfully implemented in several surgical fields; however, there have been mixed results observed in bariatric surgery. Our institution implemented an enhanced recovery program with specific pre-, intra-, and post-operative protocols aimed at patients, nursing staff, and physicians. The aim of the study was to assess the effectiveness of the ERAS program. Methods: Patients who underwent bariatric surgery prior to the implementation of the enhanced recovery program in the calendar year 2015 were compared to those who had surgery after implementation in 2017. Data for our institution were drawn from the Premier Hospital Database. Poisson and quantile regressions were used to examine the association between ERAS protocol and LOS and cost, respectively. Logistic regression was used to assess the impact of ERAS on 30-day complications and readmissions. Results: 277 bariatric surgical procedures were performed in the pre-ERAS group, compared to 348 procedures post-ERAS. While there was a 25.6% increase in volume, there was no statistical difference between the patient populations or the type of procedure performed between the 2 years. A decrease in length of stay was observed from 2.77 days in 2015 to 1.77 days in 2017 (p < 0.001), while median cost was also cut from $11,739.03 to $9482.18 (p < 0.001). The 30-day readmission rate also decreased from 7.94% to 2.86% (p = 0.011). After controlling for other factors, ERAS protocol was associated with decreased LOS (IRR 0.65, p < 0.001), cost (-$2256.88, p < 0.001), and risk of 30-day readmission (OR 0.37, p = 0.011). Conclusion: The implementation of a standardized enhanced recovery program resulted in reduced length of stay, cost, and 30-day readmissions. Total costs saved were greater than $800,000 in one calendar year. This study highlights that the value of an enhanced recovery program can be observed in bariatric surgery, benefiting both patients and hospital systems. abstract_id: PUBMED:25892343 Is there a role for enhanced recovery after laparoscopic bariatric surgery? Preliminary results from a specialist obesity treatment center. Background: There has been a relative lack of research on the effect of enhanced recovery in the context of morbid obesity surgery. Objectives: To determine if the application of enhanced recovery after surgery (ERAS) principles can contribute to reducing postoperative hospital length of stay after bariatric surgery, controlling for other factors that may influence safe discharge on the first postoperative day. Setting: University teaching hospital, United Kingdom. Methods: Between February 2011 and December 2014, prospectively collected data on all patients undergoing laparoscopic bariatric surgery under the care of a single surgeon were reviewed. From January 2012, all patients were enrolled in an ERAS protocol and were assessed for fitness for early discharge (within 24 hr from the operation). Baseline patient characteristics and additional concomitant procedures data were compared for patients treated before and after implementation of the ERAS protocol; 30-day readmission data were analyzed for patients discharged on the first postoperative day and those discharged later.
The effect of the implementation of the ERAS protocol on discharge on the first postoperative day was analyzed using multivariate analysis, while taking into account the effects of potential confounders (e.g., age, gender, American Society of Anesthesiologists [ASA] grade, concomitant surgical procedures, etc.). Results: Two hundred and eighty-eight consecutive patients underwent bariatric surgery. Of these, 278 (96.5%) were potentially suitable for early discharge, while 10 (3.5%) patients developed an acute postoperative complication that delayed discharge irrespective of the effect of ERAS. All these patients required a reoperation within 48 hours and therefore were not considered suitable for early discharge and were not included in the statistical analysis. During the entire study period, 100 of 278 (36%) patients were discharged on the first postoperative day, 28.5% after laparoscopic Roux-en-Y gastric bypass (LRYGB) and 60.9% after laparoscopic sleeve gastrectomy (LSG); 178 of 278 (64%) patients were discharged after ≥ 2 days (mean: 2.58, range: 2-5). After implementation of the ERAS protocol in January 2012, the rate of patients discharged on the first postoperative day increased significantly from 1.6% to 39.7% after LRYGB (P < .01). Early discharge increased from 50% to 67.5% after LSG; although this change did not reach statistical significance (P = .294), it nevertheless represents a clinically relevant result. Four (4%) patients were readmitted after having been discharged on the first postoperative day, compared with 10 (5.3%) patients readmitted after having been discharged after ≥ 2 postoperative days. This difference was not statistically significant (P = .620). Conclusions: The implementation of an enhanced recovery program after bariatric surgery is feasible, well tolerated, and can significantly reduce the length of hospital stay without increasing readmission rates. Controlling for several possible confounders, implementation of the ERAS protocol remained the strongest predictor of discharge on the first postoperative day after laparoscopic bariatric surgery. abstract_id: PUBMED:32077059 The Impact of an Enhanced Recovery Protocol for Patients Undergoing Laparoscopic Revisional Bariatric Surgery. Enhanced Recovery After Surgery (ERAS) protocols have been instituted in various subspecialties of surgery. This study aims to provide evidence that ERAS protocols are safe and feasible in revisional bariatric surgery. A retrospective chart review was performed for all patients who underwent conversion from laparoscopic gastric band (LAGB) or sleeve gastrectomy (SG) to Roux-en-Y gastric bypass (RYGB) from January 2016 to February 2018 at a single independent academic medical center. We calculated the average LOS for these patients as well as the 30-day readmission and 30-day reoperation rates. Median length of stay (LOS) was 1 day (range 1-5) with 92.9% of all patients leaving by postoperative day 3. No patients were readmitted to the hospital within 30 days and none required reoperation. abstract_id: PUBMED:31221424 Enhanced recovery after bariatric surgery. Enhanced recovery after surgery (ERAS) protocols are care programs based on scientific evidence and focused on postoperative recovery. They encompass all aspects of patient care and require multidisciplinary management, with the participation of diverse specialists. The implementation of these protocols is being extended to several abdominal and extra-abdominal surgeries, including bariatric approaches.
Diverse specialists with wide experience in the management of morbidly obese patients have taken part in the working group that developed this protocol. A bibliographic search on ERAS in bariatric surgery was performed in several databases to evaluate the current scientific evidence, establishing evidence levels and recommendations according to the GRADE methodology. The items included in this protocol are separated into preoperative, perioperative and postoperative guidelines. abstract_id: PUBMED:38286675 Clinical practice guidelines for perioperative care in bariatric surgery 2023: Adapted ERAS (enhanced recovery after surgery) guidelines with consensual voting of the working group of the Joint Bariatric and Metabolic Surgery Section of the Czech Surgery…. Introduction: Currently, bariatric surgery is the most effective treatment for morbid obesity. It provides sustained weight loss and has demonstrated positive effects on obesity-related comorbidities. The number of procedures performed worldwide has seen a sharp increase in the past twenty years. Therefore, an effort has been made to establish a consensus in perioperative care based on best evidence. Methods: The working group of the Joint Bariatric and Metabolic Surgery Section of the Czech Surgery Society and Czech Society of Obesitology prepared clinical practice guidelines for the ERAS (enhanced recovery after surgery) concept in perioperative care in bariatric surgery. The working group based its guidelines on ERAS guidelines published in 2021. The working group adopted the original text and then adapted the text and added its comments to specific items as appropriate. Electronic voting of all members of the working group was the final phase, by which the strength of consensus was expressed with respect to individual elements of the guidelines. Results: The Czech working group reached a consensus with ERABS (enhanced recovery after bariatric surgery) guidelines for most elements. The quality of evidence is low for some interventions of the ERAS protocol for bariatric surgery. Therefore, extrapolation from other surgeries and fields is needed for evidence-based practice. Conclusion: The guidelines are intended for clinical practice in bariatric surgery with the ERAS protocol based on updated evidence and guidelines. They are based on recent and comprehensive ERAS guidelines adopted and adapted by the Czech working group of the Joint Bariatric and Metabolic Surgery Section of the Czech Surgery Society and Czech Society of Obesitology. Some supplementations and specifications are reflected in comments added to the Czech version.
Twenty patients (9.9%) were revisional cases. Mean (standard deviation, SD) BMI and age at time of surgery were 43.2 (± 6.2) kg/m2 and 46 (± 11.3) years, respectively. Median (range) surgical time was 118 (45-255) minutes. Overall postoperative complication rate was 7.4%, with 6 (3.0%) patients developing grade III-IV complications according to the Clavien-Dindo classification. Median (range) LOS was 2 (1-50) days, with mean (SD) LOS of 2.3 (± 3.6) days. Overall, 36.6% of patients were discharged by first postoperative day and 77.7% by second postoperative day. Readmission rate was 4.5%. No mortality was observed during the study period. Conclusions: According to the results of the present study, ERAS in primary and revisional bariatric surgery is safe and feasible, with short LOS, low morbidity and readmission rates, and no mortality. A significant reduction of mean LOS was progressively noted over the study period. abstract_id: PUBMED:36701011 Enhanced Recovery After Surgery Protocol in Bariatric Surgery Leads to Decreased Complications and Shorter Length of Stay. Purpose: Enhanced recovery after surgery (ERAS) programs have been shown in some specialties to improve short-term outcomes following surgical procedures. There is no consensus regarding the optimal perioperative care for bariatric surgical patients. The purpose of this study was to develop a bariatric ERAS protocol and determine whether it improved outcomes following surgery. Materials And Methods: An IRB-approved prospectively maintained database was retrospectively reviewed for all patients undergoing bariatric surgery from October 2018 to January 2020. Propensity matching was used to compare post-ERAS implementation patients to pre-ERAS implementation. Results: There were 319 patients (87 ERAS, 232 pre-ERAS) who underwent bariatric operations between October 2018 and January 2020. Seventy-nine patients were kept on the ERAS protocol whereas 8 deviated. Patients who deviated from the ERAS protocol had a longer length of stay when compared to patients who completed the protocol. The use of any ERAS protocol (completed or deviated) reduced the odds of complications by 54% and decreased length of stay by 15%. Furthermore, patients who completed the ERAS protocol had an 83% reduction in odds of complications and 31% decrease in length of stay. Similar trends were observed in the matched cohort with 74% reduction in odds of complications and 26% reduction in length of stay when ERAS was used. Conclusions: ERAS protocol decreases complications and reduces length of stay in bariatric patients. abstract_id: PUBMED:35236576 Anesthesia and Enhanced Recovery After Surgery in Bariatric Surgery. The Enhanced Recovery After Surgery Society published guidelines for bariatric surgery reviewing the evidence and providing specific care recommendations. These guidelines emphasize preoperative nutrition, multimodal analgesia, postoperative nausea and vomiting prophylaxis, anesthetic technique, nutrition, and mobilization. Several studies have since evaluated these pathways, showing them to be safe and effective at decreasing hospital length of stay and postoperative nausea and vomiting. This article emphasizes anesthetic management in the perioperative period and outlines future directions, including the application of Enhanced Recovery After Surgery principles in patients with extreme obesity, diabetes, and metabolic disease and standardization of the pathways to decrease heterogeneity. 
abstract_id: PUBMED:29488124 Meta-Analysis of Enhanced Recovery Protocols in Bariatric Surgery. Background: Enhanced recovery after surgery (ERAS) guidelines, fast-track protocols, and alternative clinical pathways have been widely promoted in a variety of disciplines, leading to improved outcomes in post-operative morbidity and length of stay (LOS). This meta-analysis assesses the implications of standardized management protocols in bariatric surgery. Methods: The PRISMA guidelines were adhered to. Databases were searched with the application of pre-defined inclusion and exclusion criteria. Results were reported as mean differences or pooled odds ratios (OR) with 95% confidence intervals (95% CI). Individual protocols and surgical approaches were assessed through subgroup analysis, and sensitivity analysis of methodological quality was performed. Results: A total of 1536 studies were screened; 13 studies were eventually included for meta-analysis involving a total of 6172 patients. Standardized perioperative techniques were associated with a savings of 19.5 min in operative time (p < 0.01), as well as a LOS shortened by 1.5 days (p < 0.01). Pooled post-operative morbidity rates also favored enhanced recovery care protocols (OR 0.7, 95% CI 0.6-0.9, p < 0.01). Conclusion: Bariatric surgery involves a complex cohort of patients who require high-quality evidence-based care to improve outcomes. Consensus guidelines on the feasibility of ERAS and alternative clinical pathways are required in the setting of bariatric surgery. abstract_id: PUBMED:38380206 Transforming Bariatric Surgery Outcomes: The Pivotal Role of Enhanced Recovery After Surgery (ERAS) Protocols in Patient-Centered Care. Bariatric surgery is a critical strategy in managing morbid obesity. Enhanced recovery after surgery (ERAS) protocols have revolutionized perioperative care in this field. This systematic review aims to synthesize current evidence on the impact of ERAS protocols on patient-centered outcomes in bariatric surgery. A comprehensive search across multiple databases was conducted, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies involving adult patients undergoing bariatric surgery and focusing on the implementation and outcomes of ERAS protocols were included. Data extraction and analysis emphasized patient recovery, well-being, and satisfaction. Eleven studies met the inclusion criteria. The review revealed that ERAS protocols are associated with reduced postoperative recovery times, decreased hospital stays, and enhanced patient satisfaction. Notably, ERAS protocols effectively reduced complications and optimized resource utilization in bariatric surgery. Comparative insights from non-bariatric surgeries highlighted the versatility and adaptability of ERAS protocols across different surgical disciplines. ERAS protocols significantly improve patient-centered outcomes in bariatric surgery. Their adoption facilitates a patient-focused approach, accelerating recovery and enhancing overall patient well-being. The findings advocate for the broader implementation of ERAS protocols in surgical care, emphasizing the need for continuous refinement to meet evolving healthcare demands. This review supports the paradigm shift toward integrating ERAS protocols in bariatric surgery and potentially other surgical fields. Answer: Yes, there is a role for enhanced recovery after laparoscopic bariatric surgery.
Enhanced Recovery After Surgery (ERAS) programs have been successfully implemented in bariatric surgery and have shown to be beneficial in improving postoperative outcomes. Studies have demonstrated that the application of ERAS principles can significantly reduce the length of hospital stay without increasing readmission rates (PUBMED:25892343). The implementation of a standardized enhanced recovery program has resulted in reduced length of stay, cost, and 30-day readmissions in bariatric surgery (PUBMED:31372891). Furthermore, ERAS protocols have been found to be safe and feasible in revisional bariatric surgery, with a median length of stay of 1 day and no patients requiring readmission or reoperation within 30 days (PUBMED:32077059). The implementation of ERAS protocols in bariatric surgery is associated with decreased complications and shorter length of stay (PUBMED:36701011). Additionally, a meta-analysis has shown that standardized perioperative techniques associated with ERAS lead to a reduction in operative time, length of stay, and post-operative morbidity rates (PUBMED:29488124). ERAS protocols have also been adapted and adopted in various countries, with guidelines being developed for clinical practice in bariatric surgery based on updated evidence (PUBMED:38286675). The application of ERAS in primary and revisional bariatric surgery has been reported to be safe and feasible, with short length of stay, low morbidity and readmission rates, and no mortality (PUBMED:31123991). Anesthesia management is also an integral part of ERAS protocols in bariatric surgery, emphasizing preoperative nutrition, multimodal analgesia, and postoperative nausea and vomiting prophylaxis (PUBMED:35236576). Overall, the evidence suggests that ERAS protocols play a pivotal role in enhancing recovery after laparoscopic bariatric surgery, leading to improved patient-centered outcomes, reduced recovery times, and optimized resource utilization (PUBMED:38380206).
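PUBMED:31372891 above reports its ERAS effects as an incidence rate ratio for length of stay (from a Poisson model) and an odds ratio for 30-day readmission (from a logistic model). The sketch below shows one way such models could be set up; the simulated data, column names, and effect sizes are assumptions for illustration only, not the study's dataset or results.

```python
# Illustrative Poisson (length of stay) and logistic (readmission) models
# comparing pre- vs post-ERAS patients. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "eras": rng.integers(0, 2, n),     # 1 = treated under the ERAS protocol
    "age": rng.normal(45, 11, n),
})
# Invented effects: ERAS shortens stay and lowers readmission risk.
df["los_days"] = rng.poisson(np.exp(1.0 - 0.4 * df["eras"] + 0.002 * df["age"]))
p_readmit = 1 / (1 + np.exp(-(-2.5 - 0.9 * df["eras"] + 0.01 * df["age"])))
df["readmit_30d"] = rng.binomial(1, p_readmit)

los_model = smf.glm("los_days ~ eras + age", data=df,
                    family=sm.families.Poisson()).fit()
print(np.exp(los_model.params["eras"]))      # incidence rate ratio (IRR) for ERAS

readmit_model = smf.logit("readmit_30d ~ eras + age", data=df).fit(disp=False)
print(np.exp(readmit_model.params["eras"]))  # odds ratio (OR) for ERAS
```

Exponentiating the coefficient on the ERAS indicator is what turns the model estimates into the IRR and OR figures quoted in the abstract.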
Instruction: Do male and female cowbirds see their world differently? Abstracts: abstract_id: PUBMED:23544049 Do male and female cowbirds see their world differently? Implications for sex differences in the sensory system of an avian brood parasite. Background: Male and female avian brood parasites are subject to different selection pressures: males compete for mates but do not provide parental care or territories and only females locate hosts to lay eggs. This sex difference may affect brain architecture in some avian brood parasites, but relatively little is known about their sensory systems and behaviors used to obtain sensory information. Our goal was to study the visual resolution and visual information gathering behavior (i.e., scanning) of brown-headed cowbirds. Methodology/principal Findings: We measured the density of single cone photoreceptors, associated with chromatic vision, and double cone photoreceptors, associated with motion detection and achromatic vision. We also measured head movement rates, as indicators of visual information gathering behavior, when exposed to an object. We found that females had significantly lower density of single and double cones than males around the fovea and in the periphery of the retina. Additionally, females had significantly higher head-movement rates than males. Conclusions: Overall, we suggest that female cowbirds have lower chromatic and achromatic visual resolution than males (without sex differences in visual contrast perception). Females might compensate for the lower visual resolution by gazing alternatively with both foveae in quicker succession than males, increasing their head movement rates. However, other physiological factors may have influenced the behavioral differences observed. Our results bring up relevant questions about the sensory basis of sex differences in behavior. One possibility is that female and male cowbirds differentially allocate costly sensory resources, as a recent study found that females actually have greater auditory resolution than males. abstract_id: PUBMED:29436765 Cognition and the brain of brood parasitic cowbirds. Cowbirds are brood parasites. Females lay their eggs in the nests of other species, which then incubate the cowbird eggs and raise the young cowbirds. Finding and returning to heterospecific nests presents cowbirds with several cognitive challenges. In some species, such as brown-headed cowbirds (Molothrus ater), females but not males search for and remember the locations of potential host nests. We describe recent research on sex differences in cognition and the hippocampus associated with this sex difference in search for host nests. Female brown-headed cowbirds perform better than males on some, but not all, tests of spatial memory and females show a pattern of adult hippocampal neurogenesis not found in males or in closely related non-parasitic birds. Because of the apparent specialization of the hippocampus, brown-headed cowbirds may be a good model in which to examine spatial information processing in the avian hippocampus and we also describe recent research on the spatial response properties of brown-headed cowbird hippocampal neurons. abstract_id: PUBMED:22567131 When less is best: female brown-headed cowbirds prefer less intense male displays. Sexual selection theory predicts that females should prefer males with the most intense courtship displays. 
However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies and this is also likely to be true for preferences elucidated by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display. abstract_id: PUBMED:34405287 Decision-making at the time of parasitism: cowbirds prefer to peck eggs with weaker shells. Interspecific avian brood parasites, like cuckoos and cowbirds, lay their eggs in nests of other species, the hosts, which take over the entire parental care of the parasite's eggs and chicks. This breeding strategy requires decisions that may affect the parasite's reproductive success. During the breeding season, cowbirds search for host nests and revisit them to monitor its progress and parasitize at the time host laying begins. When visiting hosts nests, they repeatedly peck the nest contents trying to destroy one or more eggs. This behaviour favours parasite's offspring by reducing the competition for food with nestmates. We evaluated if the egg-pecking behaviour of female shiny (Molothrus bonariensis) and screaming (M. rufoaxillaris) cowbirds is affected by the strength and the size of the eggs they find in the nest. We presented to wild-caught females artificial clutches with two natural eggs that differ in size and shell strength. We found that female shiny and screaming cowbirds adjusted egg-pecking behaviour based on the strength but not on the size of the eggs. When differences in strength between eggs were high, both cowbird species pecked more frequently the egg with the weaker shell, increasing the probability of a successful puncture. Our results indicate that female cowbirds can discriminate eggs through the strength of the shell, and by choosing the weaker egg to peck, they increase the probability of puncturing. abstract_id: PUBMED:28835558 Nest prospecting brown-headed cowbirds 'parasitize' social information when the value of personal information is lacking. 
Brood parasites face considerable cognitive challenges in locating and selecting host nests for their young. Here, we test whether female brown-headed cowbirds, Molothrus ater, could use information acquired from observing the nest prospecting patterns of conspecifics to influence their own patterns of nest selection. In laboratory-based experiments, we created a disparity in the amount of personal information females had about the quality of nests. Females with less personal information about the quality of two nests spent more time investigating the nest that more knowledgeable females investigated. Furthermore, there was a strong negative relationship between individual's ability to track nest quality using personal information and their tendency to copy others. These two contrasting strategies for selecting nests are equally effective, but lead to different patterns of parasitism. abstract_id: PUBMED:12083613 The presumption of sociality: social learning in diverse contexts in brown-headed cowbirds (Molothrus ater). Data are presented on social and vocal learning in cowbirds (Molothrus ater) housed in large aviaries and given more degrees of freedom than in conventional experimental studies. The studies show that social and vocal outcomes are facultative responses to social contexts. Several findings are reviewed: First, cowbirds quickly self-organize into groups by age and sex; second, opportunities to interact across age and sex do exist and affect courtship competence; third, female cowbirds organize themselves differently in the presence and absence of male competition; and fourth, young, naive cowbirds show rapid and differential sensitivity to group dynamics. Taken as a whole, the data show that social Umwelten are dynamic, developmental ecologies. abstract_id: PUBMED:26248015 Sex differences in retention after a visual or a spatial discrimination learning task in brood parasitic shiny cowbirds. Females of avian brood parasites, like the shiny cowbird (Molothrus bonariensis), locate host nests and on subsequent days return to parasitize them. This ecological pressure for remembering the precise location of multiple host nests may have selected for superior spatial memory abilities. We tested the hypothesis that shiny cowbirds show sex differences in spatial memory abilities associated with sex differences in host nest searching behavior and relative hippocampus volume. We evaluated sex differences during acquisition, reversal and retention after extinction in a visual and a spatial discrimination learning task. Contrary to our prediction, females did not outperform males in the spatial task in either the acquisition or the reversal phases. Similarly, there were no sex differences in either phase in the visual task. During extinction, in both tasks the retention of females was significantly higher than expected by chance up to 50 days after the last rewarded session (∼85-90% of the trials with correct responses), but the performance of males at that time did not differ than that expected by chance. This last result shows a long-term memory capacity of female shiny cowbirds, which were able to remember information learned using either spatial or visual cues after a long retention interval. abstract_id: PUBMED:8730816 Species and sex differences in hippocampus size in parasitic and non-parasitic cowbirds. 
To test the hypothesis that selection for spatial abilities which require birds to locate and to return accurately to host nests has produced an enlarged hippocampus in brood parasites, three species of cowbird were compared. In shiny cowbirds, females search for host nests without the assistance of the male; in screaming cowbirds, males and females inspect hosts' nests together; in bay-winged cowbirds, neither sex searches because this species is not a brood parasite. As predicted, the two parasitic species had a relatively larger hippocampus than the non-parasitic species. There were no sex differences in relative hippocampus size in screaming or bay-winged cowbirds, but female shiny cowbirds had a larger hippocampus than the male. abstract_id: PUBMED:29903628 How female and male physicians' communication is perceived differently. Objective: This paper is based on a 2017 Baltimore International Conference on Communication in Healthcare (ICCH) plenary presentation by the first author and addresses how female and male physicians' communication is perceived and evaluated differently. Female physicians use patient-centered communication, which is the interaction style clearly preferred by patients. Logically, patients should be much more satisfied with female than male physicians. However, research shows that this is not the case. Methods: This article provides an overview on how female and male physician communication is evaluated and perceived differently by patients and discusses whether and how gender stereotypes can explain these differences in perception and evaluation. Results: Male physicians obtain good patient outcomes when verbally expressing patient-centeredness while female physicians have patients who report better outcomes when they adapt their nonverbal communication to the different needs of their patients. Conclusion: The analysis reveals that existing empirical findings cannot simply be explained by adherence or non-adherence to gender stereotypes. Female physicians do not always get credit for showing gender-role-congruent behavior. All in all, female and male physicians do not obtain credit for the same behaviors. Practice Implications: Physician communication training might place different emphases for female and male physicians. abstract_id: PUBMED:9236024 Host responses to cowbirds near the nest: cues for recognition. Two cues (bill shape and vocalizations) that yellow warblers, Dendroica petechia, may use to recognize brown-headed cowbirds, Molothrus ater, were examined experimentally. Female yellow warblers responded more intensely to a control cowbird than to one with a longer, thinner bill, which suggests that bill shape was an important cue in recognition. Responses involved close approaches, 'seet' alarm calls and sitting in the nest. Female yellow warblers distinguished between vocalizations of male and female cowbirds and those of a song sparrow, Melospiza melodia (control). They responded most intensely to female cowbird 'chatter calls' and least intensely to the sparrow song; responses to male cowbird 'perched song' were intermediate in intensity. The results suggest that female yellow warblers recognize cowbirds on the basis of bill shape and vocalizations. Furthermore, these results, coupled with previous findings, suggest that female yellow warblers distinguish between male and female cowbirds by vocalizations but not appearance. Answer: Yes, male and female cowbirds do seem to see their world differently.
Research has shown that there are sex differences in the sensory systems of brown-headed cowbirds, which are avian brood parasites. Specifically, female cowbirds have a significantly lower density of single and double cone photoreceptors than males, which suggests that they have lower chromatic and achromatic visual resolution. Despite this, females exhibit higher head-movement rates, which may be a compensatory behavior for their lower visual resolution, allowing them to gather visual information more quickly by gazing alternatively with both foveae in quicker succession than males (PUBMED:23544049). Additionally, female cowbirds have been found to perform better than males on some tests of spatial memory, which is associated with their role in searching for and remembering the locations of potential host nests. This difference in cognitive abilities is also reflected in the pattern of adult hippocampal neurogenesis, which is different in females compared to males and non-parasitic birds (PUBMED:29436765). These findings suggest that female and male cowbirds allocate their sensory resources differently, possibly due to the different ecological pressures and selection pressures they face related to their brood parasitic lifestyle.
Instruction: Can we improve the detection rate and interobserver agreement in capsule endoscopy? Abstracts: abstract_id: PUBMED:37036637 Interobserver agreement on landmark and flexure identification in colon capsule endoscopy. Purpose: When an optical colonoscopy is carried out, Scope Guide can assist the endoscopist in determining the position of the endoscope. In colon capsule endoscopy (CCE), this support is not available. To our knowledge, the interobserver agreement on landmark identification has never been studied. This study aims to investigate the interobserver agreement on landmark identification in CCE. Methods: An interobserver study was carried out comparing the landmark identification (the ileocecal valve, hepatic flexure, splenic flexure, and anus) in CCE investigations between an external private contractor and three in-house CCE readers with different levels of experience. All CCE investigations analyzed in this study were carried out as a part of the Danish screening program for colorectal cancer. Patients were between 50 and 74 years old with a positive fecal immunochemical test (FIT). A random sample of 20 CCE investigations was taken from the total sample of more than 800 videos. Results: Overall interobserver agreement on all landmarks was 51%. Interobserver agreement on the first cecal image (ileocecal valve), hepatic flexure, splenic flexure, and last rectal image (anus) was 72%, 29%, 22%, and 83%, respectively. The overall interobserver agreement, including only examinations with adequate bowel preparation (n = 16), was 54%, and for individual landmarks, 73%, 32%, 24%, and 85%. Conclusion: Overall interobserver agreement on all four landmarks from CCE was poor. Measures are needed to improve landmark identification in CCE investigations. Artificial intelligence could be a possible solution to this problem. abstract_id: PUBMED:22858420 Can we improve the detection rate and interobserver agreement in capsule endoscopy? Background: Data about strategies for improving the diagnostic ability of capsule endoscopy readers are lacking. Aim: (1) To evaluate the detection rate and the interobserver agreement among readers with different experience; (2) to verify the impact of a specific training (hands-on training plus expert tutorial) on these parameters. Methods: 17 readers reviewed 12 videos twice; between the two readings they underwent the training. The identified small bowel findings were described by a simplified version of Structured Terminology and classified as clinically significant/non-significant. Findings identified by the readers were compared with those identified by three experts (Reference Standard). Results: The Reference Standard identified 26 clinically significant findings. The overall mean detection rate of readers for significant findings was low (about 50%) and did not change after the training (46.2% and 46.4%, respectively). There was no difference in the detection rate among readers with different experience. The interobserver agreement with the Reference Standard in describing significant findings was moderate (k = 0.44; CI95%: 0.39-0.50) and did not change after the training (k = 0.44; CI95%: 0.38-0.49) or after stratifying readers according to their experience. Conclusions: Both the interobserver agreement and the detection rate of significant findings are low, regardless of the readers' experience. Our training did not significantly increase the performance of readers with different experience.
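Several of the abstracts in this record report reader agreement as Cohen's kappa (for example, the k = 0.44 agreement with the Reference Standard above). As a toy illustration of how that statistic is computed, the sketch below compares two readers' classifications of the same findings; the labels are invented and are not from the cited studies.

```python
# Toy example: Cohen's kappa between two capsule endoscopy readers who label
# the same findings as clinically "significant" or "non-significant".
from sklearn.metrics import cohen_kappa_score

reader_a = ["significant", "non-significant", "significant", "significant",
            "non-significant", "significant", "non-significant", "significant"]
reader_b = ["significant", "non-significant", "non-significant", "significant",
            "non-significant", "significant", "significant", "significant"]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```

Unlike raw percent agreement, kappa discounts the agreement expected by chance, which is why the abstracts describe values around 0.4-0.5 as only moderate.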
abstract_id: PUBMED:30574327 Intra and inter-observer agreement on polyp detection in colon capsule endoscopy evaluations. Background: Colon capsule endoscopy is a promising technique for evaluation of the colon, but its reproducibility is still unknown. Objective: This study assesses intra and inter-observer agreement in evaluations of colon capsule endoscopy videos. Methods: Forty-two complete colon capsule endoscopy investigations were analysed by three experts and two beginners. Intra-observer agreement was assessed in paired readings of two experts and two beginners. Agreement was determined by the intraclass correlation coefficient: poor (<0.5), moderate (0.5-0.75), good (0.75-0.9) and excellent (>0.9). Results: Agreement on 'indication for a following colonoscopy' based on the number and size of detected polyps and bowel cleansing quality was poor among all observers. Agreement among experts on the detection of large polyps and number of polyps was moderate, but agreement on bowel cleansing quality was poor. Beginners were in moderate agreement with the experts on polyp detection. Intra-observer agreement in experts was moderate to excellent for the detection of large polyps (≥10 mm), excellent for the number of polyps, and poor to moderate for bowel cleansing quality. Intra-observer agreement in beginners was poor to moderate for all variables. Conclusions: This study shows a poor agreement on 'indication for a following colonoscopy', but a high intra and inter-observer agreement for polyp detection among experts, as well as a moderate agreement between beginners and experts. Trial registration: NCT02303756. abstract_id: PUBMED:20817579 Interobserver agreement in describing video capsule endoscopy findings: a multicentre prospective study. Background And Aim: Few studies have specifically addressed interobserver agreement in describing lesions identified during capsule endoscopy. The aim of our study is to evaluate interobserver agreement in the description of capsule endoscopy findings. Materials And Methods: Consecutive short segments of capsule endoscopy were prospectively observed by 8 investigators. Seventy-five videos were prepared by an external investigator (gold standard). The description of the findings was reported by the investigators using the same validated and standardized capsule endoscopy structured terminology. The agreement was assessed using Cohen's kappa statistic. Results: As concerns the ability to detect a lesion, the agreement with the gold standard was moderate (kappa 0.48), as well as the agreement relating to the final diagnosis (κ 0.45). The best agreement was observed in identifying the presence of active bleeding (κ 0.72), whereas the poorest agreement concerned the lesion size (κ 0.32). The agreement with the GS was significantly better in endoscopists with higher case/volume of capsule endoscopy per year. Diagnostic concordance was better in the presence of angiectasia than in the presence of polyps or ulcers/erosions. Conclusions: Correct lesion identification and diagnosis seem more likely to occur in presence of angiectasia, and for readers with more experience in capsule endoscopy reading. abstract_id: PUBMED:18058656 Esophageal capsule endoscopy in patients with suspected esophageal disease: double blinded comparison with esophagogastroduodenoscopy and assessment of interobserver variability.
Background And Study Aims: The study aim was to compare the diagnostic yield of capsule endoscopy and esophagogastroduodenoscopy (EGD) in patients with suspected esophageal disease. Secondary aims were to assess interobserver variability of capsule endoscopy readings and safety. Patients And Methods: In total, 98 patients (53 men, 53 +/- 13 years) with an indication for EGD were included. The patient population was artificially enriched to include two thirds of patients with abnormal esophageal findings at EGD, which was followed by capsule endoscopy. Capsule recordings were blindly read by three endoscopists, one from the center that recruited the patient and two from the other center. Study outcomes were the findings described on EGD and capsule endoscopy, agreement between EGD and capsule endoscopy for findings, quality of the capsule recording, and interobserver agreement for capsule endoscopy quality and findings. Results: EGD was normal in 34 patients and showed esophageal findings in 62 (esophagitis 28, hiatus hernia 21, varices 21, Barrett's esophagus 11, others 7). Average esophageal transit time of the capsule was 361 +/- 393 seconds. Capsule endoscopy was normal in 36 patients but detected esophagitis in 23, hiatus hernia in 0, varices in 23, Barrett's esophagus in 18, and others in 4. The positive predictive value of capsule endoscopy was 80.0 % and the negative predictive value was 61.1 %. Overall agreement per patient was moderate between EGD and capsule endoscopy for the per-patient (kappa = 0.42) and per-findings (kappa = 0.40) analyses. Interobserver agreement between capsule endoscopy readings was moderate for findings (kappa = 0.39) and quality assessment (kappa = 0.24). No adverse event was observed after either EGD or capsule endoscopy. Conclusion: In this study, despite artificial prevalence enrichment, capsule endoscopy showed a moderate sensitivity and specificity in the detection of esophageal diseases. abstract_id: PUBMED:30019190 Intra and interobserver agreement of narrow band imaging for the detection of head and neck tumors. Background: Narrow band imaging (NBI) in combination with white light endoscopy (WLE) has improved the accuracy for the diagnosis and follow-up of head and neck carcinomas by identifying changes in the vascular patterns of the mucosa. However, NBI evaluation is explorer-dependent and may be influenced by the learning curve. The aim of this study is to assess the intra and interobserver agreement of NBI and WLE at the office, under local anaesthesia, by either experienced or non-experienced observers. Methods: Eighty-seven images of head and neck lesions were routinely collected under WLE and NBI. A group of three experienced otolaryngologists and three medical students assessed the images after a brief training. No additional patient information was provided. The same protocol was repeated after three weeks. Intra and interobserver agreement were calculated with the kappa index. Results: NBI intraobserver agreement was substantial (κ = 0.62) and better than with WLE alone, which was moderate (κ = 0.57) in both groups. Interobserver agreement was moderate with WLE (κ = 0.58) and substantial with NBI (κ = 0.63). Both groups improved intraobserver and interobserver agreement with the implementation of NBI. Conclusions: Intra and interobserver agreement with NBI for the evaluation of head and neck lesions are substantial, and improve the results of WLE alone in both, professionals and trainees. 
abstract_id: PUBMED:16010604 Video capsule endoscopy for investigation of obscure gastrointestinal bleeding: feasibility, results, and interobserver agreement. Background And Study Aims: The aim of the study was to assess the feasibility, diagnostic yield, and interobserver agreement of capsule endoscopy in the investigation of patients with obscure or occult gastrointestinal bleeding. Patients And Methods: A total of 64 consecutive patients with occult bleeding (31 %) or overt bleeding (69 %) were assessed using capsule endoscopy after negative upper and lower endoscopy and small-bowel radiology. The quality of visualization of the small-bowel mucosa was scored from 1 (poor) to 4 (excellent). Thirty video capsule recordings with normal or abnormal findings were blindly assessed by four independent endoscopists. Interobserver agreement was evaluated using the kappa index. Results: The small bowel was completely visualized in 57/64 patients (89 %). Incomplete small-bowel transit was most commonly due to prolonged gastric retention (five patients). The mucosa visualization scores (means) for the proximal, middle, and distal thirds of the small bowel were 3.7, 3.3, and 2.2 respectively. Visualization of the distal ileum was good (≥3) in 38 % and a bleeding site was found in 45 % of patients. Push-enteroscopy was also performed in 56 patients. The results of the two techniques were similar in 37 patients, capsule endoscopy was superior in 12 patients, and push-enteroscopy was superior in seven patients. Interobserver agreement was good for bleeding and for angiodysplasia, but poor for ulcers and tumors. Mean interobserver agreement was better among experienced endoscopists than among junior endoscopists. Conclusions: Capsule endoscopy allowed the whole small intestine to be explored in 89 % of patients, with good visualization of the mucosa, except distally. Interobserver agreement was better among the experienced endoscopists and was better for red-colored abnormalities (bleeding and angiodysplasia) than for ulcers and tumors. abstract_id: PUBMED:30725289 Magnetic Steering of Capsule Endoscopy Improves Small Bowel Capsule Endoscopy Completion Rate. Background: Capsule endoscopy is currently available as a noninvasive and effective diagnostic modality to identify small bowel abnormalities, with a completion rate to the cecum between 75.1 and 95.6%. A novel magnetically controlled capsule endoscopy (MCE) system could facilitate passage of the capsule through the pylorus, thereby reducing the gastric transit time (GTT). Objective: We performed this study to determine whether magnetic steering could improve the capsule endoscopy completion rate (CECR) compared to standard protocol. Methods: Patients referred for MCE in our center from June 2017 to November 2017 were prospectively enrolled. Magnetic steering of the capsule through the pylorus was performed after standard gastric examination. CECR, GTT, pyloric transit time (PTT), and rapid gastric transit (GTT ≤ 30 min) rate were compared with a historical control group enrolled from January 2017 to May 2017. Results: CECR was significantly higher in the intervention group (n = 107) than control group (n = 120) (100% vs. 94.2%, P = 0.02), with a significantly shorter GTT (22.2 vs. 84.5 min, P < 0.001) and PTT (4.4 vs. 56.7 min, P < 0.001). Rapid gastric transit rate in the intervention group was significantly higher than the control group (58.9% vs. 15.0%, P < 0.001).
There were no statistical differences in the diagnostic yields between the two groups. Conclusions: Magnetic steering of capsule endoscopy improves small bowel CECR by reducing GTT, adding further support to MCE as a practical tool for noninvasive examination of both the stomach and small bowel. Trial registration ClinicalTrials.gov, ID: NCT03482661. abstract_id: PUBMED:19500517 Intra- and interobserver variation in capsule endoscopy reviews. Introduction: It is unclear which professional groups can and should perform reviews of capsule endoscopy (CE) exams of the GI tract. We investigate whether a junior doctor or an endoscopy nurse can review CE films with the same diagnostic results as a specialist. Material And Methods: An endoscopy nurse and a junior doctor, both with no CE experience, reviewed 30 CE films twice and noted their findings. A distinction was made between clinically important and non-important findings. An endoscopy specialist and the Given Imaging Review Service reviewed the films once each, and their findings were used as the study's gold standard. Time consumption, intra- and interobserver agreement, sensitivity and specificity were assessed. Results: The junior doctor improved in speed from 1st to 2nd review, but the nurse did not. Both performed poorly compared with Given Imaging Review Service regarding diagnostic accuracy. The junior doctor improved in diagnostic accuracy from 1st to 2nd review, while the nurse did not. The nurse showed a decrease in sensitivity from 1st to 2nd review from 89% to 62%, whereas the junior doctor's sensitivity increased from 48% to 62%. Both missed many pathological findings and their results were not reproducible. Conclusion: The nurse and junior doctor did not obtain the same diagnostic results as the gold standard. The results raise questions of whether nurses or junior doctors should perform capsule endoscopy reviews. abstract_id: PUBMED:18924357 Advances of capsule endoscopy. Presentation of the book: "Atlas of capsule endoscopy". Capsule endoscopy is a new technique which has meant a real change in clinical medicine regarding diagnosis and therapy applied to many illnesses in the digestive tract. Nowadays, thanks to the different prototypes available, capsule endoscopy can be used to study esophageal, intestinal and colonic pathologies, being mainly recommended for obscure gastrointestinal bleeding. The aim of Capsule Endoscopy Atlas, directed by Professors Herrerías and Mascarenhas, in which the inventors of the technique have also taken part together with some other worldwide renowned experts, is to disseminate the current advances in this new form of endoscopy. Answer: The detection rate and interobserver agreement in capsule endoscopy are areas of concern that have been studied in various research projects. According to the studies, both the interobserver agreement and the detection rate of significant findings are generally low, regardless of the readers' experience. For instance, one study found that the mean detection rate of overall readers for significant findings was about 50% and did not change after specific training aimed at improving these parameters (PUBMED:22858420). This suggests that simply providing training may not be sufficient to significantly increase the performance of readers with different levels of experience.
Another study assessing interobserver agreement in colon capsule endoscopy (CCE) found poor overall interobserver agreement on all four landmarks (ileocecal valve, hepatic flexure, splenic flexure, and anus) from CCE, with overall agreement being just 51% (PUBMED:37036637). This indicates a need for measures to improve landmark identification in CCE investigations, with artificial intelligence proposed as a possible solution. In terms of polyp detection in CCE evaluations, there was a poor agreement on the 'indication for a following colonoscopy' among all observers, but a high intra and inter-observer agreement for polyp detection among experts, as well as a moderate agreement between beginners and experts (PUBMED:30574327). This suggests that while there is room for improvement in some areas, there is a reasonable level of agreement among more experienced readers for certain findings. Overall, the studies suggest that while there is potential for improvement, achieving high detection rates and interobserver agreement in capsule endoscopy is challenging. The use of standardized terminology, structured training, and possibly the incorporation of advanced technologies like artificial intelligence may help improve these aspects of capsule endoscopy (PUBMED:22858420, PUBMED:37036637, PUBMED:30574327).
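Several of the abstracts cited above summarize reader agreement with Cohen's kappa (for example, k = 0.44 in PUBMED:22858420 and kappa 0.48 in PUBMED:20817579). As a purely illustrative aside, the short Python sketch below shows how that statistic is computed from two readers' per-video calls; the reader labels and ratings are hypothetical and are not data from any of the cited studies.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for
    # the agreement expected by chance from each reader's own marginal rates.
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-video calls by two readers (illustration only).
reader_1 = ["significant", "non-significant", "significant", "non-significant", "significant"]
reader_2 = ["significant", "non-significant", "non-significant", "non-significant", "significant"]
print(round(cohens_kappa(reader_1, reader_2), 2))  # 0.62 for this toy example

By the usual convention, values in the 0.4-0.6 range, as reported in these studies, are read as only moderate agreement.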
Instruction: Is there a role for Tc-99m (V) DMSA scintigraphy in ischemic colitis? Abstracts: abstract_id: PUBMED:12925713 Active inflammatory bowel disease: evaluation with 99mTc (V) DMSA scintigraphy. Purpose: To evaluate the use of pentavalent (V) technetium 99m (99mTc) dimercaptosuccinic acid (DMSA) scintigraphy for the assessment of disease activity in patients with inflammatory bowel disease (IBD). Materials And Methods: 99mTc (V) DMSA scintigraphy was performed in 76 patients. There were 36 patients with active IBD (11 with ulcerative colitis, 25 with Crohn disease), 28 patients with inactive disease (eight with ulcerative colitis, 20 with Crohn disease), and 12 patients with miscellaneous bowel disease. Sensitivity and specificity of 99mTc (V) DMSA scintigraphy in the diagnosis of IBD were calculated. In the group with active IBD, the disease activity and laboratory indices, as well as the endoscopic and histologic activity, were compared with the scanning activity index. Correlation coefficients between them were calculated with the Spearman rank test. Results: 99mTc (V) DMSA scintigraphy had a 92% (33 of 36) sensitivity and an 86% (24 of 28) specificity in the detection of active IBD. A significant correlation between disease activity indices and scintigraphy score was demonstrated. Endoscopic and histologic activity was significantly correlated (P =.005 and.02, respectively, overall disease activity) with the scanning activity score. Of the group of patients with miscellaneous bowel disease, three with ischemic colitis had negative findings at scintigraphy. Conclusion: 99mTc (V) DMSA scintigraphy provides a noninvasive, practical, and accurate assessment of IBD activity. abstract_id: PUBMED:18803355 Is there a role for Tc-99m (V) DMSA scintigraphy in ischemic colitis? Aim: To evaluate the role of pentavalent Tc-99m dimercaptosuccinic acid [Tc-99m (V) DMSA] in the diagnosis of ischemic colitis. Methods: Fourteen patients with endoscopically and histologically confirmed ischemic colitis were included in the study. Tc-99m (V) DMSA scintigraphy was performed within 2 d after colonoscopy. Images were considered positive when an area of increased activity was observed in the region of interest and negative when no abnormal tracer uptake was detected. Results: In 3 out of the 14 patients, Tc-99m (V) DMSA images showed moderate activity in the bowel. The scintigraphic results corresponded with the endoscopic findings. In the other 11 patients, no abnormal tracer uptake was detected in the abdomen. Conclusion: Besides the limited number of patients, Tc-99m (V) DMSA could not be considered as a useful imaging modality for the evaluation of ischemic colitis. abstract_id: PUBMED:9509932 Ischemic colitis: Tc-99m HMPAO leukocyte scintigraphy and correlative imaging. A 62-year-old woman became progressively ill after aorto-bifemoral graft surgery. A Tc-99m hexamethylpropyleneamine oxime (HMPAO) leukocyte scan was performed because of fever and suspected graft infection. The anterior abdominal and pelvic views demonstrated marked uptake within the sigmoid colon. Follow-up studies, including CT scan and contrast intestinal x-rays, confirmed the presence of an abnormal sigmoid loop. Subsequent surgical resection and pathological examination of the sigmoid colon revealed ischemic changes. To our knowledge, detection of ischemic colitis using Tc-99m HMPAO labeled leukocytes has not been reported. 
Clinical and routine radiological findings of ischemic colitis are typically nonspecific and may result in delayed diagnosis that can lead to high mortality. Tc-99m HMPAO leukocyte scintigraphy may be helpful in the noninvasive diagnosis of ischemic injury to the colon, especially after aortic graft surgery. abstract_id: PUBMED:7498228 Simultaneous administration of 111In-human immunoglobulin and 99mTc-HMPAO labelled leucocytes in inflammatory bowel disease. Technetium-99m hexamethylpropylene amine oxime (HMPAO) labelled leucocytes and indium-111 polyclonal immunoglobulin (IgG) were simultaneously injected into a group of 27 patients routinely referred for the investigation of inflammatory bowel disease (IBD). Ten-minute anterior abdomen and tail on detector views were obtained at 30 min, 4 h and 24 h p.i. of both tracers. The diagnosis of IBD was obtained in all cases by endoscopy with biopsy and/or surgery. Images were blindly evaluated by two experienced observers who only knew of the clinical suspicion of IBD. IBD was confirmed in 20 patients (12 with Crohn's disease and eight with ulcerative colitis). Sensitivity, specificity and accuracy were 100%, 85% and 96% respectively for labelled leucocytes and 70%, 85% and 74% for IgG. Both IgG and leucocyte scans were normal in six out of seven patients in whom a diagnosis of IBD was excluded; the remaining patient, with ischaemic colitis, was falsely positive with both agents. As far as disease extension is concerned, the IgG study localized 27 diseased segments, whereas 49 were seen with the leucocyte study. Eighty-four segments were normal and 25 showed tracer uptake with both agents. Twenty-four were positive only with the leucocyte study and two were positive only with the IgG study. Agreement between the agents was 80.7%. These results confirm that 111In-human polyclonal scintigraphy is less sensitive than 99mTc-HMPAO scintigraphy both for the diagnosis of IBD and in the evaluation of disease extension. Nevertheless, if leucocyte labelling is not available, labelled IgG can be used only for diagnostic purposes. abstract_id: PUBMED:34003444 Homozygous Factor V Leiden presenting as irreversible chronic colon ischemia resulting from inferior mesenteric vein thrombosis. A 42-year-old man presented with 6 months of unexplained left lower quadrant abdominal pain and hematochezia accompanied by weight loss despite extensive evaluations. Stool studies for pathogens were unrevealing, but an abdominal contrast-enhanced computed tomography revealed findings of chronic inferior mesenteric vein thrombosis. Colonoscopy demonstrated ulcerated strictures and gangrene confined to the sigmoid and descending colons, and biopsies confirmed changes of chronic irreversible colon ischemia. A homozygous Factor V Leiden mutation was diagnosed. The patient underwent colectomy and was treated with lifelong anticoagulation. While mesenteric venous thrombosis is a well-recognized cause of colon ischemia in hypercoagulable states, thrombosis of the inferior mesenteric vein is uncommon; when chronic it is rarely clinically apparent. Similarly, while Factor V Leiden mutation is a common hereditary thrombophilia, it uncommonly causes mesenteric venous thrombosis, and homozygotes of the mutation typically present earlier in the fourth decade and with non-mesenteric venous thromboembolism. 
This case is valuable and adds to the existing literature in describing a rare, clinically atypical, and late index presentation of homozygous Factor V Leiden mutation as chronic inferior mesenteric vein thrombosis yielding irreversible colon ischemia. abstract_id: PUBMED:32433167 Hepatobiliary Scintigraphy in Cholecystocolonic Fistula. We report a case of a 50-year-old man diagnosed with cholecystocolonic fistula (CCF), confirmed by scintigraphy. After negative initial radiologic workup, a colonoscopy revealed findings of right-sided ischemic colitis. Soon after, a febrile episode prompted a repeat abdominal CT scan demonstrating a possible CCF, later confirmed with a nuclear medicine hepatobiliary scintigraphy and treated surgically. Less than 10% of CCFs are diagnosed preoperatively. abstract_id: PUBMED:18504175 Recurrent ischemic colitis in a patient with Leiden factor V mutation and systemic lupus erythematosus with antiphospholipid syndrome. Ischemic colitis results from insufficient blood supply to the large intestine and is often associated with hypercoagulable states. The condition comprises a wide range presenting with mild to fulminant forms. Diagnosis remains difficult because these patients may present with non-specific abdominal symptoms. We report a 51-year-old female patient with known Leiden factor V mutation as well as systemic lupus erythematosus along with antiphospholipid syndrome suffering from recurrent ischemic colitis. At admission, the patient complained about abdominal pain, diarrhea and rectal bleeding lasting for 24 hours. Laboratory tests showed an increased C-reactive protein (29.5 mg/dl), while the performed abdominal CT-scan revealed only a dilatation of the descending colon along with a thickening of the bowel wall. Laparotomy was performed showing an ischemic colon and massive peritonitis. Histological examination proved the suspected ischemic colitis. Subsequently, anticoagulation therapy with coumarin and aspirin 100 was initiated. Up to the time point of a follow-up examination no further ischemic events had occurred. This case illustrates well the non-specific clinical presentation of ischemic colitis. A high index of suspicion, recognition of risk factors and a history of non-specific abdominal symptoms should alert clinicians to the possibility of ischemic disease. Early diagnosis and initiation of anticoagulation therapy or surgical intervention in case of peritonitis are the major goals of therapy. abstract_id: PUBMED:11441986 Factor V Leiden and prothrombin G20210A in relation to arterial and/or vein rethrombosis: two cases. The factor V Leiden (FV Leiden) and prothrombin G20210A mutations are the most common established genetic risk factors for deep vein thrombosis (DVT). However, the relationship between these mutations and arterial thrombotic syndromes (coronary heart disease, myocardial infarction, stroke) has not been established. Some studies have suggested a relationship between them, but other authors have considered it unlikely that these anomalies are a major risk factor for arterial thrombosis. From the clinical point of view, a question arises concerning the risk of repeated thrombosis in patients carrying one of these two mutations. The question is whether the recurrence is attributable to the mutations or to the presence of additional circumstantial risk factors.
As the risk of repeated thrombosis varies considerably from one patient to another, decisions about long-term treatment require weighing the persistence of risk factors for vascular disease (venous and arterial), especially in selected cases such as young patients or patients with thrombosis of unusual localization. abstract_id: PUBMED:10744608 Ischemic colitis associated with factor V Leiden mutation. N/A abstract_id: PUBMED:14560981 Low plasma protein Z levels in patients with ischemic colitis. Hypercoagulable states have been suggested to play an important role in the pathogenesis of ischemic colitis. Since protein Z is, as recently demonstrated, important in the regulation of coagulation, we investigated the plasma levels of protein Z in connection to factor V Leiden (FVL) and anti-phospholipid antibodies in patients with a definite diagnosis of ischemic colitis. The plasma levels of protein Z were measured using a commercially available enzyme-linked immunosorbent assay in 33 patients with ischemic colitis, 13 patients with diverticulitis, and 33 healthy controls. Mean plasma protein Z levels were 1.38 +/- 0.52 microg/ml in patients with ischemic colitis and were significantly lower compared to healthy controls (1.86 +/- 0.49 microg/ml) and patients with diverticulitis (1.72 +/- 0.53 microg/ml) (P = 0.001). Protein Z deficiency was found in cases with ischemic colitis (18.2%) compared to one with diverticulitis (7.7%) and one healthy control (3.0%). In conclusion, our results suggest that low plasma protein Z levels may play a role in the disease development in some cases with ischemic colitis. Answer: The role of Tc-99m (V) DMSA scintigraphy in the diagnosis of ischemic colitis appears to be limited. In a study evaluating the use of Tc-99m (V) DMSA scintigraphy for ischemic colitis, only 3 out of 14 patients showed moderate activity in the bowel that corresponded with endoscopic findings. The other 11 patients did not show any abnormal tracer uptake. The conclusion of this study was that Tc-99m (V) DMSA could not be considered a useful imaging modality for the evaluation of ischemic colitis due to the limited number of patients and the low detection rate (PUBMED:18803355). In contrast, Tc-99m (V) DMSA scintigraphy has been shown to have a high sensitivity (92%) and specificity (86%) in the detection of active inflammatory bowel disease (IBD), with significant correlations between disease activity indices and scintigraphy score (PUBMED:12925713). However, this does not necessarily translate to its effectiveness in diagnosing ischemic colitis, which is a different condition. Other imaging modalities, such as Tc-99m HMPAO leukocyte scintigraphy, have been reported to be helpful in the noninvasive diagnosis of ischemic injury to the colon, especially after aortic graft surgery (PUBMED:9509932). Additionally, simultaneous administration of 111In-human immunoglobulin and 99mTc-HMPAO labeled leukocytes has been used in the investigation of IBD, with 99mTc-HMPAO labeled leukocytes showing higher sensitivity and accuracy for the diagnosis and evaluation of disease extension compared to 111In-human polyclonal immunoglobulin (PUBMED:7498228). Therefore, while Tc-99m (V) DMSA scintigraphy has been found to be effective in assessing active IBD, its role in the diagnosis of ischemic colitis is not supported by the available evidence, and other imaging techniques may be more appropriate for this purpose.
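The sensitivity and specificity quoted in the answer above are rounded from the raw counts reported in PUBMED:12925713 (33 of 36 patients with active IBD detected; 24 of 28 without active disease correctly negative). A minimal, purely illustrative check of that arithmetic in Python:

# Counts reported in PUBMED:12925713 for Tc-99m (V) DMSA detection of active IBD.
true_positives, patients_with_active_ibd = 33, 36
true_negatives, patients_without_active_ibd = 24, 28

sensitivity = true_positives / patients_with_active_ibd      # 0.917 -> reported as 92%
specificity = true_negatives / patients_without_active_ibd   # 0.857 -> reported as 86%
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")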
Instruction: Measuring morbidity: self-report or health care records? Abstracts: abstract_id: PUBMED:20019091 Measuring morbidity: self-report or health care records? Background: Epidemiological surveys often rely on self-report as a measure of morbidity in a population. However, these data can also be extracted from primary care records. Objective: To compare morbidity estimates based on self-report with those obtained from primary care records. Methods: A cross-sectional survey and accompanying medical record review were carried out in all consenting adults aged ≥50 years in three general practices in North Staffordshire, UK. Self-reported morbidity was compared with computerized general practice consultation records for the 2 years prior to the survey. Results: Of the 7878 survey responders, 5889 consented to medical record review. Agreement between self-reported and consultation data was excellent for diabetes. Agreement between the two sources of data was lower for hypertension, heart problems, chest problems and eyesight problems. It was poor for deafness and falls. Conclusions: Self-report and consultation data provide comparable estimates of the prevalence of specific diagnoses such as diabetes. For other conditions, self-report and consultation records provide different measures of prevalence, and the choice of measure will depend on the morbidity being studied. abstract_id: PUBMED:23920809 Measuring the performance of electronic health records: a case study in residential aged care in Australia. Measuring the performance of electronic health records (EHR) is an important, yet unresolved challenge. Various measurements have addressed different aspects of EHR success, yet a holistic, comprehensive measurement tool needs to be developed to capture the potential EHR success variables completely. A self-administered questionnaire survey instrument was developed based on the theoretical framework of the DeLone and McLean Information Systems Success Model. It measures nine variables of EHR success: system quality, information quality, service quality, training, self-efficacy, intention to use, use, user satisfaction and net benefits. The instrument was used to measure the performance of aged care EHR systems in three aged care organizations. The results suggest that the instrument was reliable. abstract_id: PUBMED:23388252 A scoping review on health records for child-in-care. A scoping review was conducted to determine the current state of knowledge on child-in-care health records in academic literature. Eight studies describing five such health records were found. Different terms were found between countries. A key finding from the studies was that research needs to report on "what worked" to inform policy and practice for positive changes. Complete, accurate and consistent health records for child-in-care are needed that can support care and be aggregated to identify policy and practice gaps and interventions that were effective. Such health records enabled moving from reactive to proactive care for the child. Social work case data elements found in a child-in-care health record not included in a child personal health record include: court dates, dental, abuse, placement, and education. Including these data elements allows looking at the overall wellbeing and development of the child. With the exception of two, all studies reported positively on their implementation. Further, all studies advocated for continued development of a tailored child-in-care health record.
The evidence points toward child-in-care health records as a tool toward achieving healthy outcomes and policy development. abstract_id: PUBMED:10537608 Medical records as an alternative to self-report for measuring mammography utilization. A pilot study assessed whether medical records were a viable alternative to self-report for measuring mammography use in a population-based sample. Of 98 women contacted by telephone, 62 (63.3%) ultimately provided written consent to obtain their mammogram reports. Although all physicians complied with requests for records, an average of three physician contacts per woman were required and 87 percent of mammogram reports were located; therefore, records were available for only 56 percent of women contacted initially. This, coupled with the effort associated with obtaining the records, does not support the use of medical records as an alternative to self-report to measure mammography utilization in the general population. abstract_id: PUBMED:24988819 Adoption and use of electronic health records and mobile technology by home health and hospice care agencies. Objective: This report presents national estimates on the adoption and use of electronic health records and mobile technology by home health and hospice care agencies, as well as the agency characteristics associated with adoption. Methods: Estimates are based on data from the 2007 National Home and Hospice Care Survey, conducted by the Centers for Disease Control and Prevention's National Center for Health Statistics. Results: In 2007, 28% of home health and hospice care agencies adopted both electronic health records and mobile technology, while slightly over half (54%) adopted neither. Sixteen percent of agencies adopted only electronic health records. Adoption of both technologies was associated with number of patients served and agency type. Agencies that were for-profit or were jointly owned with a hospital were more likely to have adopted neither technology. Among agencies with electronic health records, the most commonly used functionalities were patient demographics and clinical notes. Among agencies with mobile technology, functionalities for the Outcome and Assessment Information Set (OASIS), e-mail, and appointment scheduling were the most commonly used. Similar percentages of agencies with electronic health records or mobile technology used clinical decision support systems, computerized physician order entry, electronic reminders for tests, and viewing of test results. abstract_id: PUBMED:25848412 Association between electronic health records and health care utilization. Background: The federal government is investing approximately $20 billion in electronic health records (EHRs), in part to address escalating health care costs. However, empirical evidence that provider use of EHRs decreases health care costs is limited. Objective: To determine any association between EHRs and health care utilization. Methods: We conducted a cohort study (2008-2009) in the Hudson Valley, a multi-payer, multiprovider community in New York State. We included 328 primary care physicians in predominantly small practices (median practice size four primary care physicians), who were caring for 223,772 patients. Data from an independent practice association was used to determine adoption of EHRs. 
Claims data aggregated across five commercial health plans were used to characterize seven types of health care utilization: primary care visits, specialist visits, radiology tests, laboratory tests, emergency department visits, hospital admissions, and readmissions. We used negative binomial regression to determine associations between EHR adoption and each utilization outcome, adjusting for ten physician characteristics. Results: Approximately half (48%) of the physicians were using paper records and half (52%) were using EHRs. For every 100 patients seen by physicians using EHRs, there were 14 fewer specialist visits (adjusted p < 0.01) and 9 fewer radiology tests (adjusted p = 0.01). There were no significant differences in rates of primary care visits, laboratory tests, emergency department visits, hospitalizations or readmissions. Conclusions: Patients of primary care providers who used EHRs were less likely to have specialist visits and radiology tests than patients of primary care providers who did not use EHRs. abstract_id: PUBMED:31160011 Measuring electronic communication networks in virtual care teams using electronic health records access-log data. Objective: To develop methods for measuring electronic communication networks in virtual care teams using electronic health records (EHR) access-log data. Methods: For a convenience sample of 100 surgical colorectal cancer patients, we used time-stamped EHR access-log data extracted from an academic medical center's EHR system to construct communication networks among healthcare professionals (HCPs) in each patient's virtual care team. We measured communication linkages between HCPs using the inverse of the average time between access events in which the source HCPs sent information to and the destination HCPs retrieved information from the EHR system. Social network analysis was used to examine and visualize communication network structures, identify principal care teams, and detect meaningful structural differences across networks. We conducted a non-parametric multivariate analysis of variance (MANOVA) to test the association between care teams' communication network structures and patients' cancer stage and site. Results: The 100 communication networks showed substantial variations in size and structures. Principal care teams, the subset of HCPs who formed the core of the communication networks, had higher proportions of nurses, physicians, and pharmacists and a lower proportion of laboratory medical technologists than the overall networks. The distributions of conditional uniform graph quantiles suggested that our network-construction technique captured meaningful underlying structures that were different from random unstructured networks. MANOVA results found that the networks' topologies were associated with patients' cancer stage and site. Conclusions: This study demonstrates that it is feasible to use EHR access-log data to measure and examine communication networks in virtual care teams. The proposed methods captured salient communication patterns in care teams that were associated with patients' clinical differences. abstract_id: PUBMED:24152578 Use of electronic health records in residential care communities. Key Findings: In 2010, only 17% of residential care communities in the United States used electronic health records.
Residential care communities that used electronic health records were more likely to be larger, not-for-profit, chain-affiliated, colocated with another care setting, and in a nonmetropolitan statistical area. The types of information most commonly tracked electronically by residential care communities that used electronic health records were medical provider information, resident demographics, individual service plans, and lists of residents' medications and active medication allergies. Four in 10 residential care communities that used electronic health records also had support for electronic exchange of health information with service providers; nearly 25% could exchange with pharmacies, and 17% could exchange with physicians. abstract_id: PUBMED:22490301 New paradigms for measuring clinical performance using electronic health records. Measures of provider success are the centerpiece of quality improvement and pay-for-performance programs around the globe. In most nations, these measures are derived from administrative records, paper charts and consumer surveys; increasingly, electronic patient record systems are also being used. We use the term 'e-QMs' to describe quality measures that are based on data found within electronic health records and other related health information technology (HIT). We offer a framework or typology for e-QMs and describe opportunities and impediments associated with the transition from old to new data sources. If public and private systems of care are to effectively use HIT to support and evaluate health-care system quality and safety, the quality measurement field must embrace new paradigms and strategically address a series of technical, conceptual and practical challenges. abstract_id: PUBMED:27677443 Care Consistency With Documented Care Preferences: Methodologic Considerations for Implementing the "Measuring What Matters" Quality Indicator. A basic tenet of palliative care is discerning patient treatment preferences and then honoring these preferences, reflected by the inclusion of "Care Consistency With Documented Care Preferences" as one of 10 "Measuring What Matters" quality indicators. Measuring What Matters indicators are intended to serve as a foundation for quality measurement in health care settings. However, there are a number of logistic and practical issues to be considered in the application of this quality indicator to clinical practice. In this brief methodologic report, we describe how care consistency with documented care preferences has been measured in research on patients near the end of life. Furthermore, we outline methodologic challenges in using this indicator in both research and practice, such as documentation, specificity and relevance, preference stability, and measuring nonevents. Recommendations to strengthen the accuracy of measurement of this important quality marker in health care settings include consistent recording of preferences in the medical record, considerations for selection of treatment preferences for tracking, establishing a protocol for review of preferences, and adoption of a consistent measurement approach. Answer: When measuring morbidity, both self-report and health care records can be used, but they may yield different results depending on the condition being studied.
A cross-sectional survey and medical record review conducted in North Staffordshire, UK, found that agreement between self-reported morbidity and consultation data was excellent for diabetes but lower for other conditions such as hypertension, heart problems, chest problems, eyesight problems, deafness, and falls. This suggests that self-report and consultation records provide comparable estimates of prevalence for specific diagnoses like diabetes, but for other conditions, the choice of measure will depend on the morbidity being studied (PUBMED:20019091). In contrast, a pilot study assessing the viability of medical records as an alternative to self-report for measuring mammography use found that obtaining records was effortful and not always successful, with records available for only 56% of women initially contacted. This suggests that medical records may not be a practical alternative to self-report for measuring utilization of certain health services in the general population (PUBMED:10537608). The use of electronic health records (EHRs) has been associated with changes in health care utilization, such as fewer specialist visits and radiology tests for patients of primary care providers who used EHRs compared to those who did not (PUBMED:25848412). EHRs also enable the measurement of electronic communication networks in virtual care teams, which can capture salient communication patterns associated with patients' clinical differences (PUBMED:31160011). Overall, the choice between self-report and health care records for measuring morbidity may depend on various factors, including the specific condition being measured, the practicality of obtaining accurate records, and the context in which the data is being used.
Instruction: Stroke physician versus stroke neurologist: can anyone thrombolyse? Abstracts: abstract_id: PUBMED:25533873 Stroke physician versus stroke neurologist: can anyone thrombolyse? Background/aim: The aim of this study is to compare the outcomes of thrombolysis under standard clinical settings between subjects treated by a stroke neurologist versus those treated by a non-neurologist stroke physician. Methods: Single-centre, observational cohort study of subjects thrombolysed in a calendar year, stratified according to the physician type authorising thrombolysis. Endpoints measured include proportion of subjects with symptomatic intracranial haemorrhage, door-to-needle time, change in National Institute of Health Stroke Scale and discharge destination. Results: Forty-nine subjects with a mean age 76 ± 16 years underwent thrombolysis, 21 were under the care of a stroke neurologist and 28 by a non-neurologist stroke physician. No symptomatic intracranial haemorrhages were observed. There was no difference in terms of door-to-needle time, proportion of individuals with haemorrhagic transformation, mortality or discharge destination between the two groups. Conclusion: Due to the single-centre, observational nature of this study, the equivalent outcomes between those thrombolysed by a stroke neurologist versus those thrombolysed by a stroke physician must be interpreted with caution pending further studies. Nevertheless, in the current setting, no signal for harm has been detected. This study is unique as it is the first to our knowledge comparing outcomes between a neurologist and non-neurologist following thrombolysis. abstract_id: PUBMED:24842502 Scope of practice and autonomy of physician assistants in rural versus urban emergency departments. Objectives: This was a study of the scope of practice and autonomy of emergency medicine (EM) physician assistants (PAs) practicing in rural versus urban emergency departments (EDs). Methods: Using the American Academy of Physician Assistants (AAPA) Masterfile, a random sample of 200 U.S. EM PAs were surveyed, with oversampling of an additional 200 rural PAs. Location was classified by zip code-based rural-urban commuting area codes, and responses were compared about conditions managed, procedures performed, and physician supervision between rural versus urban groups. Results: A total of 237 responses were received from PAs in 44 U.S. states, of which (201) were valid responses (105 rural, 96 urban) from PAs currently practicing in EDs (59.3% exclusion-adjusted response rate). Compared to urban PAs, rural PAs more frequently managed cardiac arrest (67% vs. 44%), stroke (86% vs. 72%), multisystem trauma (83% vs. 70%), active labor (44% vs. 23%), and critically ill children (82% vs. 65%) in the past year. They were more likely to have performed intubation (65% vs. 44%), needle thoracostomy (21% vs. 8%), and tube thoracostomy (46% vs. 26%). Rural PAs more often reported never having a physician present in the ED (38% vs. 0%) and less often reported always having a physician present (50% vs. 98%). Rural PAs were also less likely to report that a physician evaluates more than 75% of their patients (8% vs. 18%) and more likely that a physician never evaluates all of their patients (19% vs. 7%). Conclusions: Rural PAs reported a broader scope of practice, more autonomy, and less access to physician supervision than urban PAs. abstract_id: PUBMED:11845630 Resource use by physician assistant services versus teaching services. 
Objective: To compare resource use in physician assistant (PA) services versus teaching services for 5 medical diagnostic groups in a large community teaching hospital, while controlling for attending physician. Methods: The sample was drawn from an administrative database of patients admitted to an internal medicine service in which attending physicians' cases rotated between the PA and teaching services on a preassigned schedule. Diagnoses included in the study were stroke (diagnosis-related group [DRG] 014), pneumonia (DRG 089), acute myocardial infarction (DRG 122), congestive heart failure (DRG 127), and gastrointestinal hemorrhage (DRG 174). Cases discharged between January 1, 1994, and June 30, 1995, were evaluated. Inclusion of cases was based on the Integrated Inpatient Management Model (IIMM). Resource use was measured using direct costs expressed as relative value units (RVUs) for radiology, laboratory, and total resource use, as well as for length of stay (LOS). Results: After adjustment for admitting physician effects and other covariates, the mixed model analyses indicated that PAs used fewer total ancillary resources for patients with pneumonia and fewer laboratory resources for patients with stroke, pneumonia, and congestive heart failure than did residents, on average. No significant differences were noted in average LOS or use of radiology resources between PA and teaching services. In no cases did PAs use significantly more resources than residents. However, significantly higher mortality among pneumonia cases was detected for PAs. Conclusions: In the same practice setting, PAs used resources as effectively as, or more effectively than, residents. abstract_id: PUBMED:26332628 Stroke physician versus stroke neurologist: can anyone thrombolyse? A general hospital experience and perspective. N/A abstract_id: PUBMED:30371165 Advanced Practice Provider Versus Physician-Only Outpatient Follow-Up After Acute Myocardial Infarction. Background Physician shortages and reimbursement changes have led to greater use of advanced practice providers (APPs). Prevalence of and outcomes associated with APP care following myocardial infarction are unknown. Methods and Results We examined outpatient cardiology or primary care visits within 90 days post-myocardial infarction among 29 477 Medicare-insured patients aged ≥65 years from 364 hospitals in Acute Coronary Treatment Intervention Outcomes Network Registry. We compared medication adherence, all-cause readmission risk, mortality, and major adverse cardiovascular events between patients seen by APPs versus physicians only. Overall, 11% of myocardial infarction patients were treated by an APP. Patients seen by APPs were more likely to have diabetes mellitus (37% versus 33%) and heart failure (20% versus 16%), be discharged to a nursing facility (21% versus 13%) and had more outpatient visits within 90 days post-discharge (median 6 versus 5, P<0.01 for all) than those seen by physicians only. Adherence to evidence-based medications (adjusted odds ratio, 0.98; 95% confidence interval, 0.89-1.08) and readmission risks (adjusted hazard ratio, 1.11; 95% confidence interval, 0.99-1.26) were similar between patients seen by APPs versus physicians only. Risks of 90-day mortality (adjusted hazard ratio, 1.18; 95% confidence interval, 0.98-1.42) and major adverse cardiovascular events (adjusted hazard ratio, 1.06; 95% confidence interval, 0.90-1.23) were also similar between patients seen by APPs versus physicians only.
Conclusions APPs were likely used to provide more frequent monitoring of high-risk post-MI patients. Medication adherence, readmission risk, mortality, and major adverse cardiovascular events did not differ substantially between patients seen by physician-APP teams and those seen by physicians only. abstract_id: PUBMED:26563662 Intravenous thrombolysis in ischemic stroke: Therapeutic perspectives. New therapeutic strategies are under evaluation to improve the treatment of acute ischemic stroke (AIS). Approaches combining intravenous (IV) thrombolysis with recombinant tissue plasminogen activator (rt-PA) and antithrombotic agents are currently being evaluated. The combination of IV rt-PA and aspirin showed a high rate of intracranial hemorrhage whereas the association of rt-PA and eptifibatide seems more promising. The results of recent studies evaluating the administration of eptifibatide or argatroban in conjunction with conventional IV thrombolysis with rt-PA are expected to clarify the safety and efficacy of these treatments. More fibrin-specific plasminogen activators, tenecteplase and desmoteplase, are also being investigated. These fibrinolytic agents showed a favorable safety profile but their efficacy in AIS remains uncertain. While phase III studies, DIAS-3 and DIAS-4, evaluating IV desmoteplase up to nine hours after stroke onset did not meet the primary endpoint, the results of studies comparing IV tenecteplase and IV rt-PA are expected. abstract_id: PUBMED:2889347 Coronary arteriography performed by a physician assistant. Funding constraints and an oversupply of cardiologists mitigate against continued training of increasing numbers of cardiology fellows. In some institutions, the workload of the catheterization laboratory is an overriding factor. The ability of a physician assistant to perform some of this work was tested to determine if the number of fellows and the content of the fellowship training program could be uncoupled from the catheterization laboratory workload. Among the first 150 patients in whom coronary arteriography was performed by a physician assistant, no patient died or had a myocardial infarction or stroke. Two patients (1.3%) had minor complications: a retinal embolus and an infected puncture site. The complication rate in 150 consecutive cases performed by fellows was also 1.3%, a small myocardial infarction and a transient ischemic attack. Procedure times for the physician assistant and for the fellows were 41 +/- 13 and 44 +/- 18 minutes for preoperative patients and 62 +/- 24 and 70 +/- 20 minutes for postoperative patients. Corresponding fluoroscopy times were 11 +/- 5 and 12 +/- 7 minutes for the preoperative and 22 +/- 12 and 20 +/- 6 for postoperative patients. Only preoperative fluoroscopy times were statistically different (p = 0.02). Thus, substituting a physician assistant for a fellow to perform coronary arteriography is an option in institutions at which the number of studies exceeds the training needs of fellows. abstract_id: PUBMED:38312477 Evaluation of Continuity of Care: What Can Physician Survey Add? Background: The evaluation of continuity of care is usually based on the indicators of the frequency of patients' contacts with specific providers. There are some first attempts to use physician survey for the evaluation. Objective: To obtain additional information on the continuity of care in Russia by a newly developed physician questionnaire with detailed questions related to the specific areas of providers' interaction in the health system.
Methods: The questionnaire was developed to increase the number of characteristics and indicators for the evaluation of informational, longitudinal and interpersonal continuity. Each of 17 questions was pretested by a group of experts. A small physician survey was conducted through a mobile app with 2690 respondents. The sample is skewed toward young and urban respondents; attempts were made to increase its representativeness. Results And Discussion: We identified areas of low continuity of care in Russia. Access to electronic medical records is limited. Outpatient and inpatient physicians rarely contact each other. Primary care physicians are unaware of a substantial proportion of their patients' hospital admissions and emergency visits, which leaves them unprepared for follow-up treatment. Home visits to patients with heart attack and stroke after hospital discharge are rare. The lack of timely transfer of hospital cases to rehabilitative and social care settings also limits continuity of care. However, the small scale of the survey and its online administration limit its representativeness and robustness. A larger survey with the same or a similar questionnaire could improve these results. Conclusion: A physician survey can be a useful instrument for evaluating continuity of care. The content of the suggested survey can be valuable for collecting international evidence. abstract_id: PUBMED:34806602 Physician Diagnosis and Knowledge of Mild Cognitive Impairment. Background: Older adults with mild cognitive impairment (MCI) receive fewer guideline-concordant treatments for multiple health conditions than those with normal cognition. Reasons for this disparity are unclear. Objective: To better understand this disparity, we describe physician understanding and experience with patient MCI, particularly physician identification of MCI, ability to distinguish between MCI and dementia, and perspectives on education and training in MCI and dementia. Methods: As part of a mixed-methods study assessing the influence of patient MCI on physician recommendations for acute myocardial infarction and stroke treatments, we conducted a descriptive qualitative study using semi-structured interviews of physicians from three specialties. Key question topics included participants' identification of MCI, impressions of MCI and dementia awareness within their practice specialty, and perspectives on training and education in MCI. Results: The study included 22 physicians (8 cardiologists, 7 neurologists, and 7 internists). We identified two primary themes: 1) a lack of adequate understanding of the distinction between MCI and dementia; and 2) variation in physician approaches to identifying whether an older adult has MCI. Conclusion: These findings suggest that physicians have a poor understanding of MCI. Our results suggest that interventions that improve physician knowledge of MCI are needed. abstract_id: PUBMED:29875917 Thrombolytic treatment of vertebro-basilar ischemic stroke in N'Djamena, Republic of Chad. Ischemic stroke is rare among the French Armed Forces, but Army doctors on missions may be called on to manage it, in particular in sub-Saharan Africa. It is therefore a critical emergency requiring an early multidisciplinary approach, with limited means, involving neurologists from mainland France, in order to ensure optimal patient treatment. We here report the case of a patient with vertebro-basilar ischemic stroke treated with intravenous thrombolysis.
Answer: The question of whether any physician can administer thrombolysis, or if it should be limited to stroke neurologists, has been explored in the literature. A study comparing outcomes of thrombolysis between patients treated by a stroke neurologist and those treated by a non-neurologist stroke physician found no significant differences in terms of symptomatic intracranial hemorrhages, door-to-needle time, proportion of individuals with hemorrhagic transformation, mortality, or discharge destination (PUBMED:25533873). This suggests that, at least in the setting of this study, non-neurologist stroke physicians were able to thrombolyse without an increased risk of harm compared to their neurologist counterparts. However, the authors caution that the findings should be interpreted carefully due to the single-center, observational nature of the study, and further research is needed to confirm these results. The broader question of whether any physician can perform complex medical procedures typically associated with specialists is also addressed in other contexts. For example, rural emergency medicine physician assistants (PAs) were found to manage a wider range of conditions and perform more procedures with greater autonomy compared to urban PAs, often with less access to physician supervision (PUBMED:24842502). This indicates that non-physician practitioners can expand their scope of practice in certain settings, potentially including thrombolysis for stroke. In another study, PAs in a community teaching hospital used resources as effectively as, or more effectively than, residents for certain medical conditions, including stroke (PUBMED:11845630). This further supports the idea that with appropriate training and in the right circumstances, non-specialist providers can deliver care effectively for conditions that might traditionally be managed by specialists. In summary, while traditionally thrombolysis might be considered within the purview of stroke neurologists, evidence suggests that with proper training and in certain clinical settings, other physicians, including non-neurologist stroke physicians, may be able to safely and effectively perform thrombolysis. However, the generalizability of these findings and the optimal conditions for non-specialists to perform such procedures require further investigation.
Instruction: Diabetes services in the UK: fourth national survey; are we meeting NSF standards and NICE guidelines? Abstracts: abstract_id: PUBMED:15941771 Diabetes services in the UK: fourth national survey; are we meeting NSF standards and NICE guidelines? Background: Previous surveys of children's diabetes service provision in the UK have shown gradual improvements but continuing deficiencies. Aim: To determine whether further improvements in services have occurred. Methods: A questionnaire was mailed to all paediatricians in the UK identified as providing care for children and adolescents with diabetes. Responses were compared with results of three previous surveys, and with recommendations in the Diabetes NSF and the NICE type 1 diabetes guidelines. Results: Replies were received from 187 consultant paediatricians in 169 centres looking after children; 89% expressed a special interest in diabetes, 98% saw children in a designated diabetic clinic, and 95% of clinics now have more than 40 patients. In 98% of the clinics there was a specialist nurse (82% now children's trained), but 61% of clinics had a nurse:patient ratio <1:100; 39% of clinics did not have a paediatric dietician and in 78% there was no access to psychology/psychiatry services in clinics. Glycated haemoglobin was measured routinely at clinics in 86%, annual screening for retinopathy performed in 80%, and microalbuminuria in 83%. All centres now have local protocols for ketoacidosis, but not for children undergoing surgery (90%) or severe hypoglycaemia (74%). Mean clinic HbA1c levels were significantly lower in the clinics run by specialists (8.9%) than generalists (9.4%). There have been incremental improvements over the last 14 years since the surveys began, but only two clinics met all the 10 previously published recommendations on standards of care. Conclusions: The survey shows continuing improvements in organisational structure of services for children with diabetes but serious deficiencies remain. Publication and dissemination of the results of the previous surveys may have been associated with these improvements and similar recurrent service review may be applicable to services for other chronic childhood conditions. abstract_id: PUBMED:12819585 Microalbuminuria testing in diabetes: is a dipstick as effective as laboratory tests? The National Service Framework (NSF) for Diabetes (Department of Health, 2002) aims to improve care for people with diabetes. Primary care trusts need to demonstrate the provision of cost-effective and evidence-based care. The National Institute for Clinical Excellence guidelines (NICE, 2002) recommend that microalbuminuria is tested annually in people with diabetes, either by a dipstick method or by a laboratory method. A mini systematic review was undertaken to establish whether a dipstick method of detecting microalbuminuria is as effective as a laboratory method. Four databases were searched and studies comparing dipstick and laboratory methods were included. A hand search was also included. The review was further limited to the Micral test II compared to a reference laboratory standard. Four studies met the inclusion criteria and it was concluded that the Micral test II has a high sensitivity but not very high specificity with lower positive predictive value; it was therefore found to be adequate as a screening tool but not as a diagnostic tool. Therefore the Micral II test is not as effective as a laboratory comparator.
abstract_id: PUBMED:15516100 A clinical study and the national service framework for diabetes. Clinical assessment and management for anyone who has diabetes may be influenced by the development of the National Service Framework (NSF) for Diabetes. Through a case study, this article explains how the NSF for Diabetes and other recent NHS documentation has influenced our approach to managing a type 2 diabetic patient whose feet are categorised as 'high risk'. Some of the potential shortfalls of the NSF for Diabetes are also discussed in this context. abstract_id: PUBMED:14998272 The Diabetes National Service Framework--a real opportunity? The Diabetes National Service Framework (NSF) represents a new style of this relatively new policy instrument. It sets clear 10-year targets but leaves a large part of implementation decision-making to local teams. It is clear that the central priorities of people with diabetes are therapeutic partnership, expert guidance and integrated service provision. These underpinning themes transcend all of the more specific objectives of the NSF. Realising both the themes and the specific objectives will, in many localities, mean tackling quite challenging transformational programmes. They will probably need to include changed ways of working and information systems development, as well as constructive partnership between primary and secondary care and between many different healthcare disciplines. This may appear a formidable task but having diabetes firmly on the 'must do' healthcare agenda for the first time creates a tremendous opportunity. The way physician specialists in diabetes, the natural local leaders, rise to the challenge will be a key determinant of whether this NSF leads to real improvements in the experience and outcome of care for people with diabetes. abstract_id: PUBMED:20647980 The impact of the NSF for Diabetes on patient empowerment. The National Service Framework (NSF) for Diabetes was developed by the UK government to tackle variations in care for patients with diabetes. This article critically analyses the impact of this policy and its effect on health care, focusing specifically on Standard 3 of the NSF. This standard discusses patient empowerment and involving patients in their own care. A number of education programmes have been set up to empower patients to better manage their diabetes, and two of these are explored in detail: Dose Adjustment for Normal Eating (DAFNE) and Diabetes Education and Self-Management for Ongoing and Newly Diagnosed (DESMOND). Research has shown however, that many people are not aware of these initiatives, suggesting that more needs to be done to ensure that patients, nurses and other health professionals are aware of the services available for people with diabetes in the community. Furthermore, research suggests that there is a lack of emotional and psychological support for patients living with this chronic condition. It is suggested here that nurses can provide patients with this support, as well as with information to help them access services to manage their diabetes. This, together with education about diet and exercise, and appropriate referrals, will enable the nurse to ensure that all patients receive high-quality care. abstract_id: PUBMED:17319298 'It appears many people with diabetes are unaware of the NSF guidelines'. N/A abstract_id: PUBMED:11816421 An NSF (National Service Framework) for diabetes. N/A abstract_id: PUBMED:12030149 Preparing for the NSF: a clinical governance model for diabetic care. 
The imminent introduction of a National Service Framework for diabetes has highlighted the need to develop an appropriate nursing model of care. With over 1.4 million people affected by diabetes in the UK, a multidisciplinary approach is required. This paper discusses the benefits and difficulties associated with this during the development of a clinical governance model. abstract_id: PUBMED:11997409 Lipid lowering in patients with diabetes mellitus: what coronary heart disease risk threshold should be used? Objective: To examine the impact for the UK population of providing statin treatment for diabetic patients for the primary prevention of coronary heart disease at a coronary event risk lower than currently recommended by the National Service Framework (NSF) for coronary heart disease. Design: Cross sectional survey. Setting: England 1998. Participants: Nationally representative sample of 6879 subjects aged 35-74 years living in private households. Main Outcome Measures: The proportion of the UK population recommended for statin treatment according to the NSF for coronary heart disease, and the proportion of the population with diabetes at a coronary disease event risk of ≥15% over 10 years. Results: Of the 6879 subjects with total cholesterol measurements, 218 (3.2%) had diabetes mellitus. In this nationally representative sample, 6.3% of the subjects (95% confidence interval (CI), 5.7% to 6.9%) were candidates for statin treatment for the secondary prevention of coronary heart disease, including 0.7% (95% CI 0.5% to 0.9%) with diabetes. A further 2.4% (95% CI 2.0% to 2.8%), including 0.4% (0.2% to 0.6%) with diabetes, were identified as candidates for primary prevention of coronary heart disease according to the NSF for coronary heart disease. Lowering the primary prevention threshold for statin treatment to a coronary event risk of ≥15% over 10 years in diabetic patients identified an additional 0.5% of the population. Conclusions: Extending statin treatment to diabetic patients at a coronary heart disease risk of ≥15% over 10 years would have a relatively small numerical impact in the UK population. Thus patients with diabetes mellitus should, as a minimum, be targeted for statin treatment at this level of risk. abstract_id: PUBMED:15116497 Improving diabetes care through the NSF and GMS contract. In response to the GMS Contract, which comes into effect this month, and the National Service Framework for Diabetes, one GP practice undertook an audit of patients with the condition. This revealed some gaps in the data recorded. The practice team worked to improve the information they gathered and entered on the computer system, and thus improve patient care in this area. Answer: The fourth national survey of diabetes services in the UK aimed to determine whether services for children and adolescents with diabetes have improved and if they are meeting the standards set by the National Service Framework (NSF) and the National Institute for Clinical Excellence (NICE) guidelines. The survey, which involved mailing questionnaires to UK paediatricians providing diabetes care, revealed that while there have been incremental improvements over the past 14 years, serious deficiencies still exist. For instance, 98% of clinics had a specialist nurse, but 61% had a nurse-to-patient ratio less than 1:100, and 39% of clinics did not have a paediatric dietician. Additionally, 78% of clinics lacked access to psychology/psychiatry services.
Routine measurements of glycated haemoglobin were performed in 86% of clinics, annual retinopathy screening in 80%, and microalbuminuria screening in 83%. However, only two clinics met all 10 previously published recommendations on standards of care. The survey concluded that there are continuing improvements in the organizational structure of services, but significant gaps remain in meeting NSF and NICE guidelines (PUBMED:15941771). The NSF for Diabetes was developed to address variations in diabetes care and includes standards such as patient empowerment and involvement in their own care. However, research indicates that many people with diabetes are not aware of these initiatives, suggesting a need for increased awareness among patients, nurses, and other health professionals about the services available for diabetes management in the community (PUBMED:20647980). Overall, while there have been improvements in diabetes care services in the UK, the fourth national survey indicates that not all NSF standards and NICE guidelines are being met, highlighting the need for ongoing efforts to improve service provision and patient outcomes.
Instruction: Is L-glutathione more effective than L-glutamine in preventing enteric diabetic neuropathy? Abstracts: abstract_id: PUBMED:38004451 Rubia cordifolia L. Attenuates Diabetic Neuropathy by Inhibiting Apoptosis and Oxidative Stress in Rats. Background: Diabetic neuropathy is a debilitating manifestation of long-term diabetes mellitus. The present study explored the effects of the roots of Rubia cordifolia L. (R. cordifolia L.) in the Wistar rat model for diabetic neuropathy and possible neuroprotective, antidiabetic, and analgesic mechanisms underlying this effect. Materials And Methods: Rats were divided into five experimental groups. An amount of 0.25% carboxy methyl cellulose (CMC) in saline and streptozotocin (STZ) (60 mg/kg) was given to group 1 and group 2, respectively. Group 3 was treated with STZ and glibenclamide simultaneously while groups 4 and 5 were simultaneously treated with STZ and hydroalcoholic extract of the root of R. cordifolia, respectively. Hot plate and cold allodynias were used to evaluate the pain threshold. The antioxidant effects of R. cordifolia were assessed by measuring Thiobarbituric acid reactive substances (TBARS), reduced glutathione (GSH), catalase (CAT), and superoxide dismutase (SOD). At the end of the study, sciatic nerve and brain tissues were collected for histopathological study. Bcl-2 proteins, cleaved caspase-3, and Bax were assessed through the Western blot method. Results: R. cordifolia significantly attenuated paw withdrawal and tail flick latency in diabetic neuropathic rats. R. cordifolia significantly (p < 0.01) improved the levels of oxidative stress. It was found to decrease blood glucose levels and to increase animal weight in R. cordifolia-treated groups. Treatment with R. cordifolia suppressed the cleaved caspase-3 and reduced the Bax:Bcl2 ratio in sciatic nerve and brain tissue compared to the diabetic group. Histopathological analysis also revealed a marked improvement in architecture and loss of axons in brain and sciatic nerve tissues at a higher dose of R. cordifolia (400 mg/kg). Conclusion: R. cordifolia attenuated diabetic neuropathy through its antidiabetic and analgesic properties by ameliorating apoptosis and oxidative stress. abstract_id: PUBMED:36117567 Phytochemical and anti-neuropathic investigations of Crocus sativus via alleviating inflammation, oxidative stress and pancreatic beta-cells regeneration. Objective: The aim of this study is to investigate the phytochemical and the long-term anti-neuropathic potentials of Crocus sativus cultivated in the University botanical garden, and explore its most bioactive compounds and their underlying mechanisms of action. Methods: Phytochemical analysis and bio-guided isolation procedures including RP-HPLC and 1H and 13C NMR utilizing biological models of diabetes, inflammation, and diabetic neuropathy were used. Cultivated saffron (S-RCED) and Spanish saffron stigma (S-SP) alone or in combination with Camellia sinus (CS) were investigated. Results: The RP-HPLC analyses showed the presence of picrocrocin, crocin I, crocin II, crocin I', crocin II', and safranal (SAF) in both S-SP and S-RCED extracts at higher concentrations. SAF was shown to be the most bioactive compound in Crocus sativus. Both S-SP and S-RCED possessed significant (P < 0.05) anti-diabetic activities in acute (6 h), subchronic (8 d) and chronic (8 weeks) models. S-RCED was shown to have greater hypoglycemic potential than S-SP and SAF.
S-SP, S-RCED, and SAF produced significant anti-inflammatory and anti-nociceptive activities against carrageenan-induced inflammatory, hyperalgesic and tactile diabetic-neuropathy models, respectively. S-SP, S-RCED, and SAF elevated serum catalase, reduced glutathione, and insulin serum levels, ameliorated lipid peroxidation and HbA1c levels, and histopathologically regenerated the pancreatic beta-cells. Combinations with CS showed more significant efficacy than the single component. Conclusion: The oxidative stress reduction, insulin secretagogue, and pancreatic beta-cells regeneration potentials might be responsible for the mechanism underlying the anti-diabetic, anti-inflammatory and anti-diabetic neuropathy activities. Thus, the cultivated Crocus sativus might be clinically useful for protecting against many serious-disorders. abstract_id: PUBMED:23441480 Benefits of early glycemic control by insulin on sensory neuropathy and cataract in diabetic rats. While there is an emphasis on the early glycemic control for its long-term benefits in preventing microvascular complications of diabetes, the biochemical mechanisms responsible for the long-lasting effects are not clearly understood. Therefore the impact of early insulin (EI) versus late insulin (LI) treatment on diabetic sensory neuropathy and cataract in streptozotocin-induced diabetic Wistar male rats were evaluated. EI group received insulin (2.5 IU/animal, once daily) treatment from day 1 to 90 while LI group received insulin from day 60 to 90. Early insulin treatment significantly reduced the biochemical markers like glucose, triglyceride, glycated hemoglobin, thiobarbituric acid reactive substances, advanced glycation end products and ratio of reduced glutathione and oxidized glutathione in diabetic rats. The late insulin treatment failed to resist the biochemical changes in diabetic rats. Diabetic rats developed sensory neuropathy as evidenced by mechanical and thermal hyperalgesia and showed a higher incidence and severity of cataract as revealed by slit lamp examination. Early insulin treatment protected the rats from the development of neuropathy and cataract, but late insulin administration failed to do so. The results demonstrate the benefits of early glycemic control in preventing neuropathy and cataract development in diabetic rats. abstract_id: PUBMED:8734302 Oxidative stress in Parkinson's disease and other neurodegenerative disorders. The cause of cell death in neurodegenerative diseases remains unknown but the formation of free radicals and the occurrence of oxidative stress may be a common component of many, if not all, such disorders. For example, in substantia nigra in Parkinson's diseases key alterations occur, in iron handling, mitochondrial function and antioxidant defences, particularly reduced glutathione. These indices of oxidative stress are accompanied by evidence of free radical mediated damage in the form of increased lipid peroxidation and oxidation of DNA bases. The alterations in oxidative stress occurring in Parkinson's disease appear not be related to the administration of L-DOPA. Some alterations of oxidative stress are found in other basal ganglia in degenerative disorders (multiple system atrophy, progressive supranuclear palsy, Huntington's disease) but these have not been investigated to the same extent. 
Similarly, examination of biochemical changes occurring in Alzheimer's disease, motor neurone disease and diabetic neuropathy also suggests the involvement of free radical mediated mechanisms as a component of neurodegeneration. It is probable that irrespective of the primary cause of an individual neurodegenerative disorder, the onset of oxidative stress is a common mechanism by which neuronal death occurs and which contributes to disease progression. Clearly, therapeutic strategies aimed at limiting free radical production and oxidative stress and/or damage may slow the advance of neurodegenerative disease. abstract_id: PUBMED:20043035 Alpha-lipoic Acid and diabetic neuropathy. Diabetic neuropathy presents a major public health problem. It is defined by the symptoms and signs of peripheral nerve dysfunction in diabetic patients, in whom other causes of neuropathy have been excluded. Pathogenetic mechanisms that have been implicated in diabetic neuropathy are: a) increased flux through the polyol pathway, leading to accumulation of sorbitol, a reduction in myo-inositol, and an associated reduced Na+-K+-ATPase activity, and b) endoneurial microvascular damage and hypoxia due to nitric oxide inactivation by increased oxygen free radical activity. Alpha-lipoic acid seems to delay or reverse peripheral diabetic neuropathy through its multiple antioxidant properties. Treatment with alpha-lipoic acid increases reduced glutathione, an important endogenous antioxidant. In clinical trials, 600 mg alpha-lipoic acid has been shown to improve neuropathic deficits. This review focuses on the relationship of alpha-lipoic acid and auto-oxidative glycosylation. It discusses the impact of alpha-lipoic acid on hyperglycemia-induced oxidative stress, and examines the role of alpha-lipoic acid in preventing the glycation process and nerve hypoxia. abstract_id: PUBMED:19597293 A splice variant of GNB3 and peripheral polyneuropathy in type 1 diabetes. Abnormalities in G protein-mediated signal transduction could be involved in the pathogenesis of diabetic polyneuropathy (DPN). Here we test whether the GNB3 C825T variant confers susceptibility to DPN in type 1 diabetes (T1D) mellitus. The C825T marker of GNB3 was genotyped in genomic DNA from blood isolated from a total of 213 Russian T1D patients, 100 of whom had DPN. Compared to carriers of the wild-type genotype C/C, diabetic subjects with genotypes T/T had a significantly increased risk of developing DPN (Odds Ratio (OR) of 4.4, p = 0.001). The adjustment for confounders (age, sex, body mass index, cigarette smoking, and level of reduced glutathione) resulted in an increase of the OR value up to 4.72 (p = 8.9 x 10^-3). The further adjustment for hypertension abolished the association between the GNB3 C825T variant and DPN (OR = 1.95, p = 0.18). Non-complicated subjects homozygous for T/T showed decreased levels of reduced glutathione (T/T: 69 +/- 19 vs. C/T: 74 +/- 19 vs. C/C: 77 +/- 17 micromol/l, p = 0.009). Compared to other GNB3 variants, carriers of the T/T genotype had elevated systolic blood pressure (SBP) in complicated (T/T: 115.8 +/- 9.1 vs. C/T: 113.3 +/- 8.2 vs. C/C: 109.5 +/- 8.7 mm/Hg, p = 0.036) and non-complicated T1D patients (T/T: 118.1 +/- 8.4 vs. C/T: 116.9 +/- 7.9 vs. C/C: 112.1 +/- 7.2 mm/Hg, p = 0.02). However, the significance of the association with the C825T polymorphism was lost after adjustment for confounding risk factors.
In conclusion, the 825T allele of GNB3 is likely to accelerate the development of DPN through primary effects to SBP and hypertension in subgroups of diabetic patients with impaired neurovascular function and advanced oxidative stress. abstract_id: PUBMED:27956549 Loss of Glyoxalase 1 Induces Compensatory Mechanism to Achieve Dicarbonyl Detoxification in Mammalian Schwann Cells. The glyoxalase system is a highly specific enzyme system existing in all mammalian cells that is responsible for the detoxification of dicarbonyl species, primarily methylglyoxal (MG). It has been implicated to play an essential role in preventing the increased formation of advanced glycation end products under certain pathological conditions. We have established the first glyoxalase 1 knock-out model (GLO1-/-) in mammalian Schwann cells using the CRISPR/Cas9 technique to investigate compensatory mechanisms. Neither elevated concentrations of MG nor associated protein modifications were observed in GLO1-/- cells. Alternative detoxification of MG in GLO1-/- is achieved by increased catalytic efficiency of aldose reductase toward hemithioacetal (product of glutathione and MG), which is most likely caused by S-nitrosylation of aldose reductase. The hemithioacetal is mainly converted into lactaldehyde, which is paralleled by a loss of reduced glutathione. Inhibition of aldose reductase in GLO1-/- cells is associated with an increased sensitivity against MG, elevated intracellular MG levels, associated modifications, as well as increased oxidative stress. Our data suggest that aldose reductase can compensate for the loss of GLO1. This might be of clinical importance within the context of neuronal diseases caused by an impaired glyoxalase system and elevated levels of dicarbonyl species, such as MG. abstract_id: PUBMED:30536411 Study of nuclear factor-2 erythroid related factor-2 activator, berberine, in paclitaxel induced peripheral neuropathy pain model in rats. Objectives: The role of nuclear factor-2 erythroid related factor-2 (Nrf2) activator, berberine (BBR), has been established in rat model of streptozotocin induced diabetic neuropathy. Around 30-40% of cancer patients, on paclitaxel (PTX) chemotherapy develop peripheral neuropathy. The present study was contemplated with the aim of establishing the neuropathy preventive role of BBR, in paclitaxel induced peripheral neuropathy model in rats. Methods: A total of 30 Wistar rats were divided into five groups as follows: Group I: dimethyl sulfoxide; Group II: PTX+ 0.9% NaCl; Group III: Amitriptyline (ATL) + PTX; Group IV: BBR (10 mg/kg) + PTX and Group V: BBR (20 mg/kg) + PTX. Animals were assessed for tail flick latency, tail cold allodynia latency, histopathological scores, oxidative stress parameters, and mRNA expression of the Nrf2 gene in the sciatic nerve. Key Findings: Berberine significantly increased the tail flick and tail cold allodynia latencies and significantly decreased the histopathological score. BBR reduced oxidative stress by significantly decreasing the lipid peroxidation, increasing the superoxide dismutase and reduced glutathione levels in the sciatic nerve. BBR also increased the mRNA expression of Nrf2 gene in rat sciatic nerve. Conclusions: All of these results showed the neuropathy preventing role of BBR in PTX induced neuropathy pain model in rats. 
abstract_id: PUBMED:25427889 The Role of Oxidative Stress in Diabetic Neuropathy: Generation of Free Radical Species in the Glycation Reaction and Gene Polymorphisms Encoding Antioxidant Enzymes to Genetic Susceptibility to Diabetic Neuropathy in Population of Type I Diabetic Patients. Diabetic neuropathy (DN) represents the main cause of morbidity and mortality among diabetic patients. Clinical data support the conclusion that the severity of DN is related to the frequency and duration of hyperglycemic periods. The presented experimental and clinical evidence proposes that changes in cellular function resulting in oxidative stress act as a leading factor in the development and progression of DN. Hyperglycemia- and dyslipidemia-driven oxidative stress is a major contributor, enhanced by advanced glycation end product (AGE) formation and polyol pathway activation. There are several polymorphous pathways that lead to oxidative stress in the peripheral nervous system in chronic hyperglycemia. This article demonstrates the origin of oxidative stress derived from glycation reactions and genetic variations within the antioxidant genes which could be implicated in the pathogenesis of DN. In the diabetic state, unchecked superoxide accumulation and resultant increases in polyol pathway activity, AGEs accumulation, protein kinase C activity, and hexosamine flux trigger a feed-forward system of progressive cellular dysfunction. In nerve, this confluence of metabolic and vascular disturbances leads to impaired neural function and loss of neurotrophic support, and over the long term, can mediate apoptosis of neurons and Schwann cells, the glial cells of the peripheral nervous system. In this article, we consider AGE-mediated reactive oxygen species (ROS) generation as a pathogenesis factor in the development of DN. It is likely that oxidative modification of proteins and other biomolecules might be the consequence of local generation of superoxide on the interaction of the residues of L-lysine (and probably other amino acids) with α-ketoaldehydes. This phenomenon of non-enzymatic superoxide generation might be an element of autocatalytic intensification of the pathophysiological action of carbonyl stress. Glyoxal and methylglyoxal formed during metabolic pathways are detoxified by the glyoxalase system with reduced glutathione as co-factor. The concentration of reduced glutathione may be decreased by oxidative stress and by decreased in situ glutathione reductase activity in diabetes mellitus. Genetic variations within the antioxidant genes therefore could be implicated in the pathogenesis of DN. In this work, the supporting data about the association between the -262T>C polymorphism of the catalase (CAT) gene and DN were shown. The -262TT genotype of the CAT gene was significantly associated with higher erythrocyte catalase activity in blood of DN patients compared to the -262CC genotype (17.8 ± 2.7 × 10^4 IU/g Hb vs. 13.5 ± 3.2 × 10^4 IU/g Hb, P = 0.0022).
The role of these factors in the development of diabetic complications and the prospective prevention of DN by supplementation in formulations of transglycating imidazole-containing peptide-based antioxidants (non-hydrolyzed carnosine, carcinine, n-acetylcarcinine) scavenging ROS in the glycation reaction, modifying the activity of enzymic and non-enzymic antioxidant defenses that participate in metabolic processes with the ability to control at transcriptional levels the differential expression of several genes encoding antioxidant enzymes inherent to DN in Type I Diabetic patients, now deserves investigation. abstract_id: PUBMED:9285497 The roles of oxidative stress and antioxidant treatment in experimental diabetic neuropathy. Oxidative stress is present in the diabetic state. Our work has focused on its presence in peripheral nerves. Antioxidant enzymes are reduced in peripheral nerves and are further reduced in diabetic nerves. That lipid peroxidation will cause neuropathy is supported by evidence of the development of neuropathy de novo when normal nerves are rendered alpha-tocopherol deficient and by the augmentation of the conduction deficit in diabetic nerves subjected to this insult. Oxidative stress appears to be primarily due to the processes of nerve ischemia and hyperglycemia auto-oxidation. The indexes of oxidative stress include an increase in nerve, dorsal root, and sympathetic ganglia lipid hydroperoxides and conjugated dienes. The most reliable and sensitive index, however, is a reduction in reduced glutathione. Experimental diabetic neuropathy results in myelinopathy of dorsal roots and a vacuolar neuropathy of dorsal root ganglion. The vacuoles are mitochondrial; we posit that lipid peroxidation causes mitochondrial DNA mutations that increase reduced oxygen species, causing further damage to mitochondrial respiratory chain and function and resulting in a sensory neuropathy. Alpha-lipoic acid is a potent antioxidant that prevents lipid peroxidation in vitro and in vivo. We evaluated the efficacy of the drug in doses of 20, 50, and 100 mg/kg administered intraperitoneally in preventing the biochemical, electrophysiological, and nerve blood flow deficits in the peripheral nerves of experimental diabetic neuropathy. Alpha-lipoic acid dose- and time-dependently prevented the deficits in nerve conduction and nerve blood flow and biochemical abnormalities (reductions in reduced glutathione and lipid peroxidation). The nerve blood flow deficit was 50% (P < 0.001). Supplementation dose-dependently prevented the deficit; at the highest concentration, nerve blood flow was not different from that of control nerves. Digital nerve conduction underwent a dose-dependent improvement at 1 month (P < 0.05). By 3 months, all treated groups had lost their deficit. The antioxidant drug is potentially efficacious for human diabetic sensory neuropathy. Answer: The provided abstracts do not directly compare the effectiveness of L-glutathione and L-glutamine in preventing enteric diabetic neuropathy. However, several abstracts discuss the role of oxidative stress in diabetic neuropathy and the importance of reduced glutathione (a form of L-glutathione) as an antioxidant in this context. For instance, Rubia cordifolia L. was found to significantly improve levels of oxidative stress markers, including reduced glutathione (GSH), in diabetic neuropathic rats (PUBMED:38004451).
Similarly, alpha-lipoic acid, which increases reduced glutathione levels, has been shown to improve neuropathic deficits in clinical trials (PUBMED:20043035). The role of oxidative stress in neurodegenerative disorders, including diabetic neuropathy, is further emphasized, with reduced glutathione being a key antioxidant defense (PUBMED:8734302). Additionally, genetic variations in antioxidant genes, which could affect reduced glutathione levels, have been implicated in the susceptibility to diabetic neuropathy (PUBMED:25427889). On the other hand, L-glutamine is not specifically mentioned in the context of diabetic neuropathy in the provided abstracts. Therefore, based on the available information, it is not possible to determine whether L-glutathione is more effective than L-glutamine in preventing enteric diabetic neuropathy. More research directly comparing these two substances would be needed to make such a determination.
Instruction: Dinitrogen fixation associated with shoots of aquatic carnivorous plants: is it ecologically important? Abstracts: abstract_id: PUBMED:24817095 Dinitrogen fixation associated with shoots of aquatic carnivorous plants: is it ecologically important? Background And Aims: Rootless carnivorous plants of the genus Utricularia are important components of many standing waters worldwide, as well as suitable model organisms for studying plant-microbe interactions. In this study, an investigation was made of the importance of microbial dinitrogen (N2) fixation in the N acquisition of four aquatic Utricularia species and another aquatic carnivorous plant, Aldrovanda vesiculosa. Methods: 16S rRNA amplicon sequencing was used to assess the presence of micro-organisms with known ability to fix N2. Next-generation sequencing provided information on the expression of N2 fixation-associated genes. N2 fixation rates were measured following (15)N2-labelling and were used to calculate the plant assimilation rate of microbially fixed N2. Key Results: Utricularia traps were confirmed as primary sites of N2 fixation, with up to 16% of the plant-associated microbial community consisting of bacteria capable of fixing N2. Of these, rhizobia were the most abundant group. Nitrogen fixation rates increased with increasing shoot age, but never exceeded 1.3 μmol N g(-1) dry mass d(-1). Plant assimilation rates of fixed N2 were detectable and significant, but this fraction formed less than 1% of daily plant N gain. Although trap fluid provides conditions favourable for microbial N2 fixation, levels of nif gene transcription comprised <0.01% of the total prokaryotic transcripts. Conclusions: It is hypothesized that the reason for limited N2 fixation in aquatic Utricularia, despite the large potential capacity, is the high concentration of NH4-N (2.0-4.3 mg L(-1)) in the trap fluid. Resulting from fast turnover of organic detritus, it probably inhibits N2 fixation in most of the microorganisms present. Nitrogen fixation is not expected to contribute significantly to N nutrition of aquatic carnivorous plants under their typical growth conditions; however, on an annual basis the plant-microbe system can supply nitrogen in the order of hundreds of mg m(-2) into the nutrient-limited littoral zone, where it may thus represent an important N source. abstract_id: PUBMED:36215097 Cytokinins and auxins in organs of aquatic carnivorous plants: what do they reflect? Background And Aims: Aquatic carnivorous plants have typical rootless linear shoots bearing traps and exhibit steep physiological polarity with rapid apical growth. The aim was to analyse auxin and cytokinin metabolites in traps, leaves/shoots and shoot apices in several species of genera Aldrovanda and Utricularia to elucidate how the hormonal profiles reflect the specific organ functions and polarity. Methods: The main auxin and cytokinin metabolites were analysed in miniature samples (>2 mg dry weight) of different organs of Aldrovanda vesiculosa and six Utricularia species using ultraperformance liquid chromatography coupled with triple quadrupole mass spectrometry. Key Results: Total contents of biologically active forms (free bases, ribosides) of all four main endogenously occurring cytokinin types were consistently higher in traps than in leaves in four Utricularia species with monomorphic shoots and/or higher than in shoots in two Utricularia species with dimorphic shoots.
In Aldrovanda traps, the total content of different cytokinin forms was similar to or lower than that in shoots. In U. australis leaves, feeding on prey increased all cytokinin forms, while no consistent differences occurred in Aldrovanda. In four aquatic Utricularia species with monomorphic shoots, the content of four auxin forms was usually higher in traps than in leaves. Zero IAA content was determined in U. australis leaves from a meso-eutrophic site or when prey-fed. Conclusions: Different cytokinin and auxin profiles estimated in traps and leaves/shoots of aquatic carnivorous plants indicate an association with different dominant functions of these organs: nutrient uptake by traps versus photosynthetic function of traps. Interplay of cytokinins and auxins regulates apical dominance in these plants possessing strong polarity. abstract_id: PUBMED:34250740 Theoretical Assessment of Dinitrogen Fixation on Carbon Atom. Dinitrogen activation in non-metallic systems has received considerable attention in recent years. Herein, we report the theoretical feasibility of N2 fixation using aminocarbenes (L) or their anionic derivatives. The molecular descriptors of L and anionic L- , which affect the interaction of L and anionic L- with N2 , were identified through multiple linear regression analysis. Additionally, the electron flow during C-N bond formation was confirmed by performing intrinsic reaction coordination calculations with intrinsic bond orbital analysis for the reaction of anionic L- with N2 . abstract_id: PUBMED:24770880 Cytochemical and ultrastructural aspects of aquatic carnivorous plant turions. Turions, which are modified shoot apices, are vegetative, dormant overwintering organs produced by perennial aquatic plants. In this study, the turion cytochemistry and ultrastructure of Aldrovanda vesiculosa, Utricularia vulgaris and U. stygia were compared with particular emphasis placed on storage substances. These three aquatic, rootless carnivorous plant species were studied at the end of their winter dormancy. At this stage, the turions of all species had starch as their main storage material. In contrast with A. vesiculosa, Utricularia turions were rich in protein storage vacuoles, and proteins were also accumulated as crystalline inclusions in the nuclei. All examined species accumulated lipid droplets in cells of epidermal glands. abstract_id: PUBMED:27098087 Cytokinin, auxin and physiological polarity in the aquatic carnivorous plants Aldrovanda vesiculosa and Utricularia australis. Background And Aims: The typical rootless linear shoots of aquatic carnivorous plants exhibit clear, steep polarity associated with very rapid apical shoot growth. The aim of this study was to determine how auxin and cytokinin contents are related to polarity and shoot growth in such plants. Methods: The main auxin and cytokinin metabolites in separated shoot segments and turions of two carnivorous plants, Aldrovanda vesiculosa and Utricularia australis, were analysed using ultra-high-performance liquid chromatography coupled with triple quad mass spectrometry. Key Results: In both species, only isoprenoid cytokinins were identified. Zeatin cytokinins predominated in the apical parts, with their concentrations decreasing basipetally, and the trans isomer predominated in A. vesiculosa whereas the cis form was more abundant in U australis. Isopentenyladenine-type cytokinins, in contrast, increased basipetally. Conjugated cytokinin metabolites, the O-glucosides, were present at high concentrations in A. 
vesiculosa but only in minute amounts in U. australis. N(9)-glucoside forms were detected only in U. australis, with isopentenyladenine-9-glucoside (iP9G) being most abundant. In addition to free indole-3-acetic acid (IAA), indole-3-acetamide (IAM), IAA-aspartate (IAAsp), IAA-glutamate (IAGlu) and IAA-glycine (IAGly) conjugates were identified. Conclusions: Both species show common trends in auxin and cytokinin levels, the apical localization of the cytokinin biosynthesis and basipetal change in the ratio of active cytokinins to auxin, in favour of auxin. However, our detailed study of cytokinin metabolic profiles also revealed that both species developed different regulatory mechanisms of active cytokinin content; on the level of their degradation, in U. australis, or in the biosynthesis itself, in the case of A. vesiculosa. Results indicate that the rapid turnover of these signalling molecules along the shoots is essential for maintaining the dynamic balance between the rapid polar growth and development of the apical parts and senescence of the older, basal parts of the shoots. abstract_id: PUBMED:637550 Biological dinitrogen fixation (acetylene reduction) associated with Florida mangroves. Biological dinitrogen fixation in mangrove communities of the Tampa Bay region of South Florida was investigated using the acetylene reduction technique. Low rates of acetylene reduction (0.01 to 1.84 nmol of C(2)H(4)/g [wet weight] per h) were associated with plant-free sediments, while plant-associated sediments gave rise to slightly higher rates. Activity in sediments increased greatly upon the addition of various carbon sources, indicating an energy limitation for nitrogenase (C(2)H(2)) activity. In situ determinations of dinitrogen fixation in sediments also indicated low rates and exhibited a similar response to glucose amendment. Litter from the green macroalga, Ulva spp., mangrove leaves, and sea grass also gave rise to significant rates of acetylene reduction. Higher rates of nitrogenase activity (15 to 53 nmol of C(2)H(4)/g [wet weight] per h) were associated with washed excised roots of three Florida mangrove species [Rhizophora mangle L., Avicennia germinans (L) Stern, and Laguncularia racemosa Gaertn.] as well as with isolated root systems of intact plants (11 to 58 μg of N/g [dry weight] per h). Following a short lag period, root-associated activity was linear and did not exhibit a marked response to glucose amendment. It appears that dinitrogen-fixing bacteria in the mangrove rhizoplane are able to use root exudates and/or sloughed cell debris as energy sources for dinitrogen fixation. abstract_id: PUBMED:28436068 Dinitrogen Splitting Coupled to Protonation. The coupling of electron- and proton-transfer steps provides a general concept to control the driving force of redox reactions. N2 splitting of a molybdenum dinitrogen complex into nitrides coupled to a reaction with Brønsted acid is reported. Remarkably, our spectroscopic, kinetic, and computational mechanistic analysis attributes N-N bond cleavage to protonation in the periphery of the amide pincer ligands rather than the {Mo-N2-Mo} core. The strong effect on electronic structure and ultimately the thermochemistry and kinetic barrier of N-N bond cleavage is an unusual case of a proton-coupled metal-to-ligand charge transfer process, highlighting the use of proton-responsive ligands for nitrogen fixation.
abstract_id: PUBMED:31825596 Soybeans Grown with Carbonaceous Nanomaterials Maintain Nitrogen Stoichiometry by Assimilating Soil Nitrogen to Offset Impaired Dinitrogen Fixation. Engineered nanomaterials (ENMs) can enter agroecosystems because of their widespread use and disposal. Within soil, ENMs may affect legumes and their dinitrogen (N2) fixation, which are critical for food supply and N-cycling. Prior research focusing on end point treatment effects has reported that N2-fixing symbioses in an important food legume, soybean, can be impaired by ENMs. Yet, it remains unknown how ENMs can influence the actual amounts of N2 fixed and what plant total N contents are since plants can also acquire N from the soil. We determined the effects of one already widespread and two rapidly expanding carbonaceous nanomaterials (CNMs: carbon black, multiwalled carbon nanotubes, and graphene; each at three concentrations) on the N economy of soil-grown soybeans. Unlike previous studies, this research focused on processes and interactions within a plant-soil-microbial system. We found that total plant N accumulation was unaffected by CNMs. However, as shown by 15N isotope analyses, CNMs significantly diminished soybean N2 fixation (by 31-78%). Plants maintained N stoichiometry by assimilating compensatory N from the soil, accompanied by increased net soil N mineralization. Our findings suggest that CNMs could undermine the role of legume N2 fixation in supplying N to agroecosystems. Maintaining productivity in leguminous agriculture experiencing such effects would require more fossil-fuel-intensive N fertilizer and increase associated economic and environmental costs. This work highlights the value of a process-based analysis of a plant-soil-microbial system for assessing how ENMs in soil can affect legume N2 fixation and N-cycling. abstract_id: PUBMED:32308728 Agrobacterium tumefaciens mediated transformation of the aquatic carnivorous plant Utricularia gibba. Background: The genus Utricularia belongs to Lentibulariaceae, the largest family of carnivorous plants, which includes terrestrial, epiphytic and aquatic species. The development of specialized structures that evolved for carnivory is a feature of this genus that has been of great interest to biologists since Darwin's early studies. Utricularia gibba is itself an aquatic plant with sophisticated bladder traps having one of the most complex suction mechanisms for trapping prey. However, the molecular characterization of the mechanisms that regulate trap development and the biophysical processes involved in prey trapping are still largely unknown due to the lack of a simple and reproducible gene transfer system. Results: Here, we report the establishment of a simple, fast and reproducible protocol for genetic transformation of U. gibba based on the T-DNA of Agrobacterium tumefaciens. An in vitro selection system using Phosphinotricin as a selective agent was established for U. gibba. Plant transformation was confirmed by histochemical GUS assays and PCR and qRT-PCR analyses. We report on the expression pattern of the 35S promoter and of the promoter of a trap-specific ribonuclease gene in transgenic U. gibba plants. Conclusions: The genetic transformation protocol reported here is an effective method for studying developmental biology and functional genomics of this genus of carnivorous plants and advances the utility of U. gibba as a model system to study developmental processes involved in trap formation. 
abstract_id: PUBMED:30327648 Effects of Water Column Mixing and Stratification on Planktonic Primary Production and Dinitrogen Fixation on a Northern Red Sea Coral Reef. The northern Red Sea experiences strong annual differences in environmental conditions due to its relative high-latitude location for coral reefs. This allows the study of regulatory effects by key environmental parameters (i.e., temperature, inorganic nutrient, and organic matter concentrations) on reef primary production and dinitrogen (N2) fixation, but related knowledge is scarce. Therefore, this study measured environmental parameters, primary production and N2 fixation of phytoplankton groups in the water overlying a coral reef in the Gulf of Aqaba. To this end, we used a comparative approach between mixed and stratified water column scenarios in a full year of seasonal observations. Findings revealed that inorganic nutrient concentrations were significantly higher in the mixed compared to the stratified period. While gross photosynthesis and N2 fixation rates remained similar, net photosynthesis decreased from mixed to stratified period. Net heterotrophic activity of the planktonic community increased significantly during the stratified compared to the mixed period. While inorganic nitrogen (N) availability was correlated with net photosynthesis over the year, N2 fixation only correlated with N availability during the mixed period. This emphasizes the complexity of planktonic trophodynamics in northern Red Sea coral reefs. Comparing mixed and stratified planktonic N2 fixation rates with those of benthic organisms and substrates revealed a close seasonal activity similarity between free-living pelagic and benthic diazotrophs. During the mixed period, N2 fixation potentially contributed up to 3% of planktonic primary production N demand. This contribution increased by ca. one order of magnitude to 21% during the stratified period. Planktonic N2 fixation is likely a significant N source for phytoplankton to maintain high photosynthesis under oligotrophic conditions in coral reefs, especially during stratified conditions. Answer: Dinitrogen fixation associated with shoots of aquatic carnivorous plants, such as those from the genus Utricularia and Aldrovanda vesiculosa, has been studied to determine its ecological importance. The research found that Utricularia traps are primary sites of N2 fixation, with a significant portion of the plant-associated microbial community consisting of bacteria capable of fixing N2, with rhizobia being the most abundant group (PUBMED:24817095). However, the rates of nitrogen fixation were relatively low and did not exceed 1.3 μmol N g(-1) dry mass d(-1), and the plant assimilation rates of fixed N2 were detectable but accounted for less than 1% of the daily plant N gain (PUBMED:24817095). It is hypothesized that the high concentration of NH4-N in the trap fluid, resulting from the fast turnover of organic detritus, likely inhibits N2 fixation in most of the microorganisms present (PUBMED:24817095). Despite the limited contribution of N2 fixation to the nitrogen nutrition of aquatic carnivorous plants under typical growth conditions, the plant-microbe system can supply nitrogen in the order of hundreds of mg m(-2) annually into the nutrient-limited littoral zone, potentially representing an important nitrogen source in such environments (PUBMED:24817095). 
Therefore, while N2 fixation may not significantly contribute to the immediate nitrogen needs of aquatic carnivorous plants, it can have ecological relevance by contributing to the nitrogen budget of the surrounding ecosystem, particularly in nutrient-poor aquatic environments.
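The "hundreds of mg m(-2)" annual figure quoted above follows from a simple unit conversion of the reported per-gram fixation rate. The sketch below is a minimal back-of-envelope check of that order of magnitude; the fixation rate is the upper bound from PUBMED:24817095, while the stand biomass and growing-season length are illustrative assumptions that do not appear in the abstracts.

```python
# Back-of-envelope check (illustrative only): scaling a per-gram daily N2 fixation
# rate to an areal, seasonal nitrogen input. The fixation rate comes from
# PUBMED:24817095; the biomass and season length are ASSUMED values for illustration.

MOLAR_MASS_N = 14.007           # g N per mol N

fixation_rate_umol = 1.3        # µmol N per g dry mass per day (upper bound in the abstract)
assumed_biomass = 100.0         # g dry mass per m^2 of Utricularia stand (assumption)
assumed_season_days = 150       # growing-season length in days (assumption)

# Convert µmol N g^-1 d^-1 to mg N g^-1 d^-1.
rate_mg_per_g_day = fixation_rate_umol * 1e-6 * MOLAR_MASS_N * 1e3

# Scale to the stand and to the season.
areal_daily = rate_mg_per_g_day * assumed_biomass        # mg N m^-2 d^-1
areal_seasonal = areal_daily * assumed_season_days       # mg N m^-2 per season

print(f"per-gram rate : {rate_mg_per_g_day:.4f} mg N g^-1 d^-1")
print(f"areal daily   : {areal_daily:.2f} mg N m^-2 d^-1")
print(f"areal seasonal: {areal_seasonal:.0f} mg N m^-2")  # ~270 mg m^-2 under these assumptions
```

Under these assumed values the seasonal input works out to roughly 270 mg N m(-2), consistent with the "hundreds of mg m(-2)" order of magnitude stated in the abstract; different biomass or season-length assumptions would shift the figure proportionally.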
Instruction: Ultrasound elastography: is there a shear wave speed cutoff for pediatric liver fibrosis and inflammation? Abstracts: abstract_id: PUBMED:27840268 Ultrasound elastography: is there a shear wave speed cutoff for pediatric liver fibrosis and inflammation? Objective: The objective was to identify a shear wave speed (SWS) cutoff in pediatric liver disease to detect fibrosis and inflammation. Methods: We performed an institutional-review-board-approved prospective study of liver SWS in 41 children undergoing liver biopsy. Results: In patients without inflammation, SWS was higher when fibrosis was present compared to when fibrosis was not present (average SWS 1.8 vs. 1.4 m/s, respectively, P=.02). A SWS cutoff of 1.7 m/s had 100% positive predictive value and 24% negative predictive value for detecting fibrosis or inflammation. Conclusion: Point shear wave elastography is an insufficient stand-alone screening tool for liver fibrosis and inflammation in a heterogeneous patient population. abstract_id: PUBMED:28586157 Role of Point Shear Wave Elastography in the Determination of the Severity of Fibrosis in Pediatric Liver Diseases With Pathologic Correlations. Objectives: Our aims in this study were as follows: (1) to determine the cutoff value that can distinguish between advanced liver fibrosis and normal liver tissue for two different elastographic techniques; (2) to determine the cutoff value that can distinguish mild liver fibrosis from normal liver tissue for the techniques; and (3) to assess tissue stiffness in nonalcoholic fatty liver disease (NAFLD). Methods: Seventy-five patients assessed for liver biopsy on the same day were evaluated by point shear wave elastography. Thirty-one healthy children and 11 children with NAFLD were also evaluated. A 9L4 transducer with Virtual Touch quantification (VTQ) and Virtual Touch imaging and quantification (VTIQ) modes (Siemens Medical Solutions, Mountain View, CA) was used for quantification. Results: The shear wave speed of the patients with NAFLD was higher than that of the control group. The only predictive factor for VTQ and VTIQ was the histologic fibrosis score (model-adjusted R2 = 0.56 for VTQ and 0.75 for VTIQ). Shear wave speed cutoffs were 1.67 m/s for VTQ and 1.56 m/s for VTIQ in detecting fibrosis or inflammation and 2.09 m/s for VTQ and 2.17 m/s for VTIQ in discriminating children with low and high histologic liver fibrosis scores. Conclusions: The VTQ and VTIQ values reveal high-grade histopathologic fibrosis and have high success rates when distinguishing high- from low-grade fibrosis. However, they have limited success rates when differentiating low-grade fibrosis from normal liver tissue. abstract_id: PUBMED:29936025 Viscoelasticity Measurement in Rat Livers Using Shear-Wave US Elastography. To investigate the usefulness of shear-wave speed and dispersion slope measurements obtained, using an ultrasound elastography system in rat livers with various degrees of necroinflammation and fibrosis. A total of 25 male Sprague Dawley rats were randomly divided into 5 groups of 5 rats each: G0 (control), G1 (CCl4 injected twice a week for 1 wk), G2 (CCl4 injected four times a wk for 1 wk), G3 (CCl4 injected twice a wk for 6 wk) and G4 (CCl4 injected twice a wk for 10 wk). The shear-wave speed (m/s) and the dispersion slope ([m/s]/kHz) were measured. Histologic features (inflammation, necrosis and fibrosis) were used as reference standards. 
In multivariable analysis with histologic features as independent variables, the fibrosis grade was significantly related to shear-wave speed (p < 0.05) and the necrosis grade was significantly related to dispersion slope (p < 0.05). Dispersion slope is more useful than shear-wave speed for predicting the degree of necroinflammation. abstract_id: PUBMED:33926073 Quantification of Liver Fibrosis, Steatosis, and Viscosity Using Multiparametric Ultrasound in Patients with Non-Alcoholic Liver Disease: A "Real-Life" Cohort Study. Non-alcoholic fatty liver disease (NAFLD) is the most common chronic liver disease worldwide. This study aimed to evaluate the performance of four ultrasound-based techniques for the non-invasive multiparametric (MPUS) assessment of liver fibrosis (LF), steatosis (HS), and inflammation in patients with NAFLD. We included 215 consecutive adult patients with NAFLD (mean age: 54.9 ± 11.7; 54.5% were male), in whom LF, HS, and viscosity were evaluated in the same session using four new ultrasound-based techniques embedded on the Aixplorer MACH 30 system: ShearWave Elastography (2D-SWE.PLUS), Sound Speed Plane-wave UltraSound (SSp.PLUS), Attenuation Plane-wave UltraSound (Att.PLUS), and Viscosity Plane-wave UltraSound (Vi.PLUS). Transient Elastography (TE) with Controlled Attenuation Parameter (CAP) (FibroScan) was considered as the control. All elastographic measurements were performed according to guidelines. Valid liver stiffness measurements (LSM) were obtained in 98.6% of patients by TE, in 95.8% of patients by 2D-SWE.PLUS/Vi.PLUS, and in 98.1% of patients by Att.PLUS/SSp.PLUS, respectively. Therefore, 204 subjects were included in the final analysis. A strong correlation between LSMs by 2D-SWE.PLUS and TE (r = 0.89) was found. The best 2D-SWE.PLUS cut-off value for the presence of significant fibrosis (F ≥ 2) was 7 kPa. Regarding steatosis, SSp.PLUS correlated better than Att.PLUS with CAP values: (r = -0.74) vs. (r = 0.45). The best SSp.PLUS cut-off value for predicting the presence of significant steatosis was 1524 m/s. The multivariate regression analysis showed that Vi.PLUS values were associated with BMI and LSM by 2D-SWE.PLUS. In conclusion, MPUS was useful for assessing fibrosis, steatosis, and inflammation in a single examination in patients with NAFLD. abstract_id: PUBMED:35868906 Viscoelasticity assessment for in vivo quantification of muscle contusion injury in rats using shear wave elastography. The aim of the study described here was to investigate the role of viscoelasticity in assessing muscle fibrosis and inflammation in a rat model of contusion using quantitative shear wave elastography (SWE). Unilateral gastrocnemius muscle contusion was induced in 32 male rats using an impactor apparatus. The contralateral muscles served as the control group. SWE was applied to the control group and rats 1, 3, 14 and 21 d after successful modeling (each time point group, n = 8). Histologic features were used as reference standards. The degree of fibrosis was moderately correlated with shear wave speed (r = 0.53), whereas the degree of inflammation was well correlated with shear wave dispersion (SWD) slope (r = 0.74). The area under the receiver operating characteristic curve (AUC) for the dispersion slope for muscle inflammation and fibrosis assessment was 0.87 (95% confidence interval: 0.705-0.963), which exceeded that of the shear wave speed (0.68, 95% confidence interval: 0.494-0.834).
The larger decline in dispersion slope in the fibrotic stage than in the inflammation stage (1-d group vs. 14-d group or 21-d group, p < 0.05) indicated better predictive performance than the shear wave speed. abstract_id: PUBMED:25851300 Ultrasound shear wave speed measurements correlate with liver fibrosis in children. Background: Little published research has shown the relationship between noninvasive US shear wave speed (SWS) measurements and degree of liver fibrosis as established by percutaneous biopsy in children. Objective: To assess the relationship between liver US shear wave speed (SWS) measurements and parenchymal fibrosis in children. Materials And Methods: Sixty-two children (0-18 years old) with known or suspected liver disease underwent same-day US shear wave elastography (SWE) and clinically ordered percutaneous core needle biopsy. SWE was performed just before the liver biopsy in the area targeted for sampling, using an Acuson S3000 US system with a 9L4 transducer; six SWS measurements were acquired using Virtual Touch Quantification (VTQ) and Virtual Touch IQ (VTIQ) modes. Biopsy specimens were scored for histological fibrosis and inflammation. Bivariate relationships were assessed using Pearson correlation, while multiple linear regression analysis was used to establish the relationship between SWS and predictor variables. Receiver operating characteristic (ROC) curves were created to assess the abilities of VTQ and VTIQ to discern low vs. high liver fibrosis (histological fibrosis scores 0-2 vs. 3-6). Results: There were significant positive correlations between liver histological fibrosis score and VTQ (n = 49) and VTIQ (n = 48) mean shear wave speed measurements (r = 0.68 and r = 0.73; P-values <0.0001). There also were significant positive correlations between liver histological inflammation score and VTQ and VTIQ mean shear wave speed measurements (r = 0.47 and r = 0.44, and P = 0.0006 and P = 0.0016, respectively). For VTQ, both histological fibrosis (P < 0.0001) and inflammation (P = 0.04) scores were significant predictors of shear wave speed (model adjusted R2 = 0.49). For VTIQ, only histological fibrosis score (P < 0.0001) was a significant predictor of shear wave speed (model adjusted R2 = 0.56). ROC areas under the curve were 0.84 and 0.86 for VTQ and VTIQ, respectively. Conclusion: Liver US shear wave speed measurements increase with increasing parenchymal fibrosis in children. abstract_id: PUBMED:38137615 Shear Wave Elastography for Assessing Liver Stiffness in HCV-Infected Kidney Transplant Recipients after Direct-Acting Antiviral Treatment: A Comparative Study with Magnetic Resonance Elastography. Hepatitis C virus (HCV) infection can lead to hepatic fibrosis. The advent of direct-acting antivirals (DAAs) has substantially improved sustained virological response (SVR) rates. In this context, kidney transplant recipients (KTRs) are of particular interest due to their higher HCV infection rates and uncertain renal excretion and bioavailability of DAAs. We investigated liver stiffness after DAA treatment in 15 HCV-infected KTRs using ultrasound shear wave elastography (SWE) in comparison with magnetic resonance elastography (MRE). KTRs were treated with DAAs (daclatasvir and sofosbuvir) for three months and underwent SWE at baseline, end of therapy (EOT), and 3 (EOT+3) and 12 months (EOT+12) after EOT. Fourteen patients achieved SVR12.
Shear wave speed (SWS), as a surrogate parameter for tissue stiffness, was substantially lower at all three post-therapeutic timepoints compared with baseline (EOT: -0.42 m/s, p < 0.01, CI = -0.75 to -0.09; EOT+3: -0.43 m/s, p < 0.01, CI = -0.75 to -0.11; and EOT+12: -0.52 m/s, p < 0.001, CI = -0.84 to -0.19), suggesting liver regeneration after viral eradication and end of inflammation. Baseline SWS correlated positively with histopathological fibrosis scores (r = 0.48; CI = -0.11 to 0.85). Longitudinal results correlated moderately with APRI (r = 0.41; CI = 0.12 to 0.64) but not with FIB-4 scores (r = 0.12; CI = -0.19 to 0.41). Although higher on average, SWE-derived measurements correlated strongly with MRE (r = 0.64). In conclusion, SWE is suitable for non-invasive therapy monitoring in KTRs with HCV infection. abstract_id: PUBMED:36832276 How to Identify Advanced Fibrosis in Adult Patients with Non-Alcoholic Fatty Liver Disease (NAFLD) and Non-Alcoholic Steatohepatitis (NASH) Using Ultrasound Elastography-A Review of the Literature and Proposed Multistep Approach. Non-alcoholic fatty liver disease (NAFLD), and its progressive form, non-alcoholic steatohepatitis (NASH), represent, nowadays, real challenges for the healthcare system. Liver fibrosis is the most important prognostic factor for NAFLD, and advanced fibrosis is associated with higher liver-related mortality rates. Therefore, the key issues in NAFLD are the differentiation of NASH from simple steatosis and identification of advanced hepatic fibrosis. We critically reviewed the ultrasound (US) elastography techniques for the quantitative characterization of fibrosis, steatosis, and inflammation in NAFLD and NASH, with a specific focus on how to differentiate advanced fibrosis in adult patients. Vibration-controlled transient elastography (VCTE) is still the most utilized and validated elastography method for liver fibrosis assessment. The recently developed point shear wave elastography (pSWE) and two-dimensional shear wave elastography (2D-SWE) techniques that use multiparametric approaches could bring essential improvements to diagnosis and risk stratification. abstract_id: PUBMED:28988274 Ultrasound or MR elastography of liver: which one shall I use? Liver stiffness is now a well-established noninvasive biomarker for assessing fibrosis in chronic liver disease. MRI-based and ultrasound-based dynamic elastography techniques have been introduced for assessment of liver stiffness and are useful in clinical staging of hepatic fibrosis. Several different elastography techniques are now available, with each method having inherent strengths and limitations. The published literature generally indicates that MR elastography has a higher diagnostic performance and fewer technical failures than ultrasound-based elastography techniques in assessing hepatic fibrosis. There is also significant potential to further develop elastography techniques to implement multiparametric methods that have promise for distinguishing between processes such as inflammation, fibrosis, venous congestion, and portal hypertension that can result in increased liver stiffness. In this commentary, we compare MR and ultrasound elastography methods and their utility in clinical practice. abstract_id: PUBMED:34375562 Liver stiffness measurements by 2-dimensional shear wave elastography compared to histological and ultrasound parameters in primary biliary cholangitis.
Background And Aims: Liver stiffness measurements (LSMs) by 2-dimensional-shear-wave elastography (LSM2D-SWE) are now widely used in hepatology. However, relevant information for primary biliary cholangitis (PBC) is scant. We compare LSM2D-SWE with liver biopsy (LB) in a cohort of PBC patients in Greece. Methods: Data of 68 LBs from 53 PBC patients were retrospectively analyzed and fibrosis stage was compared to LSM2D-SWE. Forty-six patients (86.8%) were females and at the time of LB median (IQR) age was 62.6 (53.2-72.1). Demographic, UDCA treatment, histological and B-mode ultrasound data were tested for their influence on LSM2D-SWE estimates. Results: Liver fibrosis stages F0-F4 were found in 4, 19, 19, 16 and 10 cases, respectively. Across stages F0-F4, the LSM2D-SWE was 5.6 (5.1-6.1), 7.0 (5.8-7.7), 9.1 (7.3-11.5), 10.8 (9.9-12.2) and 14.5 (11.9-25.7) kPa, respectively, with highly significant difference (p<.001). The LSM2D-SWE differed also significantly between F0 vs. F1 (p=.027), F1 vs. F2 (p=.005) and F3 vs. F4 (p=.017). The discriminatory ability of LSM2D-SWE for mild, significant, severe fibrosis and cirrhosis was highly significant in all comparisons (p<.001), with AUC2D-SWE 95.3%, 87.4%, 85.3% and 95.3% and accuracy 89.7%, 85.3%, 80.9% and 86.8%, respectively. Among 21 parameters tested, significant predictors of LSM2D-SWE by multiple linear regression were fibrosis stage, portal inflammation and parenchymal heterogeneity. The portal inflammation grade accounted for 32.2% of LSM variation with adjusted R2 0.428. Conclusions: In patients with PBC, LSM measurements by 2D-SWE can reliably discriminate between mild, significant, severe fibrosis and cirrhosis. Measurements are significantly affected by portal inflammation grade. Answer: Yes, there is a shear wave speed (SWS) cutoff identified for pediatric liver fibrosis and inflammation using ultrasound elastography. In a study involving 41 children undergoing liver biopsy, it was found that a SWS cutoff of 1.7 m/s had a 100% positive predictive value and 24% negative predictive value for detecting fibrosis or inflammation (PUBMED:27840268). Another study determined that the SWS cutoffs were 1.67 m/s for Virtual Touch quantification (VTQ) and 1.56 m/s for Virtual Touch imaging and quantification (VTIQ) in detecting fibrosis or inflammation, and 2.09 m/s for VTQ and 2.17 m/s for VTIQ in discriminating children with low and high histologic liver fibrosis scores (PUBMED:28586157). However, it is important to note that point shear wave elastography is considered an insufficient stand-alone screening tool for liver fibrosis and inflammation in a heterogeneous patient population (PUBMED:27840268). Additionally, ultrasound shear wave speed measurements correlate with liver fibrosis in children, with significant positive correlations between liver histological fibrosis score and mean shear wave speed measurements (PUBMED:25851300).
Instruction: Does residual wall size or technique matter in the treatment of Zenker's diverticulum? Abstracts: abstract_id: PUBMED:27010588 Does residual wall size or technique matter in the treatment of Zenker's diverticulum? Objectives/hypothesis: We aimed to compare three surgical techniques (open approach for diverticulectomy with cricopharyngeal myotomy [OA], endoscopic laser-assisted diverticulotomy [ELD], and endoscopic stapler-assisted diverticulotomy [ESD]) for treatment of Zenker's diverticulum with regard to validated swallowing outcomes, radiographic outcomes, complications, and revision rates. We statistically analyzed whether the size of residual postoperative party wall or the specific surgical technique correlates with swallowing outcomes. Study Design: Retrospective chart review and radiographic study analysis. Methods: A retrospective chart review and radiographic analysis of preoperative and postoperative contrast swallow studies were conducted on patients undergoing surgery for Zenker's diverticulum between 2002 and 2014 at our institution. A follow-up validated swallowing outcome questionnaire, the Eating Assessment Tool-10, was administered to measure and compare patients' symptomatic outcomes. Results: Seventy-three patients were reviewed and grouped according to technique. Median follow-up was 1.6 years. ESD resulted in a significantly larger residual party wall than ELD and OA but yielded comparative swallowing outcomes. OA had the highest complication rate and ESD had the highest revision rate. There were no revisions after ELD nor OA. Conclusions: Despite the predictably larger residual postoperative party wall following ESD, this technique produced statistically comparable swallowing outcomes. Given its low complication rate and comparable results, ESD should be considered first line therapy for medically high-risk patients with Zenker's diverticulum, while acknowledging a higher risk of symptom recurrence. ELD, with its slightly greater risk profile but low recurrence rate, is well suited for most in revision cases. OA may best be reserved for those patients in whom endoscopic approach is not feasible. Level Of Evidence: 4 Laryngoscope, 126:2475-2479, 2016. abstract_id: PUBMED:29966497 The Plication Technique to Enhance the Endoscopic Approach to Zenker's Diverticulum. Endoscopic stapler approaches to Zenker's diverticulum often yield a persistent diverticulum and recurrent dysphagia up to 20%. A novel technique to reduce the postoperative diverticulum is described. Eight consecutive patients with Zenker's diverticulum who underwent endoscopic stapler diverticulotomy had adjunctive endoscopic plication of the diverticulum wall to functionally reduce the residual diverticulum size. On postoperative esophagram, there was no visible diverticulum in 4 of 7 patients (57%). The remaining 3 patients had a reduction in common wall of 76%, 50%, and 40% with a mean postoperative size of 1.0 cm. All patients had resolution or significant improvement in dysphagia. There were no complications or recurrences at a mean follow-up of 6.3 months. As an adjunct to endoscopic treatment of Zenker's diverticulum, the plication technique can reduce diverticulum size. Further studies will determine if the plication technique affects long-term recurrence of endoscopic stapler approaches. abstract_id: PUBMED:32780234 Zenker diverticulum treatment: retrospective comparison of flexible endoscopic window technique and surgical approaches. 
Introduction: Different treatments exist for Zenker diverticulum. We compared flexible endoscopic myotomy of the cricopharyngeal muscle, using a technique called the "window technique" in order to improve the field of view, to surgical approaches. Materials And Methods: Patients were retrospectively included and divided into a gastrointestinal group, with flexible endoscopic myotomy, and an ear-nose-throat treatments group, with either rigid endoscopic treatment or cervicotomy. We evaluated effectiveness in terms of quality of life (on a scale of 0 to 10), safety, and technical aspects of each procedure. Results: A total of 106 patients who underwent 128 interventions were included. Rigid endoscopic procedures were the shortest (p < 0.001), with no difference in adverse events. Endoscopic approaches, flexible and rigid ones, were associated with shorter time to intake resumption (1 and 3 days, respectively, vs 6 after cervicotomy) and shorter length of hospital stay (3 and 4 days, respectively, vs 7 after cervicotomy) (p = 0.001). Post-operative QoL was better after flexible endoscopy (9/10) and open cervicotomy (9/10) than after rigid endoscopy (7/10) (p = 0.004). Patients declared fewer residual symptoms after open cervicotomy (77% of low symptomatic patients) and flexible endoscopy (80%) than after rigid endoscopy (43%) (p = 0.003). Conversion to open surgery was more frequent during rigid than flexible endoscopies (18% vs 0%, p = 0.0008). Conclusion: The flexible endoscopic approach to Zenker diverticulum treatment seems to be safe and effective and may be an alternative to surgical approaches. Myotomy can be aided by the window technique. abstract_id: PUBMED:31747168 New combined endoscopic method of Zenker's diverticulum treatment. Endoscopic treatment methods are currently a priority in the treatment of patients with Zenker's diverticulum. The optimal endoscopic technique should combine the positive aspects of traditional and tunnel technologies and be fairly simple from a technical standpoint, as well as radical and safe for the patient. This work evaluates the effectiveness of endoscopic cricopharyngeal esophagomyotomy using combined methods of treating patients with Zenker's diverticulum. From June 2014 to December 2018, the A. S. Loginov City Clinical Scientific Centre performed 30 surgeries using a new combined technique. The average surgery time was 37.5 minutes, with no postoperative complications. Control X-ray examination showed almost complete absence of the residual cavity of the diverticulum. The combined endoscopic method of treating patients with Zenker's diverticulum creates conditions for preventing recurrence of the disease, thereby ensuring the best result of treatment. abstract_id: PUBMED:26139133 New flexible endoscopic controlled stapler technique for the treatment of Zenker's diverticulum. Objective: Zenker's diverticulum (ZD) is a rare cause of dysphagia. Various surgical and flexible endoscopic therapies are available, with either higher morbidity or higher recurrence rates. Therefore, improved treatment options are needed. Material And Methods: This case report involves an 83-year-old female patient with symptomatic ZD. Under flexible endoscopic control, a new 5 mm fully rotatable surgical stapler was used for the dissection of the septum between the ZD and the esophageal lumen. An ultrathin endoscope and the 5 mm stapler were introduced together through a flexible overtube under conscious sedation.
Results: ZD treatment with this new stapler technique was feasible and effective in our patient. Procedure time was 10 min. Clinical symptoms improved immediately and the patient could be discharged the day after the procedure. Conclusion: The described stapler technique under flexible endoscopic control is the first report of this new treatment option for ZD. This new technique under conscious sedation may have some potential advantages compared to standard techniques, such as better long-term results and lower complication rates. Further studies are needed. abstract_id: PUBMED:18767328 Zenker's diverticulum: technique and results of endoscopic treatment with Endo-GIA 30 stapler. Objectives: The purpose of this project was to analyze the endoscopic management of Zenker diverticulum using an Endo-GIA stapler inserted transorally to perform an oesophageal diverticulostomy. Patients And Methods: Between January 1997 and December 2006, 30 consecutively treated symptomatic patients (13 men; median age 67 years; range 45-91) with Zenker's diverticulum were enrolled into this retrospective study. Ninety-six percent complained of dysphagia, with weight loss in 33%. The diagnosis of Zenker's diverticulum was based on anamnesis and radiological examination of the upper digestive tract with contrast medium. Results: Twenty-six patients had the endoscopic approach. One patient underwent resection by an external cervical approach during the same session, following a tear of the mucous membrane after placement of the stapled diverticulotomy. Endoscopic exposure failed in 3 patients. Among the 26 patients treated by a strictly endoscopic approach, the average duration of intervention was 29 minutes. Overall, liquid intake resumed on average 2.3 days postoperatively, and the average length of stay was 6 days. Mean follow-up was 40 months. Recurrence was noted in 2 cases; one was reoperated with the same technique. Conclusion: This endoscopic technique using an Endo-GIA stapler is an efficient, safe and minimally invasive method for the treatment of Zenker's diverticulum. With a low rate of morbidity, it is a good technique, especially for patients with impaired health or associated diseases. abstract_id: PUBMED:28533665 New flexible endoscopic controlled stapler technique for the treatment of Zenker's diverticulum: A case series. Aim: To report on the combination and advantages of a stapler-assisted diverticulotomy performed by flexible endoscopy. Methods: From November 2014 to December 2015, 17 patients (8 female, 9 male, average age 69.8 years) with a symptomatic Zenker diverticulum (mean size 3.5 cm) were treated by inserting a new 5 mm fully rotatable surgical stapler (MicroCutter30 Xchange, Cardica Inc.) next to an ultrathin flexible endoscope through an overtube. The patients were under conscious sedation with the head reclined to the left, the stapler placed centrally and pushed forward to the bottom of the diverticulum. The septum was divided by the staple rows under flexible endoscopic control. Results: In eleven patients (64.7%) the stapler successfully divided the septum completely. Mean procedure time was 21 min, medium size of the septum was 2.8 cm (range 1.5 cm to 4 cm). In four patients the septum was shorter than 3 cm, in seven longer than 3 cm. To divide the septum, an average of 1.3 stapler cartridges was used. Two minor bleedings occurred. Major adverse events like perforation or secondary haemorrhage did not occur.
After an average of two days, patients were discharged from the hospital. In 6 patients (35.3%) the stapler failed due to a thick septum or insufficient reclination of the head. Follow-up endoscopy was performed after an average of two months in 9 patients; 4 patients (44.4%) were free of symptoms, and 5 patients (55.6%) stated an improvement. A relapse of symptoms did not occur. Conclusion: Flexible endoscopic Zenker diverticulotomy using a surgical stapler is a new, safe and efficient treatment modality. Simultaneous tissue opening and occlusion prevents major complications. abstract_id: PUBMED:21143692 Transoral treatment of Zenker diverticulum: flexible endoscopy versus endoscopic stapling. A retrospective comparison of outcomes. Transoral stapled diverticulo-esophagostomy (TSDE) has gained increased popularity in the surgical treatment of Zenker diverticulum (ZD). One of the advantages of this approach is early rehabilitation, with a significant decrease in patient morbidity and time to resumption of oral intake as compared with open treatment. The section of the septum between the diverticulum and the esophagus with a flexible endoscopic (ES) approach has also been proposed since the mid-90s as an alternative for treatment of ZD. Both of these are minimally invasive approaches to treat ZD. We compared the TSDE management of ZD versus the ES treatment in a retrospective consecutive series of patients who were referred to either the ES or surgical unit of our Institute. Fifty-eight consecutive patients underwent treatment for ZD either by TSDE or ES. The two techniques were evaluated for length of hospital stay, diverticulum size, resumption of oral intake, resolution of dysphagia, and complications. Clinical outcome was evaluated using a symptom score from 0 to 3, calculated before and after the procedure. The two groups were compared on the various parameters using a Mann-Whitney test. Twenty-eight patients underwent ES and 30 TSDE for ZD. In both groups, a significant decrease in postoperative versus preoperative dysphagia was reported. The average length of hospital stay was not significantly different in the two groups (3.38 days for TSDE vs. 2.42 days for ES). The overall complication rate was similar in the two groups. There were two cases in the ES group and three cases in the TSDE group that required an ES revision to take down a residual diverticular wall that produced a mild but persistent dysphagia. Minimally invasive treatment of ZD, both with ES and with TSDE, is a valuable option for this disease: both techniques are safe and effective, with similar outcomes in terms of hospital stay, symptom reduction, and complication rate. Long-term results have to be evaluated.
abstract_id: PUBMED:11843721 Endoscopic stapling technique for the treatment of Zenker diverticulum vs standard open-neck technique: a direct comparison and charge analysis. Background: Presently, the 2 most widely used methods for the treatment of Zenker diverticulum are endoscopic stapling of the common party wall between the diverticulum sac and the esophagus and the standard open-neck technique involving diverticulectomy and cricopharyngeal myotomy. Objective: To perform an analysis of the hospital charges to determine the economic efficiency of each technique based on our experience at the Mt Sinai Medical Center, New York, NY. Methods: A retrospective analysis of 16 patients diagnosed as having Zenker diverticulum was conducted. Eight randomly chosen patients underwent endoscopic stapling with an EndoGIA 35-mm endoscopic stapler (Ethicon Inc, Somerville, NJ), and 8 randomly chosen patients underwent a standard open approach with diverticulectomy. Medical records were reviewed to determine operative time, length of hospital stay, time to oral intake, and postoperative complications. A charge analysis of the operative and postoperative fees was also performed. Statistical analysis between the 2 groups was conducted using analysis of variance and the paired t test. Results: The mean +/- SD operative time for the endoscopic stapling technique was 25.5 +/- 15.78 minutes, which was significantly less (P<.001) than that for the open procedure, 87.6 +/- 35.10 minutes. The mean operative charges were roughly equivalent at US$ 5178 for the endoscopic procedure and US$ 5113 for the open procedure. The endoscopic procedure, while shorter in operative time, had the added expense of specialized equipment, specifically the EndoGIA endoscopic stapler. The mean +/- SD length of hospital stay for the endoscopic procedure was significantly shorter (P<.001) at 1.3 +/- 0.59 days vs 5.2 +/- 1.03 days for the open procedure. The inpatient hospital charges for the endoscopic group were also significantly less (P<.001) at a mean of US$ 3589 per stay vs US$ 11,439 for the open group. The mean +/- SD time to oral intake was significantly shorter (P<.001) in the endoscopic group at 0.8 +/- 0.26 days vs 5.1 +/- 1.25 days for the open group. There were no major complications in either group, and all patients experienced resolution of preoperative symptoms. Conclusions: Compared with the standard open technique, the endoscopic stapling technique for the treatment of Zenker diverticulum results in a statistically significant shorter operative time, hospital stay, and time to resume oral feedings. While the charges of the operative procedures were roughly equivalent, the total hospital charges were significantly less for the patients treated endoscopically. Answer: The treatment of Zenker's diverticulum involves various surgical techniques, and both the residual wall size and the technique used can influence outcomes. A study comparing three surgical techniques—open approach for diverticulectomy with cricopharyngeal myotomy (OA), endoscopic laser-assisted diverticulotomy (ELD), and endoscopic stapler-assisted diverticulotomy (ESD)—found that ESD resulted in a significantly larger residual party wall than ELD and OA but yielded comparable swallowing outcomes. OA had the highest complication rate, and ESD had the highest revision rate, with no revisions after ELD or OA.
Despite the larger residual postoperative party wall following ESD, this technique produced statistically comparable swallowing outcomes, suggesting that residual wall size may not be the sole determinant of success (PUBMED:27010588). Another study introduced a plication technique as an adjunct to endoscopic stapler diverticulotomy, which functionally reduced the residual diverticulum size and showed promising results with no visible diverticulum in 57% of patients on postoperative esophagram. All patients had resolution or significant improvement in dysphagia, with no complications or recurrences at a mean follow-up of 6.3 months (PUBMED:29966497). A retrospective comparison of flexible endoscopic window technique and surgical approaches indicated that flexible endoscopic myotomy of the cricopharyngeal muscle was associated with shorter time to intake resumption and shorter length of hospital stay compared to surgical approaches. Post-operative quality of life was better after flexible endoscopy and open cervicotomy than after rigid endoscopy, with fewer residual symptoms reported after open cervicotomy and flexible endoscopy than after rigid endoscopy (PUBMED:32780234). Other studies have evaluated new endoscopic methods, such as a combined endoscopic method that showed almost complete absence of the residual cavity of the diverticulum and no postoperative complications (PUBMED:31747168), and a flexible endoscopic controlled stapler technique that was safe and efficient, with a new stapler technique under flexible endoscopic control potentially offering better long-term results and lower complication rates (PUBMED:28533665, PUBMED:26139133).
Instruction: Is overactivity the core feature of hypomania in bipolar II disorder? Abstracts: abstract_id: PUBMED:17065838 Is overactivity the core feature of hypomania in bipolar II disorder? Background: Recent studies found that overactivity (increased goal-directed activities) may be as important as mood change (elevated and/or irritable mood) for the diagnosis of mania/hypomania (on family history and psychometric grounds), questioning DSM-IV-TR criteria always requiring mood change and listing overactivity among the other symptoms. The aim of the study was to find out if overactivity was at least as important as mood change for the diagnosis of hypomania. Sampling And Methods: A consecutive sample of 137 bipolar II disorder (BP-II) and 76 major depressive disorder remitted outpatients were interviewed with the Structured Clinical Interview for DSM-IV by a senior clinical and research psychiatrist in a private practice. Patients were asked if they had had hypomanic symptoms and episodes, and which were the most common hypomanic symptoms during the various episodes. The study aim had not been planned when variables were collected for different study goals. Results: Overactivity was the most common hypomanic symptom in BP-II, more common than elevated mood, and had the strongest association with BP-II among all the hypomanic symptoms (overactivity odds ratio = 15.4, elevated mood odds ratio = 12.6). Three factors were found: an 'elevated mood' factor including elevated mood and increased self-esteem; a 'mental activation' factor including racing/crowded thoughts; and a 'behavioral activation' factor including overactivity. There was no relationship between overactivity and mood change. Irritable mood was not associated with overactivity and elevated mood. BP-II was present in 21.6% of patients without a history of overactivity, and in 81.0% of patients with a history of overactivity. BP-II was present in 25.0% of patients without elevated mood, and in 63.3% of patients with elevated mood. As a predictor of BP-II, overactivity had a sensitivity of 90.5%, a specificity of 61.8%, and a positive predictive value of 81.0% (elevated mood: 72.2, 82.8, and 88.3%, respectively). Five or more hypomanic symptoms had the most balanced combination of sensitivity (82.4%) and specificity (85.5%) for BP-II, and a positive predictive value of 91.1%. Overactivity was present in 89.5% of patients with a history of ≥ 5 hypomanic symptoms, while elevated mood was present in 76.6%. Conclusions: The results seem to support the view that overactivity may be a core feature of hypomania, suggesting the upgrading of overactivity to a stem criterion for hypomania. abstract_id: PUBMED:23261011 Overactive lifestyle in patients with fibromyalgia as a core feature of bipolar spectrum disorder. Objectives: To test the hypothesis that the premorbid overactivity previously described in subjects with fibromyalgia is a core feature of the manic/hypomanic symptoms characterising bipolar spectrum disorders. Methods: 110 consecutive patients with fibromyalgia were assessed for bipolar spectrum disorders using both categorical and dimensional approaches. The first was based on a version of the DSM-IV SCID-CV interview, modified to improve the detection of bipolar spectrum disorders, the second on the hypomania symptom checklist HCL-32, which adopts a dimensional perspective of the manic/hypomanic component of mood by including sub-syndromal hypomania.
Results: Both DSM-IV and Zurich criteria diagnosed high rates of bipolar spectrum disorder in patients with fibromyalgia (70% and 86.3%, respectively). Individuals with a major bipolar spectrum disorder (bipolar II disorder) and with a minor bipolar spectrum disorder (subthreshold depression and hypomania) did not differ in their demographic and clinical aspects. Hypomanic symptom counts on the HCL-32 confirmed high estimates of the bipolar spectrum, with 79% of subjects with fibromyalgia scoring 14 (threshold for hypomania) or above. Conclusions: Overactivity reported in previous studies may be considered a core feature of hypomanic symptoms or syndromes comorbid with bipolar spectrum disorders. Major and minor bipolar spectrum disorders are not associated with differences in demographic or clinical characteristics, suggesting that fibromyalgia rather than being related specifically to depression is related to bipolar spectrum disorders and in particular to the hypomania/overactivity component. abstract_id: PUBMED:24140225 Increased energy/activity, not mood changes, is the core feature of mania. Background: In the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, the occurrence of increased energy/activity and elation of mood or irritability became necessary symptoms for the diagnosis of an episode of mania or hypomania. Objective: To evaluate whether increases in energy/activity or mood changes represent the core feature of the manic syndrome. Methods: The symptomatology of 117 hospitalized patients with bipolar mania was evaluated using the Schedule for Affective Disorders and Schizophrenia-Changed version (SADS-C). Based on six items of the SADS-S related to mania, a Confirmatory Factor Analysis (CFA) was performed. An Item Response Theory (IRT) analysis was used to identify how much each symptom informs about the different levels of severity of the syndrome. Results: According to the CFA, the item "increased energy" was the symptom with the highest factorial loadings, which was confirmed by the IRT analysis. Thus, increased energy was the alteration most correlated with the total severity of manic symptoms. Additionally, the analysis of the Item Information Function revealed that increased energy was correlated with the larger amplitude of severity levels compared with the other symptoms of mania. Limitations: Only six manic symptoms were considered. The sample might not be representative because the patients were evaluated while presenting peak symptom severity. Conclusions: Increased energy/activity is a more important symptom for a diagnosis of mania than mood changes and represents the core feature of this syndrome. abstract_id: PUBMED:19744407 What is hypomania? Tetrachoric factor analysis and kernel estimation of DSM-IV hypomanic symptoms. Background: The DSM-IV definition of hypomania, which relies on clinical consensus and historical tradition, includes several "nonspecific" symptoms. The aim of this study was to identify the core symptoms of DSM-IV hypomania. Method: In an outpatient private practice, 266 bipolar II disorder (BP-II) and 138 major depressive disorder (MDD) remitted patients were interviewed by a bipolar-trained psychiatrist, for different study goals. Patients were questioned, using the Structured Clinical Interview for DSM-IV, about the most common symptoms and duration of recent threshold and subthreshold hypomanic episodes. Data were recorded between 2002 and 2006. 
Four different samples, assessed with the same methodology, were pooled for the present analyses. Tetrachoric factor analysis was used to identify core hypomanic symptoms. Distribution of symptoms by kernel estimation was inspected for bimodality. Validity of core hypomania was tested by receiver operating characteristic (ROC) analysis. Results: The distribution of subthreshold and threshold hypomanic episodes did not show bimodality. Tetrachoric factor analysis found 2 uncorrelated factors: factor 1 included the "classic" symptoms elevated mood, inflated self-esteem, decreased need for sleep, talkativeness, and increase in goal-directed activity (overactivity); factor 2 included the "nonspecific" symptoms irritable mood, racing/crowded thoughts, and distractibility. Factor 1 discriminatory accuracy for distinguishing BP-II versus MDD was high (ROC area = 0.94). The distribution of the 5-symptom episodes of factor 1 showed clear-cut bimodality. Similar results were found for episodes limited to 3 behavioral symptoms of factor 1 (decreased need for sleep, talkativeness, and overactivity) and 4 behavioral symptoms of factor 1 (adding elevated mood), with high discriminatory accuracy. Conclusions: A core, categorical DSM-IV hypomania was found that included 3 to 5 symptoms, ie, behavioral symptoms and elevated mood. Behavioral symptoms (overactivity domain) could be the basic phenotype of hypomania. This finding could help in probing for hypomania and reduce misdiagnosis. Biologic research could focus more on the underpinnings of the overactivity domain specifically. abstract_id: PUBMED:17612849 Testing new diagnostic criteria for hypomania. Background: A recent series of studies has questioned DSM-IV diagnostic criteria for hypomania, suggesting that overactivity (increased goal-directed activity) should have priority over mood change as stem criterion. Angst has suggested new criteria for hypomania, giving priority to overactivity. Study aim was to test the validity of Angst's diagnostic criteria for hypomania. Methods: A consecutive sample of remitted 213 DSM-IV bipolar-II disorder (BP-II) and major depressive disorder (MDD) outpatients were re-diagnosed, during a follow-up visit, by the Structured Clinical Interview for DSM-IV (yes/no structured questions on hypomanic symptoms, skip-out instruction of stem question on mood change not followed, in order to assess all past hypomanic symptoms), by a mood disorder specialist psychiatrist in a private practice. Angst's stem diagnostic criteria for hypomania were tested versus DSM-IV hypomania: 1) overactivity plus at least 3 of the 7 DSM-IV hypomanic symptoms 2) overactivity plus at least 2 of the 7 DSM-IV hypomanic symptoms. Results: DSM-IV criteria for hypomania were met by 137 patients, overactivity plus 2/7 was met by 146 patients, and overactivity plus 3/7 was met by 135 patients. Of the patients with overactivity plus 2/7, 83.5% also met DSM-IV criteria for hypomania, and of the patients with overactivity plus 3/7 86.6% also met DSM-IV criteria for hypomania. Logistic regression of DSM-IV hypomania versus overactivity plus 2/7 found odds ratio (OR) = 17.6, and versus overactivity plus 3/7 found OR = 18.8. Comparisons between DSM-IV hypomania and Angst's criteria for hypomania showed that there were no significant differences on age, gender, symptom structure of hypomania, number of episodes, episodes duration, and episodes level of functioning. 
Associations (ORs) between the stem criterion of each definition of hypomania and hypomanic symptoms were often strong. DSM-IV hypomania stem criterion was closely associated with overactivity (OR = 15.4), and Angst's hypomania stem criteria were closely associated with mood change (OR = 7.6 for overactivity plus 2/7, OR = 14.3 for overactivity plus 3/7). Conclusions: Results support Angst's criteria for hypomania based on overactivity (overactivity plus 3/7 seems more supported). These criteria do not seem to lead to overdiagnosing hypomania. Previous studies supported the upgrading of overactivity among DSM-IV hypomanic symptoms. Angst's diagnostic criteria may positively impact the treatment of depression. It has been shown that focusing the probing for history of hypomania more on overactivity than on mood change reduces the false-negative BP-II. By using Angst's criteria for hypomania, clinicians may reduce the current high misdiagnosis of BP-II as MDD and the related mistreatment. abstract_id: PUBMED:18313825 A relationship between bipolar II disorder and borderline personality disorder? Background: The relationship between DSM-IV-TR borderline personality disorder (BPD) and bipolar disorders, especially bipolar II disorder (BP-II), is still unclear. Many recent reviews on this topic have come to opposite or different conclusions. Study Aim: The aim was to test the association between hypomania symptoms and BPD traits, as hypomania is the defining feature of BP-II in DSM-IV-TR. Methods: During follow-up visits in a private practice, 138 consecutive remitted BP-II outpatients were re-diagnosed by a mood disorder specialist psychiatrist, using the Structured Clinical Interview for DSM-IV (as modified by Benazzi and Akiskal for better probing hypomania). Soon after, patients self-assessed (blind to interviewer) the SCID-II Personality Questionnaire for BPD. Associations and confounding were tested by logistic regression, between each criterion symptom of hypomania (apart from "racing thoughts" and "distractibility", not assessed as probing focused mainly on behavioral, observable signs), and the entire set of BPD traits. Multivariate regression was also used to jointly regress the entire set of hypomanic symptoms on the entire set of BPD traits. Results: Mean (SD) age was 39.0 (9.8) years, females were 76.3%. Frequency of BPD traits ranged between 17% and 66% (e.g. impulsivity trait 41%, affective instability trait 63%), mean (SD) number of traits was 4.2 (2.3). The most common episodic hypomanic symptoms were elevated mood (91%) and overactivity (93%); frequency of excessive risky, impulsive activities (impulsivity) was 62%. By logistic regression the only significant association was between the episodic impulsivity of hypomania and the trait impulsivity of BPD. Multivariate regression of the entire set of hypomanic symptoms jointly regressed on the entire set of BPD traits was not statistically significant. Discussion: The core feature of BP-II, i.e. hypomania, does not seem to have a close relationship with BPD traits in the study setting, partly running against a strong association between BPD and BP-II and a bipolar spectrum nature of BPD. abstract_id: PUBMED:12910352 Frequency of bipolar spectrum in 111 private practice depression outpatients. Background: Mood disorders included in the bipolar spectrum are increasing, and overactivity (increased goal-directed activity) has reached the status of mood change for the diagnosis of hypomania in the recent studies by Angst and Akiskal.
Study Aim: To find the frequency of the bipolar spectrum in remitted depressed outpatients by including sub-syndromal hypomania. Methods: 111 depression-remitted outpatients were interviewed for history of hypomania and hypomanic symptoms with the Structured Clinical Interview for DSM-IV-Clinician Version (a partly semistructured interview), as modified by Benazzi and Akiskal. Bipolar I patients were not included. All past hypomanic symptoms (especially overactivity) were systematically assessed. Wording of the questions could be changed to increase/check understanding. Subsyndromal hypomania was defined as an episode of overactivity (increased goal-directed activity) plus at least 2 hypomanic symptoms. Results: Frequency of bipolar II (BPII) was 68/111 (61.2%, 95% confidence interval 52% to 69.8%), frequency of major depressive disorder (MDD) was 43/111. The most common hypomanic symptom was overactivity. In the MDD sample, sub-syndromal hypomania was present in 39.5% (15.3% of the entire sample), with a median of 4 symptoms. Bipolar spectrum frequency was 76.5% (95% confidence interval 67.9% to 83.5%). Overactivity had higher sensitivity than elevated mood for predicting BPII diagnosis. Limitations: Single interviewer. Conclusions: By systematic probing more focused on past overactivity than mood change, and by inclusion of sub-syndromal hypomania, bipolar spectrum frequency was higher than the near 1 to 1 ratio versus MDD reported up to now (Angst et al.). Given the wide confidence interval, the value in the depression population should be around 70%. Better probing skills by clinicians and use of semi-structured interviews could greatly reduce the current high underdiagnosis of BPII and related disorders in usual clinical practice. abstract_id: PUBMED:15181482 Toward better probing for hypomania of bipolar-II disorder by using Angst's checklist. The reliability of the diagnosis of bipolar-II disorder (BP-II) is still a problem. Semi-structured interviews by clinicians might partly overcome this problem. The aims of this study were to find the degree of agreement in the diagnosis of BP-II between the Structured Clinical Interview for DSM-IV (SCID) and a semi-structured interview based on Angst's hypomania checklist (Angst et al., 2003), and to assess the priority among hypomanic symptoms for the diagnosis of BP-II. Remitted depression outpatients (N = 102) were interviewed during a follow-up visit using the Structured Clinical Interview for DSM-IV (SCID), and then with Angst's semi-structured interview, following DSM-IV criteria. Bipolar I (BP-I) patients were excluded. Using the SCID, 29 patients were diagnosed BP-II, 26 BP-I, and 47 major depressive disorder (MDD). By the semi-structured interview, 69 patients were diagnosed BP-II, 33 MDD, and none BP-I. Agreement for the diagnosis of BP-II between the two interviews was 53.9% (k = 0.18). Re-analysis, after deleting the SCID question on the impact on functioning (DSM-IV unclear boundary between BP-I and BP-II), increased agreement to 78.4% (k = 0.55). Elevated mood and overactivity (increased goal-directed activity) had the lowest agreement (k = 0.46 and 0.49). For predicting BP-II, overactivity had the highest sensitivity (94.2%), while elevated mood had a sensitivity of 84.0%. Multivariate analysis for predicting BP-II (diagnosed by semi-structured interview), including all DSM-IV hypomanic symptoms, found that mood change and overactivity were the only independent predictors.
Overactivity plus at least three symptoms (as suggested by Angst and Gamma, 2002) were present in 71 patients, of whom 91.5% also met DSM-IV criteria for hypomania. Overactivity and elevated mood were strongly associated (but not overactivity and irritability). Findings may support a diagnosis of BP-II based on Angst's semi-structured interview versus the fully structured SCID interview. While DSM-IV always requires mood change for the diagnosis of hypomania, the present findings may suggest that overactivity could have the same priority, as suggested by Angst et al. (2003) and by Akiskal et al. (1977, 2001, 2003). abstract_id: PUBMED:14499315 Diagnosis of bipolar II disorder: a comparison of structured versus semistructured interviews. Background: Reliability of bipolar II (BPII) disorder diagnosis is still a problem. Recent studies have shown that semistructured interviews by clinicians are better than structured interviews by nonclinicians for BPII diagnosis. The aim of the study was to find the degree of agreement in the diagnosis of BPII between the Structured Clinical Interview for DSM-IV (SCID) and a semistructured interview based on DSM-IV criteria done by an expert clinician. Methods: One hundred eleven remitted major depressive episode (MDE) outpatients were interviewed first with the SCID and soon after that with a semistructured interview following DSM-IV criteria (based on clinical evaluation). Bipolar I (BPI) patients were excluded. Results: By the SCID, 24 patients were diagnosed BPII (21.6%) and 30 were diagnosed BPI (27.0%). By the semistructured interview, 68 patients were diagnosed BPII (61.2% of the entire sample) and none BPI. Agreement between the SCID BPII diagnosis and the semistructured interview BPII diagnosis was 51.3% (meaning one in two missed). Sensitivity and specificity of the SCID BPII diagnosis for the semistructured BPII diagnosis were 29.4% and 90.7%, respectively. Overactivity (increased goal-directed activity) was the most common hypomanic symptom. In the group with overactivity (n=76), a semistructured interview BPII diagnosis was present in 77.6%, while a SCID BPII diagnosis was present in only 22.3%. Sensitivity and specificity of overactivity for BPII diagnosis were 86.7% and 60.4%, respectively, while elevated mood had sensitivity of 60.2% and specificity of 86.0%. Conclusions: Findings support a diagnosis of BPII based on a semistructured interview by an expert clinician versus a fully structured interview. Overactivity priority level for the diagnosis of hypomania is supported by the present findings. abstract_id: PUBMED:15526254 Factor structure of recalled DSM-IV hypomanic symptoms of bipolar II disorder. The DSM-IV-TR definition of hypomania in bipolar II disorder (BP-II) has yet to show its validity. The aim of the current study was to find the factor structure of hypomania by using DSM-IV-TR symptoms, and to assess the DSM-IV-TR definition of hypomania. One hundred ninety-seven consecutive BP-II remitted outpatients were interviewed by the Structured Clinical Interview for DSM-IV (SCID-CV) as modified by Benazzi and Akiskal (2003) and by Benazzi (2003), in a private practice, assessing the symptoms that were more common during past hypomanic episodes. The factor structure of hypomania was studied by principal component factor analysis. Almost all patients reported overactivity (increased goal-directed activity) during hypomania, and less commonly elevated mood. Overactivity plus three or more symptoms identified 89.3% of DSM-IV-TR BP-II. 
Factor analysis found three factors: factor 1, including racing thoughts ("mental activation"); factor 2, including elevated mood ("high mood"); and factor 3, including overactivity ("behavioral activation"). Elevated mood was correlated only with two of the nine DSM-IV-TR hypomanic symptoms. The three-domains structure of hypomania by Kraepelin (i.e., increased mood, thought, and activity) was found in the DSM-IV-TR definition of hypomania, partly supporting its list of symptoms. However, DSM-IV-TR priority given to mood change for the diagnosis of hypomania was not supported. An upgrading of overactivity to at least a priority level similar to mood change was supported by (1) its high frequency, (2) its utility to diagnose BP-II, and (3) by factor analysis showing that elevated mood (the "prototypical" symptom of hypomania in DSM-IV-TR) correlated with few symptoms, and that three factors (of which only one included elevated mood) were present. Answer: Yes, overactivity appears to be a core feature of hypomania in bipolar II disorder. Several studies have indicated that overactivity, or increased goal-directed activities, may be as important as, or even more important than, mood change for the diagnosis of hypomania. One study found that overactivity was the most common hypomanic symptom in bipolar II disorder patients, more common than elevated mood, and had the strongest association with bipolar II disorder among all the hypomanic symptoms (PUBMED:17065838). Another study suggested that overactivity reported in patients with fibromyalgia may be considered a core feature of hypomanic symptoms or syndromes comorbid with bipolar spectrum disorders (PUBMED:23261011). Further research indicated that increased energy/activity is a more important symptom for a diagnosis of mania than mood changes and represents the core feature of this syndrome (PUBMED:24140225). Tetrachoric factor analysis and kernel estimation of DSM-IV hypomanic symptoms also identified a core, categorical DSM-IV hypomania that included behavioral symptoms and elevated mood, with behavioral symptoms (overactivity domain) potentially being the basic phenotype of hypomania (PUBMED:19744407). Moreover, testing of new diagnostic criteria for hypomania supported the view that overactivity should have priority over mood change as a stem criterion, and that Angst's diagnostic criteria for hypomania based on overactivity do not seem to lead to overdiagnosing hypomania (PUBMED:17612849). These findings suggest that focusing on overactivity rather than mood change when probing for a history of hypomania could reduce misdiagnosis and improve treatment for depression. In summary, the evidence supports the notion that overactivity is a core feature of hypomania in bipolar II disorder, and this has implications for diagnostic criteria and clinical practice.
Instruction: Do different ethnic groups need different sun protection? Abstracts: abstract_id: PUBMED:19624429 Do different ethnic groups need different sun protection? Background: In the present study, the transmission of sunlight through the human skin barrier into the living tissue was investigated in the spectral region between 280 and 700 nm. Methods: The experiments were performed with a fiber-based spectrometer on sliced skin obtained from volunteers with different skin types. One fiber was positioned directly on the skin surface and the second one underneath the skin samples. The distribution of the sunlight under the epidermis was determined. Results: Significant differences were found in the absorption properties of the different skin types, which were mainly determined by the variations in melanin concentration and distribution. It was found that sunscreens for specific ethnic groups need different combinations of UV filters, if a balanced relation between ultraviolet B (UVB) and ultraviolet A (UVA) protection is to be obtained. On the other hand, it could be demonstrated that the human skin is also well protected against visible and near-infrared light by melanin. Conclusions: The higher the skin type category, the better the protection in the visible part of the spectrum of the sun. This stimulates the current debate as to whether sunscreens should also contain protective compounds in the visible and near-infrared parts of the spectrum. abstract_id: PUBMED:28647965 Paying attention to different health needs of different ethnic groups in the process of the health for all program. In recent years, great effort has been made in the promotion of health for all in China. Articles in the column on chronic and non-communicable disease risk factors in the Uighur population, an analysis based on the investigation of Uygur population health status in the Kashi area of Xinjiang, China, and similar domestic and foreign studies showed that health data differ between countries. Differences in health-related data also exist between ethnic groups within the same country, and within the same ethnic group across different areas. Only by fully understanding the differences in disease and related factors among different ethnic groups, developing individualized health indicators, and conducting targeted interventions can the goal of health for all be achieved. abstract_id: PUBMED:38360939 Ethnic and Gender Differences in Sun-Related Cognitions Among College Students: Implications for Intervention. Background: Skin cancer incidence and prognosis vary by ethnicity and gender, and previous studies demonstrate ethnic and gender differences in sun-related cognitions and behaviors that contribute to this disease. The current study sought to inform skin cancer interventions tailored to specific demographic groups of college students. The study applied the prototype willingness model (PWM) to examine how unique combinations of ethnic and gender identities influence sun-related cognitions. Method: Using data from a survey of 262 college students, the study tested whether self-reported sun-related cognitions were different for White women, Hispanic women, White men, and Hispanic men. Path modeling was also used to identify which PWM cognitions (e.g., prototypes, norms) were the strongest predictors of risk and protection intentions and willingness in each demographic group.
Results: Several differences in sun-related cognitions and PWM pathways emerged across groups, emphasizing the need for tailored skin cancer education and interventions. Results suggest that, for White women, interventions should primarily focus on creating less favorable attitudes toward being tan. Conclusion: Interventions for Hispanic women may instead benefit from manipulating perceived similarity to sun-related prototypes, encouraging closer personal identification with images of women who protect their skin and encouraging less identification with images of women who tan. For White men, skin cancer interventions may focus on creating more favorable images of men who protect their skin from the sun. Lastly, interventions for Hispanic men should increase perceived vulnerability for skin cancer. abstract_id: PUBMED:36301344 Sun protective behaviors in sun-sensitive individuals: a cross-sectional study examining for ethnic and racial differences. The rates of non-melanoma skin cancer continue to rise in the United States. We investigated if differences exist in skin cancer preventive behaviors among sun-sensitive non-Hispanic whites and other racial and ethnic groups. The National Health Information Survey was used to perform the cross-sectional study. Outcomes of interest included multiple sun-protective methods. Individuals were determined to be sun-sensitive if Fitzpatrick skin phototype (SPT) I/II. Multivariable logistic regression was used to examine the associations between the use of sun-protective practices and race and ethnicity, SPT, and survey year. The study included 67,471 individuals. Adjusted prevalences of skin cancer preventive behaviors revealed that across all SPTs, non-Hispanic whites were more likely to use sunscreen, undergo physician-administered FBSE, and have multiple sunburns or tan indoors in the past year compared to corresponding SPT other racial and ethnic groups. In contrast, other racial and ethnic groups with any SPT were more likely to practice sun avoidance than corresponding SPT non-Hispanic whites. Additionally, other racial and ethnic groups with SPT III+ were more likely to wear sun-protective clothing than non-Hispanic white individuals with similar SPTs. For all SPT and racial and ethnic groups, there were significant increases in sunscreen use and decreases in past-year indoor tanning. Full body skin examination prevalence, regardless of sun sensitivity, increased for all non-Hispanic whites, but remained unchanged for sun-sensitive other racial and ethnic groups. Adjusted prevalence of multiple sunburns and use of protective clothing remained unchanged for all racial and ethnic groups across all SPT categories. There are differences in sun-protective behaviors among sun-sensitive individuals by race and ethnicity with the magnitude of some of these differences increasing. Future research and public health campaigns are needed on photosensitive individuals and the relationship between skin protective behaviors and race/ethnicity. When discussing sun protection, care providers should not only consider the influence of sun-sensitivity, but also race and ethnicity, and its impact on sun prevention behaviors. abstract_id: PUBMED:22851950 Vitamin D beliefs and associations with sunburns, sun exposure, and sun protection. The main objective of this study was to examine certain beliefs about vitamin D and associations with sun exposure, sun protection behaviors, and sunburns. 
A total of 3,922 lifeguards, pool managers, and parents completed a survey in 2006 about beliefs regarding vitamin D and sun-related behaviors. Multivariate ordinal regression analyses and linear regression analysis were used to examine associations of beliefs and other variables. Results revealed that non-Caucasian lifeguards and pool managers were less likely to agree that they needed to go out in the sun to get enough vitamin D. Lifeguards and parents who were non-Caucasian were less likely to report that sunlight helped the body to produce vitamin D. A stronger belief about the need to go out in the sun to get enough vitamin D predicted more sun exposure for lifeguards. For parents, a stronger belief that they can get enough vitamin D from foods predicted greater sun protection, and a stronger belief that sunlight helps the body produce vitamin D predicted lower sun exposure. This study provides information regarding vitamin D beliefs and their association with certain sun-related behaviors across different demographic groups that can inform education efforts about vitamin D and sun protection. abstract_id: PUBMED:35153433 Nasal changes in different age groups. Objective: The objective of this study was to evaluate nasal changes in different age groups and to detect gender differences in nasal dimensions. Materials And Methods: Clinical measurement and profile photographic records of 279 randomly selected subjects were obtained for the evaluation of nasal changes. Thirty-nine subjects were excluded, and the final sample consisted of 240 subjects. The subjects were divided into four groups by age: Group 1 (16-20 years), Group 2 (21-30 years), Group 3 (31-40 years), and Group 4 (41-50 years). Each group was further subdivided on the basis of gender. Data were collected and analyzed by two-way multivariate analysis of variance with Duncan's multiple range post hoc test. Results: In both males and females, nasal parameters increased with age, except for the nasal index and nasolabial angle. More nasal changes were seen in males as compared to females of the same age group. Conclusions: Nonconsistent age-related changes were found for nasal index and nasolabial angle. The rest of the nasal parameters increased with age and differed between the genders. Larger nasal changes were seen in males as compared to females of the same age group. abstract_id: PUBMED:33061512 Sun Awareness and Sun Protection Practices. Background: The incidence of skin cancer has increased over the past few years, owing to excessive exposure to ultraviolet (UV) radiation. Over the past decade, protection against solar radiation has been highly encouraged in numerous public health education campaigns. The adopted sun protection practices include avoiding exposure to the sun, using protective clothing, and applying sunscreen. The aim of this paper was to analyze the awareness and attitudes towards sun protection practices in some regions of the Arabian Peninsula. Objective: The main objective of this study is to estimate the level of sun protection practices and determine the factors that have a significant impact on the level of adoption of sun protection practices by residents in the Arabian Peninsula. Methods: A 35-question self-reporting online questionnaire to evaluate the sun protection measures adopted by the residents was distributed in the Kingdom of Saudi Arabia and Kingdom of Bahrain. The survey was conducted from January 2018 to May 2019, and a total of 830 responses were collected for analysis.
The data were analyzed using the statistical software SPSS. The chi-squared test and Spearman's correlation were performed to determine the association of the sun protection level with factors concerning the socio-demographic characteristics, sun sensitivity, health-conscious lifestyle, adopted sun protection measures, and general awareness about the UV index, skin cancer, and vitamin D. Logistic regression classification was applied in developing the empirical model to predict the sun protection level of the residents. Results: Based on the analysis, only 34% of the total 830 respondents use sunscreen and most of the sunscreen users are female. The three most popular sun protection practices are: 1) avoiding exposure to the sun during peak hours, 2) using protective clothing, and 3) using sunglasses. The results indicate that 22% of respondents use a low level of sun protection, 53% use a medium level of protection, while 25% of respondents use a high level of sun protection. Many of the respondents understand that exposure to the sun can cause skin cancer and skin aging, while 54% of the respondents still believe that a suntan can be healthy if sunburn is avoided. Thirty percent of the participants of this study are aware of the UV index, and only 20% of all respondents know the correct meaning of the UV index. The overall predictive accuracy of the constructed logistic regression model is 67%. Based on the results of the logistic regression model, the five most significant factors of the sun protection level are: 1) perception of whether the sun can cause skin aging, 2) age, 3) awareness of the dangerous effects of sun exposure, 4) income with respect to expenses, and 5) taking the UV index into consideration during exposure to the sun. Conclusion: Most of the respondents are aware of sun protection. However, the sun protection level of the residents is still inadequate. An efficient strategy for increasing awareness of the effective use of sun protection must be established, based on the significant factors, to prevent skin cancer. abstract_id: PUBMED:34438542 Cluster Randomized Trial: Sun Protection Intervention 'Clever in Sun and Shade for Preschools'-Effectiveness and Dissemination. Skin cancer is one of the most common types of cancer and UV radiation is one of the main risk factors. Therefore, sun protection, especially in childhood, is strongly recommended. We examined the effectiveness of the 'Clever in Sun and Shade for Preschools' program (CLEVER) in promoting sun protection behavior among preschool staff (trial registration: DRKS00023468) and describe its dissemination. Within a cluster randomized trial with 24 preschools (n = 273 staff members) stating a high need for sun protection measures, an educational workshop for preschool staff and a project kit with materials applicable in preschool groups was provided. Staff members of preschools taking part in CLEVER reported significantly stronger sun protection behavior to avoid the sun (effect size [ES] 0.70, 95% confidence interval [CI] 0.04 to 0.71, p < 0.05) and fewer perceived impediments to avoiding the sun (ES -0.56, CI -0.82 to -0.17, p < 0.01) after 12 months, as well as higher self-efficacy to avoid the sun (ES 1.09, CI 0.39 to 1.07, p < 0.001) and to use sunscreen (ES 0.71, CI 0.03 to 0.88, p < 0.05) after 1 month. Compared to the control group, there was no significant effect on sunscreen use and further psychosocial outcomes. The effectiveness of CLEVER may be underrated due to a high drop-out rate.
Within three years, an enhanced free-of-charge program kit, including a media-based workshop and materials, had reached over 4000 preschools, i.e., 7.1% of all daycare centers in Germany. The results show that CLEVER can strengthen sun protection, offer high-quality information at low cost, and is easily disseminable. abstract_id: PUBMED:36622262 Sun protection behavior: health impact, prevalence, correlates and interventions. Sun protection behavior can reduce skin cancer risk. This paper provides an overview of skin cancer risk and the complex behavioral options for sun protection, along with a narrative review of research on determinants of, and interventions to promote, sun protection. Gaps and opportunities for future research are also outlined. Evidence supports the effectiveness of sunscreen use, ultraviolet (UV) protection clothing, and shade seeking. However, these behaviors are complex and are often performed in ways that are inadequate for sun protection. Most research examining and targeting determinants of sun protection behavior has focused on sunscreen use, to the exclusion of other strategies, and has largely ignored the complexity of even sunscreen use. Most research and interventions are theory-driven, drawing on social cognitive constructs, but also considering self and social identity and emotion. Multilevel perspectives considering context, environment, policies, and other structural contexts have also been applied to sun protection behavior, but there is a need to combine psychological constructs with factors at other levels to optimize predictive models and intervention effectiveness. Effective sun protection involves complex behaviors and perceived and actual tradeoffs that should be considered, in combination with multilevel determinants, in research predicting and promoting sun safety. abstract_id: PUBMED:23596561 Retrospective study of cancer types in different ethnic groups and genders at Karachi. This retrospective study of cancer types in different ethnic groups and genders determines the pattern of cancers reported in the oncology wards of hospitals in Karachi, Pakistan, during the last eight years. Every male and female case with histologically and cytologically established cancer was enrolled from January 2003 to December 2010. Data for all patients were collected retrospectively from patient files and charts, representing the population of Karachi, Interior Sindh, and Balochistan. A total of 5134 patients (male = 2432 / female = 2702) were investigated for their diagnosis of cancer type, ethnicity, age, and gender. Classification of malignancy was done according to the International Classification of Disease coding system of the W.H.O. (ICD-10). The statistical analysis was performed for means, standard errors, and proportions for ethnic groups and genders. Proportionately, 47.37% were males, among whom the major ethnic groups were 17% Sindhi, 17% Immigrant, 4% Baloch, 3% Pukhtoon, ≈4% Punjabi, 1% Siraiki, and 2% Minorities; 52.62% were females, among whom 16% were Sindhi, 21% Immigrant, 4% Baloch, 3% Pukhtoon, 5% Punjabi, 1% Siraiki, and 3% Minorities. The mean age of males was 45.75 years, SE ± 0.227, and of females 44.07 years, SE ± 0.183. The three most frequently occurring tumors among males were Head & Neck, Adenoma/Carcinoma of Glands & Body cavity membranes, and GIT, and among females Breast, Head & Neck, Adenoma/Carcinoma of Glands & Body cavity membranes, and GIT.
The analysis of the data indicates that Head & Neck is the most common cancer among males; similarly, breast cancer is the most common malignancy among females. Answer: Yes, different ethnic groups may require different sun protection strategies. Studies have shown that there are significant differences in the absorption properties of skin among various skin types, which are primarily determined by variations in melanin concentration and distribution. This suggests that sunscreens for specific ethnic groups may need different combinations of UV filters to achieve a balanced protection between UVB and UVA rays (PUBMED:19624429). Additionally, the health data related to disease and related factors vary among different ethnic groups, even within the same country or among the same ethnic group in different areas. Understanding these differences is crucial for developing individualized health indicators and targeted interventions to achieve the goal of health for all (PUBMED:28647965). Ethnic and gender differences in sun-related cognitions and behaviors have been observed, which contribute to varying skin cancer incidence and prognosis. For example, interventions for White women may focus on creating less favorable attitudes toward being tan, while interventions for Hispanic women might benefit from manipulating perceived similarity to sun-related prototypes. For White men, creating more favorable images of men who protect their skin from the sun could be effective, and for Hispanic men, increasing perceived vulnerability to skin cancer should be considered (PUBMED:38360939). Furthermore, there are differences in sun-protective behaviors among sun-sensitive individuals by race and ethnicity, with non-Hispanic whites more likely to use sunscreen and undergo skin examinations, while other racial and ethnic groups are more likely to practice sun avoidance or wear sun-protective clothing (PUBMED:36301344). Beliefs about vitamin D also vary across demographic groups, influencing sun-related behaviors such as sun exposure and sun protection (PUBMED:22851950). In conclusion, different ethnic groups do need different sun protection strategies, and these should be tailored to their specific skin types, health needs, and sun-related cognitions and behaviors.
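Side note on the survey modelling described above (PUBMED:33061512): that study fit a logistic-regression classifier to predict residents' sun-protection level and reported its five most significant factors. Below is a minimal, hypothetical sketch of that kind of analysis in Python; the column names, coding, and toy data rows are assumptions for illustration only (the original work used SPSS on 830 survey responses), and only the five predictor factors are taken from the abstract.

```python
# Minimal sketch of a logistic-regression classifier for "high" sun-protection level,
# loosely mirroring the analysis described in PUBMED:33061512.
# Column names, coding, and data rows are hypothetical; only the five factors
# (skin-aging perception, age, awareness of sun dangers, income vs. expenses,
# considering the UV index) come from the abstract.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

toy = pd.DataFrame({
    "believes_sun_causes_aging": [1, 0, 1, 1, 0, 1, 0, 1],
    "age":                       [25, 34, 41, 52, 19, 63, 28, 45],
    "aware_of_sun_dangers":      [1, 0, 1, 1, 0, 1, 1, 0],
    "income_exceeds_expenses":   [1, 1, 0, 1, 0, 1, 0, 0],
    "considers_uv_index":        [1, 0, 0, 1, 0, 1, 0, 0],
    "high_protection":           [1, 0, 0, 1, 0, 1, 0, 0],   # target label
})

X = toy.drop(columns="high_protection")
y = toy["high_protection"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("coefficients:", dict(zip(X.columns, model.coef_[0])))  # sign/size of each factor
```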
Instruction: Plasmacytoid variant urothelial bladder cancer: is it time to update the treatment paradigm? Abstracts: abstract_id: PUBMED:24954925 Plasmacytoid variant urothelial bladder cancer: is it time to update the treatment paradigm? Objectives: Plasmacytoid variant (PCV) urothelial cancer (UC) of the bladder is rare, with poor clinical outcomes. We sought to identify factors that may better inform expectations of tumor behavior and improve management options in patients with PCV UC. Materials And Methods: A retrospective analysis of the Indiana University Bladder Cancer Database between January 2008 and June 2013 was performed comparing 30 patients with PCV UC at cystectomy to 278 patients with nonvariant (NV) UC at cystectomy who underwent surgery for muscle-invasive disease. Multivariable logistic regression was used to assess precystectomy variables associated with non-organ-confined disease at cystectomy and Cox regression analysis to assess variables associated with mortality. Results: Patients with PCV UC were diagnosed with a higher stage at cystectomy (73% pT3-4 vs. 40%, P = 0.001), were more likely to have lymph node involvement (70% vs. 25%, P < 0.001), and positive surgical margins were found in 40% of patients with PCV UC vs. 10% of patients with NV UC (P < 0.001). Median overall survival and disease-specific survival were 19 and 22 months for PCV, respectively. Median overall survival and disease-specific survival had not been reached for NV at 68 months (P < 0.001). Presence of PCV UC on transurethral resection of bladder tumor was associated with non-organ-confined disease (odds ratio = 4.02; 95% CI: 1.06-15.22; P = 0.040), and PCV at cystectomy was associated with increased adjusted risk of mortality (hazard ratio = 2.1; 95% CI: 1.2-3.8; P = 0.016). Conclusions: PCV is an aggressive UC variant, predicting non-organ-confined disease and poor survival. Differentiating between non-muscle- and muscle-invasive disease in patients with PCV UC seems less important than the aggressive nature of this disease. Instead, any evidence of PCV on transurethral resection of bladder tumor may warrant aggressive therapy. abstract_id: PUBMED:35795120 Pathological complete response of plasmacytoid variant bladder cancer to pembrolizumab following genomic analysis. Introduction: Plasmacytoid variant bladder cancer is a rare variant of urothelial carcinoma that accounts for 1% of bladder cancers. Plasmacytoid variant urothelial carcinoma is characterized by an aggressive phenotype and poor clinical outcomes. Case Presentation: A 61-year-old woman presented with gross hematuria. Cystoscopy showed a 16-mm solid tumor. Transurethral resection of the bladder tumor was performed, and the pathological diagnosis was invasive plasmacytoid variant urothelial carcinoma. Although the pathological T stage was pT1, computed tomography showed right obturator lymph node swelling. Since previous reports indicate poor response to chemotherapy for this disease, clinical sequencing was performed. Based on the high tumor mutation burden revealed, pembrolizumab was administered for 4 cycles, and computed tomography showed a partial response. Robot-assisted radical cystectomy was performed, and a pathological complete response including the pelvic lymph node was observed. Conclusion: Pembrolizumab may be a treatment option for plasmacytoid variant urothelial carcinoma following genomic analysis.
abstract_id: PUBMED:36937121 Plasmacytoid Variant Urothelial Cell Carcinoma: A Case of a Histological Variant of Urinary Bladder Cancer With Aggressive Behavior. This paper presents a case report of plasmacytoid variant urothelial carcinoma (PVUC), a rare form of transitional cell carcinoma. PVUC is known for its unique clinical features, aggressive behavior, and poor survival rates. PVUC comprises less than 3% of all bladder tumors, and its diagnosis is often difficult due to its resemblance to other forms of bladder cancer. It requires a staging workup to rule out metastasis, relies heavily on immunostaining and histopathological analysis for diagnosis, and requires a multidisciplinary approach with early aggressive treatment, including cisplatin-based chemotherapy following surgery. This report highlights the importance of understanding rare variants of bladder cancer to ensure timely and accurate diagnosis and appropriate treatment planning. We report here a case of a 75-year-old male with multiple comorbidities who presented with hematuria and was diagnosed with urothelial carcinoma plasmacytoid type, which was initially treated with transurethral resection but later found to be unresectable and treated with palliative chemotherapy and radiation therapy. Eventually, the patient passed away three years after the diagnosis. abstract_id: PUBMED:35529298 Poor Outcome due to the Plasmacytoid Variant of Urothelial Carcinoma. A 72-year-old man visited our hospital due to pollakiuria and lower abdominal pain. Urinary cytology was positive, and cystoscopy revealed diffuse edematous nonpapillary tumor. We performed transurethral biopsy, and clinical stage T3 plasmacytoid variant of urothelial carcinoma (PUC) was diagnosed. Although we planned for radical cystectomy, peritoneal dissemination and lung and pelvic lymph node metastases appeared 3 weeks after the initial visit. We also planned for chemotherapy; however, the metastases rapidly progressed, and he died 7 weeks after the biopsy. PUC is rare and shows an aggressive clinical course and poor prognosis. abstract_id: PUBMED:37228940 Plasmacytoid bladder cancer: a rare case report. Plasmacytoid carcinoma of the bladder is a rare histological variant of urothelial carcinoma associated with late presentation. This pattern of disease may portend an extremely poor prognosis and significant challenges for treatment with curative intent. Case Presentation: The authors report a case of a patient with locally advanced plasmacytoid urothelial carcinoma (PUC) of the bladder. A 71-year-old man with a history of chronic obstructive pulmonary disease presented with gross hematuria. The rectal examination revealed a fixed bladder base. A computed tomography scan showed a pedunculated lesion arising from the anterior and left lateral bladder wall and extended to the perivesical fat. The patient underwent transurethral resection of the tumor. The histologic examination revealed the presence of muscle-invasive PUC of the bladder. The decision of the multidisciplinary consultation meeting was to do palliative chemotherapy. Thus, the patient could not receive systemic chemotherapy and died 6 weeks after transurethral resection of the bladder tumor. Clinical Discussion: A plasmacytoid variant of urothelial carcinoma is a rare subtype of urothelial carcinoma that has a poor prognosis with a high mortality rate. The disease is usually diagnosed at an advanced stage. 
Given the rarity of plasmacytoid bladder cancer, treatment guidelines are not clear; therefore, more aggressive treatment may be required. Conclusion: PUC of the bladder is characterized by high aggressiveness, an advanced stage at the time of diagnosis, and a poor prognosis. abstract_id: PUBMED:32633253 Plasmacytoid variant of urothelial bladder cancer. A single center experience. Introduction: Bladder cancer (BC) is the 11th most common malignancy. In terms of pathology, the vast majority of patients suffer from transitional cell carcinoma. Apart from this common type of BC, there are many morphological subtypes with different oncological characteristics. Plasmacytoid BC is a well-recognized subtype of BC with great diagnostic importance, as it usually presents at a locally advanced or even metastatic stage. Objective: The objective of this study was to evaluate our experience in diagnosing and treating patients with this rare BC subtype. Methods: A retrospective analysis of 5 patients diagnosed with plasmacytoid BC in our department during the years 2014-2016 was performed. Transurethral resection of the tumors was performed in all patients, and the pathology diagnosis of the plasmacytoid variant was based on several morphologic and immunohistochemical parameters. Staging included abdominal and thoracic CT. Results: 3 of 5 patients were diagnosed with metastatic disease. These patients were referred to the oncology department. 2 patients presented with non-metastatic BC after initial staging, and thus a radical cystectomy was performed. Follow-up of all patients was carried out and their survival was recorded. Conclusions: We concluded that, despite the fact that the plasmacytoid variant of BC is rare, it is important to take into account the pathologic and clinical features of this tumor in order to provide optimal treatment for this poor-prognosis cancer. abstract_id: PUBMED:34796255 A case report of a patient with plasmacytoid urothelial cancer with significant response to HER2-targeting therapy and enfortumab vedotin. In this case report, we present a patient with the rare plasmacytoid variant of urothelial cancer. Notable elements of his course include: complete response to neoadjuvant paclitaxel, gemcitabine, cisplatin, development of metastatic disease to the rectum, sustained disease control with dual HER2 targeting therapy, and subsequent complete response to enfortumab vedotin. Plasmacytoid urothelial cancer accounts for just 1-3% of all urothelial cancer cases and is associated with more aggressive disease, with a propensity for intra-abdominal spread and poor response to neoadjuvant therapy. Preliminary data indicate that the variant may generally have high levels of HER2 expression. We review the history of HER2 targeting in metastatic urothelial cancer, which has included single-agent use as well as combination with chemotherapy; there are ongoing biomarker-based clinical trials. Furthermore, we highlight the complete response to enfortumab vedotin. To date, this is the first report of efficacy for enfortumab vedotin in the plasmacytoid variant.
The extent and intensity of CD138 immunostaining were evaluated in 22 infiltrating UCs, 2 other infiltrating carcinomas, 15 noninvasive urothelial lesions, 3 other benign lesions, and perilesional normal tissues. CD138 immunostaining of the normal urothelial epithelium was universally diffuse and strong. In addition, all 42 cases of urinary tract lesions exhibited positive CD138 immunostaining; however, 1 of 3 plasmacytoid variants exhibited focal CD138 expression. The frequency of CD138 positivity in plasmacytoid variants may be relatively low, compared with that observed in the conventional types and other variants; thus, it is not appropriate to assume that CD138 expression in UCs is specific for plasmacytoid variants. abstract_id: PUBMED:28753857 The Impact of Plasmacytoid Variant Histology on the Survival of Patients with Urothelial Carcinoma of Bladder after Radical Cystectomy. Background: Evidence on the clinical significance of the plasmacytoid variant (PCV) in urothelial carcinoma (UC) is currently lacking. Objective: To compare clinical outcomes of patients with any PCV with those of patients with pure UC treated with radical cystectomy (RC). Design, Setting, And Participants: We identified 98 patients who had pathologically confirmed PCV UC and 1312 patients with pure UC and no variant history who underwent RC at our institution between 1995 and 2014. Outcome Measurements And Statistical Analysis: Univariable and multivariable Cox proportional hazards regression analyses were used to determine whether PCV was associated with overall survival (OS). Results And Limitations: Patients with PCV UC were more likely to have advanced tumor stage (p=0.001), positive lymph nodes (p=0.038), and receive neoadjuvant chemotherapy than those with pure UC (46% vs 22%, p<0.0001). The rate of positive soft tissue surgical margins was over five times greater in the PCV UC group compared with the pure UC group (21% vs 4.1%, respectively, p<0.0001). Median OS for the pure UC and PCV patients was 8 yr and 3.8 yr, respectively. On univariable analysis, PCV was associated with an increased risk of overall mortality (hazard ratio=1.34, 95% confidence interval: 1.02-1.78, p=0.039). However, on multivariable analysis adjusted for age, sex, neoadjuvant chemotherapy received, lymph node status, pathologic stage, and soft margin status, the association between PCV and OS was no longer significant (hazard ratio=1.06, 95% confidence interval: 0.78, 1.43, p=0.7). This retrospective study is limited by the lack of pathological reanalysis, and the impact of other concurrent mixed histology cannot be determined in this study. Conclusions: Patients with PCV features have a higher disease burden at RC compared with those with pure UC. However, PCV was not an independent predictor of survival after RC on multivariable analysis, suggesting that PCV histology should not be used as an independent prognostic factor. Patient Summary: Plasmacytoid urothelial carcinoma is a rare and aggressive form of bladder cancer. Patients with plasmacytoid urothelial carcinoma had more adverse pathologic features, but this was not associated with worse overall mortality when compared with patients with pure urothelial carcinoma. abstract_id: PUBMED:31698007 Plasmacytoid acinar adenocarcinoma of the prostate: a newly described variant of prostate cancer. A plasmacytoid variant of prostatic adenocarcinoma has not been reported to the best of our knowledge. A 54-year-old male presented with recurrent attacks of acute urinary retention.
Laboratory findings showed high creatinine and a serum prostate specific antigen of 50.7 μg/L. Magnetic Resonance Imaging showed a locally advanced tumor involving the bladder and extending to the base of the prostate, with bilateral ureterovesical junction involvement and invasion of the left seminal vesicle and left anterior mesorectal fascia as well as perirectal fat invasion. Diffuse metastases to the abdominopelvic lymph nodes were identified. Bone scintigraphy showed multiple bone metastases. Transrectal ultrasound guided biopsy of the prostate was attempted but the patient could not tolerate the procedure and the procedure was canceled. The patient then underwent transurethral resection of bladder tumor. Microscopic examination showed sheets of malignant cells with prominent plasmacytoid appearance undermining benign urothelium. The tumor cells were positive for PSA, PSAP, NKX 3.1 and Cytokeratin 8/18. The tumor cells were negative for P63, Cytokeratin 34βE12, Cytokeratin 20, Desmin, CD38, Kappa and Lambda light chains, Chromogranin, Synaptophysin, GATA 3, E-cadherin and CD45. INI1 was retained. Next generation sequencing showed an intermediate tumor mutational burden. Notably, no genomic alterations in the CDH1 gene (encoding for E-cadherin) were present. The patient showed some initial response to antiandrogen therapy, with a drop in serum PSA levels following androgen deprivation therapy. However, the patient died 6 months after diagnosis. It is critical to recognize this newly described variant and to distinguish it from plasmacytoid urothelial carcinoma. Recognition of the newly described plasmacytoid variant of adenocarcinoma of the prostate will lead to identification and reporting of more cases and a better understanding of its clinicopathologic features. Answer: Plasmacytoid variant urothelial bladder cancer (PCV UC) is a rare and aggressive form of bladder cancer that has been associated with poor clinical outcomes and a high mortality rate. The current treatment paradigm for PCV UC may need to be updated due to its distinct behavior and response to therapies compared to nonvariant urothelial cancer (NV UC). Patients with PCV UC are often diagnosed with higher stage disease at cystectomy, are more likely to have lymph node involvement, and have a higher incidence of positive surgical margins compared to patients with NV UC. Median overall survival and disease-specific survival are significantly lower for patients with PCV UC (PUBMED:24954925). This suggests that the aggressive nature of PCV UC may warrant a different treatment approach than what is typically used for NV UC. Recent case reports have shown promising results with the use of immunotherapy and targeted therapies for PCV UC. For instance, a patient with invasive PCV UC showed a pathological complete response to pembrolizumab, a PD-1 inhibitor, following genomic analysis that revealed a high tumor mutation burden (PUBMED:35795120). Another patient with metastatic PCV UC had a significant response to HER2-targeting therapy and enfortumab vedotin, a nectin-4-directed antibody-drug conjugate (PUBMED:34796255). However, the rarity of PCV UC means that treatment guidelines are not well-established, and more aggressive treatment may be required. The disease is usually diagnosed at an advanced stage, and despite some patients responding to therapies like pembrolizumab and enfortumab vedotin, the prognosis remains poor (PUBMED:37228940; PUBMED:35529298).
In conclusion, the unique clinical features and aggressive behavior of PCV UC suggest that it may be time to update the treatment paradigm for this variant. The potential benefits of immunotherapy and targeted therapies in PCV UC should be further investigated, and treatment decisions may need to be more individualized based on genomic analysis and the patient's response to therapy.
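The series above (PUBMED:24954925, PUBMED:28753857) report adjusted hazard ratios for mortality from multivariable Cox regression. The sketch below shows, under stated assumptions, how such a model is typically fit; the toy dataset and covariate names are hypothetical, and the `lifelines` package is one possible tool, not the software used in those studies.

```python
# Minimal sketch of a Cox proportional-hazards fit for overall mortality,
# of the kind reported in the plasmacytoid-variant cystectomy series above.
# The data and covariates are hypothetical; "exp(coef)" in the summary is the
# adjusted hazard ratio (compare the HR of ~2.1 reported for PCV histology).
import pandas as pd
from lifelines import CoxPHFitter

toy = pd.DataFrame({
    "months_followup": [19, 22, 68, 40, 12, 55, 30, 8, 26, 60],
    "died":            [1,  1,  0,  1,  1,  0,  0,  1,  1,  0],
    "pcv_histology":   [1,  1,  0,  0,  1,  0,  1,  1,  0,  0],  # 1 = plasmacytoid variant
    "age":             [63, 71, 58, 66, 74, 60, 69, 77, 55, 62],
})

cph = CoxPHFitter()
cph.fit(toy, duration_col="months_followup", event_col="died")
cph.print_summary()  # "exp(coef)" column = hazard ratio per covariate
```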
Instruction: Is endometrial apoptosis evidence of endometrial aging in unexplained infertility? Abstracts: abstract_id: PUBMED:16054962 Is endometrial apoptosis evidence of endometrial aging in unexplained infertility? a preliminary report. Objective: To investigate spontaneous endometrial apoptosis in women with unexplained infertility and to find out whether there is a possible relationship between endometrial apoptosis, age, and hormonal parameters. Study Design: This study was designed as a prospective, case-controlled study in a University Hospital setting. A total of 34 endometrial biopsies were obtained from 17 women with unexplained infertility and 17 fertile controls, who were admitted for tubal ligation. Endometrium was sampled on the seventh post-ovulatory day. On the same day of endometrial sampling, serum levels of FSH, LH, PRL, TSH, E2, progesterone, 17alpha-hydroxyprogesterone, testosterone and DHEA-S were determined. Endometrial glandular and stromal apoptosis were investigated by the DNA nick end labeling (TUNEL) method on each sample. The endometrial apoptotic index was calculated and correlated with age and hormonal parameters. Results: There was no difference in either endometrial glandular apoptotic index (AI) or stromal AI between the groups. However, the mean glandular AI was significantly higher than the mean stromal AI (p = 0.0001). There was a strong correlation between endometrial AI and age (r = 0.91, p = 0.02). Serum T levels were found to be significantly decreased in the unexplained infertility group (p = 0.0001). In addition, serum TSH levels were positively correlated with AI in the glandular endometrium in women with unexplained infertility (r = 0.611, p = 0.009). Conclusion: Endometrial apoptosis increases with age. Serum levels of testosterone were lower in unexplained infertility. The effect of serum TSH levels on apoptosis in the glandular epithelium of the endometrium needs further study. abstract_id: PUBMED:24592032 Endometrial spiral artery Doppler parameters in unexplained infertility patients: is endometrial perfusion an important factor in the etiopathogenesis? Objective: Uterine perfusion, particularly the endometrial blood flow, may have an important role in endometrial receptivity. In order to assess the contribution of subendometrial blood flow to the etiopathogenesis of unexplained infertility, mid-luteal (peri-implantation period) spiral artery transvaginal color Doppler parameters were measured and compared with fertile controls. Material And Methods: Forty-two consecutive patients admitted to Izmir Katip Celebi University Ataturk Training and Research Hospital, Department of Obstetrics and Gynecology, with the diagnosis of unexplained infertility after standard diagnostic work-up constituted the study group, and they were compared with a fertile control group admitted to the hospital with nonspecific gynecological complaints or for check-up in the same period. Mid-luteal transvaginal color Doppler ultrasonography was applied to each patient by the same radiologist, who was blind to the diagnosis of the particular patient, and RI (resistance index) and PI (pulsatility index) values were calculated. Results: There were no significant differences between the two groups with respect to age, body mass index, basal hormonal and mid-luteal progesterone levels (p > 0.05). For the fertile control group, mid-luteal-peri-implantation phase endometrial spiral artery mean RI values were calculated as 0.48±0.08 SD and mean PI values as 0.65±0.18 SD.
For the study group, mean RI values were calculated as 0.54±0.07 SD and mean PI values as 0.80±0.16 SD. The differences for RI (p=0.009) and PI (p=0.004) were statistically significant. Conclusion: According to Doppler parameters, unexplained infertility patients have high-impedance blood flow in the spiral arteries, which means that peri-implantation blood flow in these patients is lower than in fertile controls. These findings suggest that endometrial perfusion may have an important contribution to the etiopathogenesis of unexplained infertility. abstract_id: PUBMED:36777856 Abnormal expression of galectin-1, -3 leading to unexplained infertility by decreasing endometrial receptivity: a retrospective analysis. Objective: To explore the relationship between galectin-1, -3 and unexplained infertility and the effect on endometrial receptivity. Methods: The clinical data of 100 female patients of childbearing age attending Xingtai People's Hospital from February 2019 to February 2021 were collected retrospectively. Based on whether they had achieved a normal pregnancy, 50 infertility patients were placed into an infertility group, and 50 patients with a normal pregnancy history were placed into a normal group. The mRNA and protein levels of galectin-1, -3, endometrial wave-like activity, endometrial thickness, uterine artery pulsatility index (PI), resistance index (RI), and the end diastolic velocity (EDV) and peak systolic velocity (PSV) ratio (S/D = PSV/EDV) were compared between the two groups of patients. Results: The mRNA and protein levels of galectin-1, -3 in the infertile group were lower than those in the normal group (P < 0.05). In addition, the endometrial wave-like activity in the infertile group was greater than that in the normal group (P < 0.05). The endometrial thickness was lower, while PI, RI and S/D were higher, in the infertile group than in the normal group (P < 0.05). Conclusion: The low mRNA and protein expression of galectin-1, -3 in unexplained infertility can affect endometrial receptivity, which may be closely related to unexplained infertility. abstract_id: PUBMED:32070163 Assessment of endometrial receptivity during implantation window in women with unexplained infertility. This study aimed to assess the endometrial receptivity during the implantation window in women with unexplained infertility. A prospective study recruited 168 women with unexplained infertility and 169 fertile women. Ultrasonic parameters and biomarkers in the uterine fluid were detected. The endometrial vascularization index (VI), flow index (FI) and vascularization flow index (VFI) were significantly higher in fertile women as compared with unexplained infertile women, and the integrin αvβ3, vascular endothelial growth factor (VEGF), tumor necrosis factor alpha (TNF-α), and leukemia inhibitory factor (LIF) levels in uterine fluid were significantly higher in fertile women. The biochemical pregnancy rate, clinical pregnancy rate, and ongoing pregnancy rate in fertile women were 20.12%, 18.34%, and 17.75%, respectively, which were significantly higher compared with unexplained infertile women (7.14%, 5.36%, and 4.17%, respectively). Endometrial thickness (ET), endometrial volume (EV), VI, FI, and VFI measured by ultrasound, and the integrin αvβ3, VEGF, TNF-α, and LIF levels in uterine fluid were all significantly higher in pregnant women as compared with nonpregnant women.
The best ultrasonic indicator for predicting endometrial receptivity in women with unexplained infertility was FI (AUC = 0.894, sensitivity 93.8%, and specificity 83.1%). Integrin αvβ3 had the best predictive value for endometrial receptivity among biomarkers in the uterine fluid (AUC = 0.921, sensitivity 96.7%, and specificity 89.5%). Women with unexplained infertility present with reduced endometrial receptivity. Endometrial ultrasonic parameters detected by three-dimensional power Doppler and biomarkers in the uterine fluid may be effective indicators to predict endometrial receptivity. abstract_id: PUBMED:35635620 Effects of hysterosalpingo-contrast sonography examination on endometrial receptivity among women with unexplained infertility. Purpose: Hysterosalpingo-contrast sonography (HyCoSy) is the preferred method for evaluating fallopian tubal patency, and it is associated with improved rates of natural pregnancy among infertile patients. However, the mechanism underlying the improvement in pregnancy rates following HyCoSy remains unclear. This study aimed to investigate the effect of HyCoSy examination on endometrial receptivity as well as pregnancy rates among infertile women. Methods: This prospective study included 120 women with unexplained infertility who visited our department between June 2018 and February 2021. These patients were classified into the study group (n = 60) and the control group (n = 60) depending on their willingness to undergo three-dimensional HyCoSy in the present cycle (study group) or 6 months later (control group). Endometrial characteristics, including endometrial thickness and pattern as well as the endometrial blood flow distribution pattern, were measured twice by transvaginal Doppler ultrasonography in the preovulatory phase, before and after HyCoSy examination. Participants were followed for 6 months to observe the outcome of spontaneous conception. Results: Compared with the control group, the study group had a significantly higher cumulative pregnancy rate at 6 months after HyCoSy (21.6% [13/60] vs 5.0% [3/60], P = 0.007). More patients in the study group showed improved endometrial blood flow distribution (P = 0.021, χ2 = 7.699), but no differences in endometrial thickness and pattern were observed between the groups (P > 0.05). Conclusion: HyCoSy examination may improve endometrial perfusion and has a therapeutic effect on improving spontaneous pregnancy among women with unexplained infertility. abstract_id: PUBMED:30594169 Does endometrial scratching increase the rate of spontaneous conception in couples with unexplained infertility and a good prognosis (Hunault > 30%)? Study protocol of the SCRaTCH-OFO trial: a randomized controlled trial. Background: In the Netherlands, couples with unexplained infertility and a good prognosis to conceive spontaneously (i.e. Hunault > 30%) are advised to perform timed intercourse for at least another 6 months. If couples fail to conceive within this period, they will usually start assisted reproductive technology (ART). However, treatment of unexplained infertility by ART is empirical and can involve significant burdens. Intentional endometrial injury, also called 'endometrial scratching', has been proposed to positively affect the chance of embryo implantation in patients undergoing in vitro fertilization (IVF). It might also be beneficial for couples with unexplained infertility, as defective endometrial receptivity may play a role in these women.
The primary aim of this study is to determine whether endometrial scratching increases live birth rates in women with unexplained infertility. Method: A multicentre randomized controlled trial will be conducted in Dutch academic and non-academic hospitals starting from November 2017. A total of 792 women with unexplained infertility and a good prognosis for spontaneous conception < 12 months (Hunault > 30%) will be included, of whom half will undergo endometrial scratching in the luteal phase of the natural cycle. The women in the control group will not undergo endometrial scratching. According to Dutch guidelines, both groups will subsequently perform timed intercourse for at least 6 months. The primary endpoint is cumulative live birth rate. Secondary endpoints are clinical and ongoing pregnancy rate; miscarriage rate; biochemical pregnancy loss; multiple pregnancy rate; time to pregnancy; progression to intrauterine insemination (IUI) or IVF; pregnancy complications; complications of endometrial scratching; costs; and endometrial tissue parameters associated with reproductive success or failure. The follow-up duration is 12 months. Discussion: Several small studies show a possible beneficial effect of endometrial scratching in women with unexplained infertility trying to conceive naturally or through IUI. However, the quality of this evidence is very low, making it unclear whether these women will truly benefit from this procedure. The SCRaTCH-OFO trial aims to investigate the effect of endometrial scratching on live birth rate in women with unexplained infertility and a good prognosis for spontaneous conception < 12 months. Trial Registration: NTR6687, registered August 31st, 2017. Protocol Version: Version 2.6, November 14th, 2018. abstract_id: PUBMED:35105443 Effect of endometrial scratching on unassisted conception for unexplained infertility: a randomized controlled trial. Objective: To investigate whether endometrial scratching increases the chance of live birth in women with unexplained infertility attempting to conceive without assisted reproductive technology. Design: Randomized, placebo-controlled, participant-blind, multicenter international trial. Setting: Fertility clinics. Patient(s): Women with a diagnosis of unexplained infertility trying to conceive without assistance. Intervention(s): Participants were randomly assigned to receive an endometrial biopsy or a placebo procedure (placement of a biopsy catheter in the posterior fornix, without inserting it into the external cervical os). Both groups performed regular unprotected intercourse with the intention of conceiving over three consecutive study cycles. Main Outcome Measure(s): The primary outcome was live birth. Result(s): A total of 220 women underwent randomization. The live birth rate was 9% (10 of 113 women) in the endometrial-scratch group and 7% (7 of 107 women) in the control group (adjusted OR, 1.39; 95% CI, 0.50-4.03). There were no differences between the groups in the secondary outcomes of clinical pregnancy, viable pregnancy, ongoing pregnancy, and miscarriage. Endometrial scratching was associated with a higher pain score on a 10-point scale (adjusted mean difference, 3.07; 95% CI, 2.53-3.60). Conclusion(s): This trial did not find evidence that endometrial scratching improves the live birth rate in women with unexplained infertility trying to conceive without assistance. Clinical Trial Registration Number: Australian New Zealand Clinical Trials Registry ACTRN12614000656639.
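A quick arithmetic check of the headline result in the trial above (PUBMED:35105443): the raw counts were 10 live births out of 113 with scratching versus 7 out of 107 without, and the paper reports an adjusted OR of 1.39. The unadjusted odds ratio implied by the raw counts works out to roughly the same value; note that the published figure is covariate-adjusted, so an exact match is not guaranteed.

```python
# Unadjusted odds ratio from the raw counts in PUBMED:35105443
# (10/113 live births with endometrial scratching vs 7/107 with placebo).
# The paper's 1.39 is an adjusted OR, so this check need not match it exactly.
events_scratch, n_scratch = 10, 113
events_control, n_control = 7, 107

odds_scratch = events_scratch / (n_scratch - events_scratch)   # 10 / 103 ≈ 0.097
odds_control = events_control / (n_control - events_control)   # 7 / 100  = 0.070
print(round(odds_scratch / odds_control, 2))                    # -> 1.39
```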
abstract_id: PUBMED:35612069 Determination of Cut Off for Endometrial Thickness in Couples with Unexplained Infertility: Trustable AI. Endometrial thickness in assisted reproductive techniques is one of the essential factors in the success of pregnancy. Despite extensive studies on endometrial thickness prediction, research is still needed. We aimed to analyze the impact of endometrial thickness on the ongoing pregnancy rate in couples with unexplained infertility. A total of 729 couples with unexplained infertility were included in this study. A random forest model (RFM) and a logistic regression model (LRM) were used to predict pregnancy. Evaluation of the performance of the RFM and LRM was based on classification criteria, the ROC curve, and odds ratios for ongoing pregnancy by categorized EMT. The results showed that the RFM outperformed the LRM in IVF/ICSI and IUI treatments, obtaining the highest accuracy. We obtained a 7.7 mm cut-off point for IUI and 9.99 mm for IVF/ICSI treatment. The results showed that machine learning is a valuable tool in predicting ongoing pregnancy and is trustable via multicenter data for the two treatments. In addition, endometrial thickness was not statistically significantly different from CPR and FHR in both treatments. abstract_id: PUBMED:38482194 Mechanisms of endometrial aging: lessons from natural conceptions and assisted reproductive technology cycles. Until recently, the study of age-related decline in fertility has focused primarily on the ovary; depletion of the finite pool of oocytes and increases in meiotic errors leading to oocyte aneuploidy are well-established mechanisms by which fertility declines with advancing age. Comparatively little is known about the impact of age on endometrial function. The endometrium is a complex tissue comprised of many cell types, including epithelial, stromal, vascular, immune and stem cells. The capacity for rapid, cyclic regeneration is unique to this tissue, which undergoes repeated cycles of growth and shedding (in the absence of an embryo) in response to ovarian hormones. Furthermore, the endometrium has been shown to be capable of supporting pregnancies beyond the established boundaries of the reproductive lifespan. Despite its longevity, molecular studies have established age-related changes in individual cell populations within the endometrium. Human clinical studies have attempted to isolate the effect of aging on the endometrium by analyzing pregnancies conceived with euploid, high quality embryos. In this review, we explore the existing literature on endometrial aging and its impact on pregnancy outcomes. We begin with an overview of the principles of endometrial physiology and function. We then explore the mechanisms behind endometrial aging in its individual cellular compartments. Finally, we highlight lessons about endometrial aging gleaned from rodent and human clinical studies and propose opportunities for future study to better understand the contribution of the endometrium to age-related decline in fertility.
Their mid-luteal phase transvaginal Doppler parameters of both uterine arteries and endometrial characteristics (endometrial blood flow, thickness and volume) were investigated and analysed. P values < 0.05 were considered statistically significant. Results: The mean uterine artery pulsatility index (PI) and resistivity index (RI) of the cases (women with unexplained infertility) were significantly higher compared to the values in the fertile controls (PI = 2.81±0.61 vs 2.15±0.65; p=0.001) (RI= 0.87±0.08 vs 0.82±0.07; p=0.003). In addition, the end-diastolic volume (EDV) (6.12±4.17 vs 9.37±5.14; p=0.007) and endometrial-subendometrial blood flow (p=0.036) were significantly lower in the cases compared to the controls. Multivariate logistic analysis showed that PI was independently associated with infertile status (p=0.006). There was no significant difference in the mean PSV (48.69±10.9 vs 50.58±11.30; p=0.602), endometrial thickness (10.30±3.13 vs 10.72±3.10; p=0.544), endometrial volume (7.82±1.56 vs 8.23±1.71; p=0.323), mean age (32.28±4.062 vs 31.91±3.58 years; p=0.502), body mass index (26.15±2.71 kg/m2 vs 25.24±2.85 kg/m2; p=0.18), menstrual bleeding days (4.07±0.89 days vs 4.02±0.95 days; p=0.481), duration of menstrual cycle (28.02±1.09 days vs 27.64±1.36 days; p=0.162), smoking history (p=0.909) and alcohol intake (p=0.507) of the infertile women compared with the fertile controls, respectively. Conclusion: Women with unexplained infertility have increased uterine artery Doppler PI and RI and reduced endometrial-subendometrial perfusion compared with fertile controls. Answer: Yes, endometrial apoptosis has been suggested as evidence of endometrial aging in unexplained infertility. A study investigating spontaneous endometrial apoptosis in women with unexplained infertility found a strong correlation between endometrial apoptotic index (AI) and age (r = 0.91, p = 0.02), indicating that endometrial apoptosis increases with age (PUBMED:16054962). This suggests that endometrial aging, as evidenced by increased apoptosis, may be a factor in unexplained infertility. However, the study also noted that the effect of serum thyroid-stimulating hormone (TSH) levels on apoptosis in the glandular epithelium of the endometrium needs further investigation, indicating that other factors may also influence endometrial apoptosis and its relationship with infertility.
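Several abstracts in this record quote spiral and uterine artery Doppler indices (RI, PI, S/D). Only S/D = PSV/EDV is spelled out in the source text; the RI and PI expressions below are the standard textbook definitions, not taken from these studies, and the example velocities are hypothetical.

```python
# Standard Doppler waveform indices referenced in the abstracts above.
# S/D = PSV/EDV is stated in the source; RI and PI follow the conventional
# definitions (not quoted from the abstracts). Example values are hypothetical.
def doppler_indices(psv: float, edv: float, tamv: float) -> dict:
    """psv, edv, tamv: peak systolic, end-diastolic and time-averaged mean velocities (cm/s)."""
    return {
        "RI": (psv - edv) / psv,    # resistance (resistivity) index
        "PI": (psv - edv) / tamv,   # pulsatility index
        "S/D": psv / edv,           # systolic-to-diastolic ratio
    }

print(doppler_indices(psv=40.0, edv=18.0, tamv=26.0))
```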
Instruction: Differentiated thyroid carcinoma (DTC) in a young woman with Peutz-Jeghers syndrome: are these two conditions associated? Abstracts: abstract_id: PUBMED:15125320 Unilateral Sertoli-cell androblastoma in the ovary of a young woman. The authors present a case of a Sertoli-cell type androblastoma of the ovary. The tumor is mainly seen in young women and is mostly discovered through hormonal dysfunctions linked to it. Histopathological evaluation is indispensable for an exact diagnosis. Because of its rarity, there is little experience with the tumor, and the histological identification is also sometimes very challenging. The tumor is occasionally associated with Peutz-Jeghers syndrome, tumor of the thyroid, or goiter. Biologically it follows a low-malignant course with a fair prognosis; long-term complex patient follow-up is necessary after unilateral oophorectomy of the diseased organ. abstract_id: PUBMED:19235129 Differentiated thyroid carcinoma (DTC) in a young woman with Peutz-Jeghers syndrome: are these two conditions associated? Aims: Peutz-Jeghers Syndrome (PJS) is a rare dominantly inherited disease characterized by hamartomatous small bowel polyposis, mucocutaneous hyperpigmentation, and increased risk of cancer. Differentiated thyroid cancers (DTCs) present mainly as sporadic, but they may also have a familial component. We present a case of PJS in a Caucasian 25-year-old woman, who developed a DTC. Methods: The patient had a palpable nodule in the right side of the thyroid region, and an endocrinological evaluation, including hormonal assays, neck ultrasound (US) and fine needle aspiration (FNAB) of the nodule, was performed. Results: US confirmed a single nodular lesion in the right thyroid lobe (14 mm). Cytological analysis at FNAB revealed a pattern compatible with papillary thyroid carcinoma. The histological analysis after total thyroidectomy confirmed the diagnosis of a Hürthle cell variant of papillary thyroid carcinoma, with follicular architecture. Conclusion: Even though rare, an association between PJS and DTC is possible. In clinical practice it must be borne in mind that the wide spectrum of possible cancer diseases occurring in PJS could also include DTC, and that the latter can occur earlier in life in the PJS population and with a more aggressive histological pattern. Furthermore, in patients with PJS, US of the thyroid should be performed whenever thyroid disease is suspected at physical examination or based on the patient's medical history. Due to the lack of established data allowing a reliable estimate of the association between PJS and DTC, US of the thyroid should not be recommended as routine screening for all subjects with PJS. abstract_id: PUBMED:3549077 Genodermatoses with malignant potential. Cowden's disease, which is characterized by multiple facial papules, gingival papillomas, acral papules, and other hamartomatous lesions, represents a genodermatosis with a high incidence of breast and thyroid carcinoma. abstract_id: PUBMED:19536760 Differentiated thyroid cancer associated with intestinal polyposis syndromes: a review. Intestinal polyposis syndromes, such as familial adenomatous polyposis (FAP) and Cowden's syndrome, are often associated with extraintestinal manifestations, and while many of these manifestations are benign, malignant extraintestinal manifestations, such as differentiated thyroid cancers, do occur.
Although differentiated thyroid cancers (ie, papillary and follicular thyroid carcinomas) are associated with multiple syndromes, they are most commonly associated with intestinal polyposis syndromes. In the general population, the probability of developing thyroid cancer by age 65 years is only .5%. However, 1% to 2% of patients with FAP develop papillary thyroid carcinoma, the most common extraintestinal malignancy in patients with FAP. Also, up to 10% of patients with Cowden's syndrome will develop follicular thyroid carcinoma. The purpose of this review was to provide an overview of FAP, Cowden's syndrome, and Peutz-Jeghers syndrome, to discuss in detail the associations between intestinal polyposis syndromes and differentiated thyroid cancers, and to provide suggestions for screening and managing these diseases. abstract_id: PUBMED:10803862 A concise genetic and clinical guide to multiple endocrine neoplasias and related syndromes. Several familial neoplastic syndromes are associated with endocrine gland oncogenesis. The main ones are: multiple endocrine neoplasia type 1 (MEN 1), which affects primarily the pituitary, pancreas, and parathyroid glands; MEN 2A and MEN 2B, which involve mainly the thyroid and parathyroid glands and the adrenal medulla; familial medullary thyroid carcinoma (FMTC), which affects only the thyroid gland; and, finally, Carney complex, which affects the adrenal cortex, pituitary, thyroid gland, and the gonads. Carney complex is also associated with pigmentation abnormalities and myxoid and other neoplasms of mesenchymal origin. Thus, this syndrome also belongs to another group of genetic disorders, those associated with pigmentation defects and multiple tumors, including tumors of the endocrine glands. Peutz-Jeghers syndrome and Cowden disease are just two of these disorders that have recently been elucidated at the molecular level. von Hippel-Lindau disease is another condition that affects the pancreas and adrenal medulla and its gene is also known. The inheritance of the MENs, Carney complex, and related syndromes is autosomal dominant. Clinical recognition of these syndromes at a young age improves clinical outcome and prognosis of the various tumors and decreases associated morbidity and mortality. This review considers a wider, more inclusive view of the MEN syndromes, summarizes their clinical features and presents the newest information on their molecular elucidation. abstract_id: PUBMED:21707968 Solitary Peutz-Jeghers type hamartomatous polyps in the duodenum are not always associated with a low risk of cancer: two case reports. Introduction: A hamartomatous polyp without associated mucocutaneous pigmentation or a family history of Peutz-Jeghers Syndrome is diagnosed as a solitary Peutz-Jeghers type hamartomatous polyp. As compared with Peutz-Jeghers Syndrome, Peutz-Jeghers type hamartomatous polyps are diagnosed with a lower risk of cancer and are regarded as a different disorder. Case Presentation: In case one, we describe an 84-year-old Japanese man with a 14 mm duodenal polyp. Endoscopic mucosal resection was performed and histological examination showed findings suggestive of a hamartomatous polyp with a focus of well-differentiated adenocarcinoma. In case two, we describe a 76-year-old Japanese man who had been treated for prostate, rectal and lung cancer. Upper gastrointestinal endoscopy revealed a duodenal polyp measuring 15 mm in diameter. 
Endoscopic mucosal resection was performed, and histological examination showed findings suggestive of a hamartomatous polyp. Liver and thyroid cancers were found after the endoscopic treatment. Conclusion: Although duodenal solitary hamartomatous polyps are associated with a lower risk of cancer, four patients, including our cases, have been diagnosed with cancerous polyps. Patients with duodenal solitary hamartomatous polyps should be treated by endoscopic or surgical resection and need whole-body screening. abstract_id: PUBMED:21877933 Papillary thyroid carcinoma in Peutz-Jeghers syndrome. Background: Peutz-Jeghers syndrome (PJS) is a rare dominantly inherited disease characterized by the association of gastrointestinal hamartomatous polyposis, mucocutaneous hyperpigmentation, and increased risk of cancer at different target organs. Its occurrence with differentiated thyroid cancer, particularly papillary thyroid carcinoma (PTC), even if rare, has been described. Summary: We here present a case of PTC observed in a PJS patient and a review of the literature aiming at discussing the utility of thyroid surveillance in the management of these patients. A 22-year-old woman presenting with hyperpigmented lesions of the lips and hamartomatous polyps in the stomach, duodenum, jejunum, and ileum, leading to the suspicion of PJS, was submitted to genetic analysis. Mutation scanning of the Liver Kinase B1 (LKB1) gene identified the presence of the truncating mutation E265X, thus confirming the clinical diagnosis. Beside the endoscopic, radiologic, and echographic evaluations required by the standard surveillance guidelines, the patient had a neck ultrasound (US), which showed a 5×4×6 mm hypoechoic nodule in the right thyroid lobe. The nodule contained microcalcifications and a perinodular vascular pattern. The cytological preparations derived from US-guided fine-needle aspiration biopsy of the nodule demonstrated the presence of PTC. The patient underwent a video-assisted total thyroidectomy and the histological examination revealed a follicular variant of papillary microcarcinoma. Radioactive iodine therapy was not performed because of the small size of the lesion. The patient was started on levothyroxine therapy to keep the serum thyrotropin levels suppressed. Both the sequencing and the multiplex ligation-dependent probe amplification analysis could not identify any LKB1 mutation in the tumor specimen, and the methylation-specific polymerase chain reaction assay excluded hypermethylation of the LKB1 promoter as the mechanism of inactivation for the remaining normal allele in the tumor. Conclusions: Although other mechanisms of LKB1 silencing may be responsible for its inactivation in the thyroid cancer, we cannot rule out that the occurrence of thyroid carcinoma could be a coincidental finding in this patient. However, the case here presented suggests that US of the thyroid could possibly become an integral part of the evaluation and the follow-up program adopted for PJS patients. abstract_id: PUBMED:4657623 Peutz-Jeghers syndrome. Report of two cases from Lebanon, associated with endocrine abnormalities. N/A abstract_id: PUBMED:11407658 Clinical genetics of multiple endocrine neoplasias, Carney complex and related syndromes. The list of multiple endocrine neoplasias (MENs) that have been molecularly elucidated is growing with the most recent addition of Carney complex. 
MEN type 1 (MEN 1), which affects primarily the pituitary, pancreas, and parathyroid glands, is caused by mutations in the menin gene. MEN type 2 (MEN 2) syndromes, MEN 2A and MEN 2B, which affect mainly the thyroid and parathyroid glands and the adrenal medulla, and familial medullary thyroid carcinoma (FMTC), are caused by mutations in the RET oncogene. Finally, Carney complex, which affects the adrenal cortex, the pituitary and thyroid glands, and the gonads, is caused by mutations in the gene that codes for regulatory subunit type 1A of protein kinase A (PKA) (PRKAR1A) in at least half of the known patients. Molecular defects have also been identified in syndromes related to the MENs, like Peutz-Jeghers syndrome (PJS) (the STK11/LKB1 gene), and Cowden (CD; the PTEN gene) and von Hippel-Lindau disease (VHLD; the VHL gene). Although recognition of these syndromes at a young age generally improves prognosis, the need for molecular testing in the diagnostic evaluation of the MENs is less clear. This review presents the newest information on the clinical and molecular genetics of the MENs (MEN 1, MEN 2, and Carney complex), including recommendations for genetic screening, and discusses briefly the related syndromes PJS, CD and VHLD. abstract_id: PUBMED:22654876 Familial follicular cell-derived thyroid carcinoma. Follicular cell-derived well-differentiated thyroid cancers, papillary (PTC) and follicular thyroid carcinomas, comprise 95% of all thyroid malignancies. Familial follicular cell-derived well-differentiated thyroid cancers contribute 5% of cases. Such familial follicular cell-derived carcinomas or non-medullary thyroid carcinomas (NMTC) are divided into two clinical-pathological groups. The syndromic-associated group is composed of predominantly non-thyroidal tumors and includes Pendred syndrome, Werner syndrome, Carney complex (CNC) type 1, PTEN-hamartoma tumor syndrome (PHTS; Cowden disease), and familial adenomatous polyposis (FAP)/Gardner syndrome. Other conditions with less established links to the development of follicular cell-derived tumors include ataxia-telangiectasia syndrome, McCune-Albright syndrome, and Peutz-Jeghers syndrome. The final group encompasses syndromes typified by NMTC, as well as pure familial (f) PTC with or without oxyphilia, fPTC with multinodular goiter, and fPTC with papillary renal cell carcinoma. This heterogeneous group of diseases does not have the established genotype-phenotype correlations known in the familial C-cell-derived tumors or medullary thyroid carcinomas (MTC). Clinicians should be able to recognize when a patient presenting with thyroid cancer is likely to have an additional underlying familial syndrome, and pathologists should be alert to morphological findings that warrant recommending molecular genetic evaluation. This review discusses the clinical and pathological findings of patients with familial PTC, such as FAP, CNC, Werner syndrome, and Pendred syndrome, and the heterogeneous group of familial PTC. Answer: Differentiated thyroid carcinoma (DTC) and Peutz-Jeghers syndrome (PJS) have been reported to be associated in some cases. PJS is a rare dominantly inherited disease characterized by hamartomatous small bowel polyposis, mucocutaneous hyperpigmentation, and an increased risk of cancer (PUBMED:19235129; PUBMED:21707968).
Although DTCs are primarily sporadic, they may also have a familial component, and their association with PJS, while rare, has been described (PUBMED:19235129; PUBMED:21707968). In one reported case, a 25-year-old woman with PJS developed a DTC, which was confirmed by cytological analysis and histological analysis after total thyroidectomy (PUBMED:19235129). The occurrence of DTC in PJS patients can happen earlier in life and may present with a more aggressive histological pattern (PUBMED:19235129). The association between PJS and DTC is not well established, and routine screening for thyroid cancer in all PJS patients is not currently recommended. However, thyroid ultrasound should be performed when thyroid disease is suspected based on physical examination or patient's medical history (PUBMED:19235129). The review of differentiated thyroid cancer associated with intestinal polyposis syndromes also indicates that while differentiated thyroid cancers are associated with multiple syndromes, they are most commonly associated with intestinal polyposis syndromes such as FAP and Cowden's syndrome, and to a lesser extent, PJS (PUBMED:19536760). The review suggests that the wide spectrum of possible cancer diseases occurring in PJS could include DTC (PUBMED:19536760). In conclusion, while the association between PJS and DTC is possible and has been observed, it is considered rare, and more data is needed to fully understand the relationship between these two conditions. Nonetheless, clinicians should be aware of the potential for DTC in patients with PJS and consider appropriate screening measures when indicated (PUBMED:19235129; PUBMED:19536760; PUBMED:21707968).
Instruction: Do paediatricians provide brief intervention for adolescents who smoke? Abstracts: abstract_id: PUBMED:17535177 Do paediatricians provide brief intervention for adolescents who smoke? Aims: This study aimed to describe the self-reported practice of paediatricians in brief interventions for adolescents who smoke. We also aimed to compare practice with confidence, skills and knowledge of brief intervention and nicotine replacement therapy after a targeted training programme. Methods: Medical staff at The Children's Hospital at Westmead completed a questionnaire of clinical practice and confidence in brief intervention for smoking cessation. Data were analysed comparing self-reported practice with confidence, skill and knowledge of brief interventions, based on Fiore's 5A's approach (Ask about smoking at every opportunity, Assess willingness to quit, Advise patients to quit smoking, Assist quit attempts and Arrange follow up). Results: Fifty-seven clinicians completed questionnaires; 55 (96%) recognised the importance of asking adolescents about smoking. Thirty-one (54%) identified adolescent smokers all or most of the time, increasing to 50 (88%) if their presenting condition was associated with smoking. Twenty-five (44%) clinicians assessed the stage of change, 33 (58%) advised the adolescent to quit smoking, nine (16%) assisted quit attempts and 10 (17.5%) arranged follow up. Clinicians more confident in brief intervention skills, motivational interviewing and relapse prevention were more likely to use the 5A's (P<0.05). Training increased clinicians' confidence in brief intervention skills and knowledge of nicotine replacement therapy (P<0.01); however, there was no statistically significant change in clinical practice 1 month post-training. Conclusion: Training paediatricians in brief intervention skills, motivational interviewing and relapse prevention can increase the use of 5A's brief intervention in clinical practice, potentially increasing quit attempts in adolescents who smoke. abstract_id: PUBMED:31216867 Adolescents' experiences of brief psychosocial intervention for depression: An interpretative phenomenological analysis of good-outcome cases. Background: Brief psychosocial intervention (BPI) is a treatment for adolescent depression that has recently demonstrated clinical effectiveness in a controlled trial. The aim of this study is to explore experiences of adolescents with major depression receiving BPI treatment in the context of good treatment outcomes. Method: A subsample of five interviews from a larger study of adolescents' experiences of BPI was purposively selected, focusing on good-outcome cases. Interviews were analysed using interpretative phenomenological analysis to provide a richer understanding of participants' experiences of overcoming depression in the BPI group. Results: Four central themes were identified: 'Being heard and feeling safe', 'Collaborative working enhancing therapy', 'Gaining a different perspective on one's self and relationships' and 'A positive therapeutic relationship'. Conclusion: BPI is a novel approach with promising clinical effectiveness. Utilising adolescents' experiences has revealed potential psychological mechanisms of good treatment response to BPI. Overall implications for clinical practice with depressed adolescents are discussed. abstract_id: PUBMED:23681934 Brief intervention to preteens and adolescents to create smoke-free homes and cotinine results: a randomized trial.
Background: Little research has focused on brief and practical strategies for addressing environmental tobacco smoke exposure through interventions focused explicitly on creating a smoke-free home. Methods: We used a two-group (intervention and control groups) repeated-measures randomized controlled trial design. Families were randomized to the intervention (n = 176) or control (n = 176) condition after the baseline interview, with outcome assessments for reported and urine cotinine measures at 2 (post-intervention), 6 (follow-up) and 12 (follow-up) months. Results: Baseline urinary cotinine levels of both groups were not statistically significantly different (P > 0.05); however, post-intervention urinary cotinine levels were significantly different at 2, 6 and 12 months after the start of the study (P < 0.001). Conclusion: As a physician-based brief intervention, our intervention was effective. Clinical providers might offer feedback and brief interventions to preteens and adolescents. Because of the ease of intervention delivery, this intervention has the potential to have significant impact if widely disseminated. abstract_id: PUBMED:32812888 A Mobile Health Intervention for Adolescents Exposed to Secondhand Smoke: Pilot Feasibility and Efficacy Study. Background: Secondhand smoke (SHS) exposure in children and adolescents has adverse health effects. For adolescents of lower socioeconomic status (SES), exposure is widespread, evidenced in the measurement of urinary cotinine, a major metabolite of nicotine. Direct intervention with exposed children has been proposed as a novel method, yet there is minimal evidence of its efficacy. Combining this approach with a mobile health (mHealth) intervention may be more time- and cost-effective and feasible for adolescent populations. Objective: In this pilot study, we assessed the feasibility and preliminary evidence of efficacy of a 30-day text message-based mHealth intervention targeted at reducing SHS exposure in adolescent populations of low SES. Methods: For the study, 14 nonsmoking and nonvaping participants between the ages of 12 and 21 years exposed to SHS were enrolled. The intervention consisted of a daily text message sent to the participants over the course of a month. Text message types included facts and information about SHS, behavioral methods for SHS avoidance, or true-or-false questions. Participants were asked to respond to each message within 24 hours as confirmation of receipt. Feasibility outcomes included completion of the 30-day intervention, receiving and responding to text messages, and feedback on the messages. Efficacy outcomes included a reduction in urinary cotinine, accuracy of true-or-false responses, and participants' perceptions of effectiveness. Results: Of the 14 participants that were enrolled, 13 completed the intervention. Though not required, all participants had their own cell phones with unlimited text messaging plans. Of the total number of text messages sent to the 13 completers, 91% (372/407) of them received on-time responses. Participant feedback was generally positive, with most requesting more informational and true-or-false questions. In terms of efficacy, 54% (6/11) of participants reduced their cotinine levels (however, the change for the group overall was not statistically significant; P=.33), and 45% (5/11) of participants increased their cotinine levels. Of the total number of true-or-false questions sent across all completers, 77% (56/73) were answered correctly.
Participants' ratings of message effectiveness averaged 85 on a scale of 100. Conclusions: In this pilot study, the intervention was feasible as the majority of participants had access to a cell phone, completed the study, and engaged by responding to the messages. The efficacy of the study requires further replication, as only half of the participants reduced their cotinine levels. However, participants answered the majority of true-or-false questions accurately and reported that the messages were helpful. abstract_id: PUBMED:25300577 Brief alcohol interventions for adolescents and young adults: a systematic review and meta-analysis. This study reports findings from a meta-analysis summarizing the effectiveness of brief alcohol interventions for adolescents (age 11-18) and young adults (age 19-30). We identified 185 eligible study samples using a comprehensive literature search and synthesized findings using random-effects meta-analyses with robust standard errors. Overall, brief alcohol interventions led to significant reductions in alcohol consumption and alcohol-related problems among adolescents (g = 0.27 and g = 0.19) and young adults (g = 0.17 and g = 0.11). These effects persisted for up to 1 year after intervention and did not vary across participant demographics, intervention length, or intervention format. However, certain intervention modalities (e.g., motivational interviewing) and components (e.g., decisional balance, goal-setting exercises) were associated with larger effects. We conclude that brief alcohol interventions yield beneficial effects on alcohol-related outcomes for adolescents and young adults that are modest but potentially worthwhile given their brevity and low cost. abstract_id: PUBMED:38490335 Identifying brief intervention factors to improve cannabis related outcomes in adolescents and young adults: A systematic review of sample characteristics and intervention components. Introduction: Prior systematic and meta-analytic reviews observed mixed evidence for the efficacy of cannabis brief interventions (BIs). Inconsistent support for cannabis BIs may be the result of intersecting methodological factors, including intervention structure and content, participant eligibility criteria, and outcome assessment measures. The current systematic review of cannabis BI studies narratively synthesizes these data to guide intervention development decision-making in future cannabis BI studies (PROSPERO CRD42022285990). Methods: We searched PubMed/MEDLINE, PsycINFO, and CINAHL databases in January 2022 and again in June 2023 to capture newly published studies. Studies were included if they were a randomized trial, enrolled adolescents (13-17) and/or young adults (18-30), specified cannabis use and/or problems inclusion criteria, and evaluated a cannabis BI (defined as ≤4 sessions). We extracted and synthesized data on intervention characteristics (e.g., components, length/duration, modality), cannabis inclusion criteria and recruitment setting, baseline cannabis use descriptives and treatment-seeking status, and outcome assessment measures to discern if/how they may intersect to determine intervention efficacy. The Cochrane Risk of Bias Tool 2 assessed study quality. Results: Our search resulted in a final sample of 25 study records including 4094 participants. Recruitment setting seemed to provide an influential backdrop for how well inclusion criteria determined baseline cannabis use level, as well as for the type/length of the BI evaluated. 
Motivational interviewing (MI) and personalized feedback (PF) were the most frequently used BI components overall; however, some differences were observed in the proportion of BIs with reported intervention effects using MI vs. PF. Frequency of use days was the most commonly used outcome measure, although this may not be the most sensitive measure for assessing cannabis BI efficacy. Conclusions: Our systematic review indicates that cannabis BI studies require greater precision in their design, giving special attention to matching the content and structure of the BI to the needs of the target population and selecting outcomes commensurate to the goals of the BI and the target population to more accurately reflect the efficacy of the BI. However, consistent with prior reviews, all included studies demonstrated at least some concerns for risk of bias, and most were at high risk. abstract_id: PUBMED:30243410 Feasibility, acceptability, and preliminary effects of a brief alcohol intervention for suicidal adolescents in inpatient psychiatric treatment. Alcohol use, both short-term intoxication and longer-term use, is a notable risk factor for suicide. Despite the strong relationship between alcohol use and suicidal thoughts and behaviors, providers typically treat these two problems independently. In particular, acute psychiatric care hospitalizations for adolescents are typically brief, and many only cursorily address alcohol use. Integrating a brief motivational enhancement intervention for alcohol use into an inpatient psychiatric hospitalization treatment protocol has the potential to enhance motivation to stop or reduce drinking if adolescents can more fully understand how it increases risk for suicidal behavior. This study tested the feasibility, acceptability, and preliminary effects of the Alcohol and Suicide Intervention for Suicidal Teens (ASIST), a brief motivational enhancement intervention targeting alcohol use and suicidal thoughts and behaviors for suicidal adolescents receiving inpatient psychiatric treatment. Results from a randomized pilot trial of ASIST (N = 50) revealed that the intervention was both feasible and acceptable, with 92% of those in the ASIST condition reporting that the intervention helped them to understand how their alcohol use is related to their suicidal thoughts and behaviors. Study findings suggest a larger randomized controlled trial may be warranted to test the effectiveness of ASIST with psychiatrically hospitalized adolescents. abstract_id: PUBMED:24612184 Paediatricians' perceptions of a potential online psychosocial intervention for children with recurrent abdominal pain. Aims: To evaluate paediatricians' perceived effectiveness of an online psychosocial intervention for children with recurrent abdominal pain (RAP). Also, to explore which elements of this intervention they would consider necessary when applied in the clinical context. Methods: A total of 131 paediatricians affiliated to Catalan and Balearic paediatric societies completed an online survey about how effective they would perceive an online psychosocial intervention for RAP (for reducing pain intensity, reducing disability and preventing chronicity) and how this intervention should be carried out. They were asked about the perceived effectiveness of the standard medical treatments they routinely applied for RAP and also their opinion of face-to-face psychosocial interventions. 
Results: A face-to-face psychosocial intervention was considered better (to reduce pain intensity, reduce disability and prevent chronic pain) than an online psychological intervention and the standard medical treatment. Online and face-to-face psychosocial interventions are considered equally useful for children with mild disability, but a face-to-face psychosocial intervention is considered better for those with moderate and severe levels of disability. Paediatricians considered that an online psychosocial intervention for children with RAP should be simple and consistent; it should provide easy access for users; and its interface should be easy to use and attractive. Conclusions: Paediatricians show a positive attitude towards a potential online psychosocial intervention for children and adolescents with RAP. However, they do not use the Internet for offering health care, and they would prefer a face-to-face psychosocial intervention. abstract_id: PUBMED:27595811 Influence of Counsellor and Intervention Variables on Motivation to Change Following a Brief Motivational Intervention to Reduce Risky Alcohol Use. Brief interventions are recommended for prevention and early intervention of risky alcohol use. However, evidence of their effectiveness, in particular for children and adolescents, is heterogeneous. Analysis of counsellor and intervention variables may provide insights into mechanisms of action in brief interventions and thereby contribute to enhanced effectiveness. We analyzed data from N = 141 children and adolescents who were treated for acute alcohol intoxication in the emergency department. Study participants received a brief motivational intervention to reduce risky alcohol use during hospitalization. We applied multiple regression analysis to examine counsellor variables (empathy, affirmation, competence, congruence) and intervention variables (readiness and confidence ruler, decisional balance, goal agreement) as predictors of motivation to change. Higher scores on the basic therapeutic skill "positive affirmation" (R2 = 7.1%; p < .01), finishing the intervention with a written goal agreement (R2 = 2.9%; p < .05) and younger age were associated with greater readiness to change (R2 = 10.2%; p < .01). Therefore, a special focus should be put on the counsellor skill "positive affirmation" when training new counsellors. Results also indicate that younger patients respond more strongly to a brief intervention in this context. abstract_id: PUBMED:25354927 Feasibility assessment of a brief, web-based behavioral activation intervention for adolescents with depressed mood. Objective: Adolescent depression is a major public health concern. Efficacious interventions exist, but are underutilized. Novel approaches to improving access are therefore a top priority. Web-based approaches offer a viable treatment delivery solution; this approach may reach adolescents who might not otherwise receive formal treatment. Behavioral activation (BA) approaches have had success in treatment of depressive symptoms in youth. The purpose of this article is to: (1) describe the development process of a web-based, behavioral activation intervention for adolescents; (2) summarize the preliminary feasibility data; and (3) discuss the benefits and challenges associated with development and evaluation of adolescent self-help resources.
Methods: The current study is part of a larger NIMH funded study focusing on the development and evaluation of Bounce Back Now (BBN), an evidence-informed, web resource for disaster-affected adolescents and their families. This study is specifically on the development of the BA component of the mood module of BBN, which was evaluated more extensively than other components. We present data from a formal usability evaluation conducted with 24 adolescents, and preliminary usage data collected from 2,000 disaster affected adolescents recruited from the tornado-affected coordinates in Alabama and Joplin, MO. Results: Preliminary data supported the feasibility of this approach: qualitative data with the clinic-based sample revealed favorable reactions to the intervention, and preliminary data from the large ongoing randomized controlled trial have indicated moderate levels of access. Conclusions: Brief, web-based approaches may offer a promising alternative to address access barriers for adolescents with depressed mood. Answer: Yes, paediatricians do provide brief interventions for adolescents who smoke. A study aimed at describing the self-reported practice of paediatricians in brief interventions for adolescent smokers found that the majority of clinicians recognized the importance of asking adolescents about smoking. Specifically, 96% of the clinicians in the study acknowledged the importance of this practice. However, the actual implementation varied, with 54% identifying adolescent smokers all or most of the time, and this increased to 88% if the presenting condition was associated with smoking. The study also reported that 44% of clinicians assessed the stage of change, 58% advised the adolescent to quit smoking, 16% assisted quit attempts, and 17.5% arranged follow-up. The study concluded that training paediatricians in brief intervention skills, motivational interviewing, and relapse prevention can increase the use of the 5A's brief intervention in clinical practice, potentially increasing quit attempts in adolescents who smoke (PUBMED:17535177).
Instruction: Perioperative risk of bradyarrhythmias in patients with asymptomatic chronic bifascicular block or left bundle branch block: does an additional first-degree atrioventricular block make any difference? Abstracts: abstract_id: PUBMED:9523812 Perioperative risk of bradyarrhythmias in patients with asymptomatic chronic bifascicular block or left bundle branch block: does an additional first-degree atrioventricular block make any difference? Background: The incidence of perioperative bradyarrhythmias in patients with bifascicular or left bundle branch block (LBBB) and the influence of an additional first-degree atrioventricular (A-V) block have not been evaluated with 24-h Holter electrocardiographic monitoring. Therefore, the authors assessed the rate of block progression and bradyarrhythmia in these patients. Methods: Patients (n = 106) with asymptomatic bifascicular block or LBBB with or without an additional first-degree A-V block scheduled for surgery under general or regional anesthesia were enrolled prospectively. Three patients were excluded. Of the 103 remaining, 56 had a normal P-R interval and 47 had a prolonged one. Holter monitoring (CM2, CM5) was applied to each patient just before induction of anesthesia and was performed for 24 h. The primary endpoint of the study was the occurrence of block progression. As secondary endpoints, bradycardias < 40 beats/min with hemodynamic compromise (systolic blood pressure < 90 mmHg) or asystoles > 5 s were defined. Results: Block progression to second-degree A-V block and consecutive cardiac arrest occurred in one case of LBBB without a prolonged P-R interval. Severe bradyarrhythmias with hypotension developed in another eight patients: asystoles > 5 s occurred in two cases, and six patients had bradycardias < 40/min. Pharmacotherapy was successful in these eight patients. There was no significant difference in severe bradyarrhythmias associated with hemodynamic compromise between patients with and without P-R prolongation (P = 1.00). Conclusions: In patients with chronic bifascicular block or LBBB, perioperative progression to complete heart block is rare. However, the rate of bradyarrhythmias with hemodynamic compromise proved to be relevant. Because an additional first-degree A-V block did not increase the incidence of severe bradyarrhythmias and pharmacotherapy by itself was successful in nearly all cases, routine prophylactic insertion of a temporary pacemaker in such patients should be questioned. abstract_id: PUBMED:10456813 Perioperative transcutaneous pacemaker in patients with chronic bifascicular block or left bundle branch block and additional first-degree atrioventricular block. Background: Complete heart block is dreaded perioperatively in patients with chronic bifascicular or left bundle branch block (LBBB) and additional first-degree A-V block. Our aim was to investigate the necessity as well as the efficacy and safety of transcutaneous pacing in the perioperative setting. Methods: Thirty-nine consecutive patients with asymptomatic chronic bifascicular block or LBBB and prolongation of the P-R interval scheduled to undergo surgery under anesthesia were prospectively enrolled in the study. Preoperatively, a transcutaneous pacemaker (PACE 500 D, Osypka Co.) was applied; its efficacy was checked with intra-arterial blood pressure measurement; the pain level was recorded. Additionally, 24-h Holter monitoring (CM2, CM5) was applied.
Occurrences of a block progression or a bradycardia of <40 beats/min with hemodynamic impairment were the defined end points. Results: Thirty-seven of the 39 patients (95%) could be successfully stimulated with a median current strength of 70 mA; however, 33 of the 39 patients felt moderate to severe pain. There was no perioperative block progression. Three cases of bradycardia of <40 beats/min with a critical drop in blood pressure occurred, but these patients were successfully treated with drug therapy without pacemaker stimulation. Conclusion: The perioperative application and testing of the pacemaker was safe and could be performed successfully in nearly all patients. However, we do not consider routine prophylactic transcutaneous placement in patients with chronic bifascicular block or LBBB and additional first-degree A-V block justified. Nevertheless, appropriate drugs and temporary pacemaker equipment should be easily accessible. abstract_id: PUBMED:11921706 Perioperative temporary pacemaker therapy Introduction: Numerous factors can precipitate progression of incomplete heart block to complete heart block in the perioperative period. While there is consensus on the indications for acute temporary pacing, there are no widely accepted guidelines for elective perioperative temporary pacing. The indications are clear in cases of second- or third-degree atrio-ventricular block and bradycardias or bradyarrhythmias with a frequency of less than 50/min not responding to atropine, but there is much debate in cases of first-degree atrio-ventricular block with bifascicular or left bundle branch block. Furthermore, during the operation, bradyarrhythmia or complete atrio-ventricular block not responding to atropine can develop without any sign on the previous resting ECG. Methods And Results: The authors review the different methods of pacing (transvenous, transcutaneous, transoesophageal) and summarize the advantages and disadvantages of each method, the complications and their prevention. The patient's condition, concomitant medical treatment, the planned procedure and the staff's experience in introducing the pacemaker electrode must be considered in questionable cases. Conclusions: The transcutaneous method is easily performed and does not require much experience. It must be available in every operating theatre and, in questionable cases, may be sufficient until a transvenous electrode is introduced, provided the operative field makes electrode placement possible. abstract_id: PUBMED:29033310 Relation of elevated serum uric acid levels to first-degree heart block and other cardiac conduction defects in hospitalized patients with type 2 diabetes. Aims: Several studies have reported that moderately elevated serum uric acid levels are associated with an increased risk of tachyarrhythmias (mainly atrial fibrillation) in patients with and without type 2 diabetes mellitus (T2DM). It is currently unknown whether an association also exists between elevated serum uric acid levels and cardiac conduction defects in patients with T2DM. Methods: We retrospectively analyzed a hospital-based sample of 967 patients with T2DM discharged from our Division of Endocrinology over the years 2007-2014. Standard electrocardiograms were performed on all patients and were interpreted by expert cardiologists.
Results: Overall, 267 (27.6%) patients had some type of conduction defect on electrocardiograms (defined as at least one block among first-degree atrio-ventricular block, second-degree block, third-degree block, left bundle branch block, right bundle branch block, left anterior hemi-block or left posterior hemi-block). Patients in the 3rd serum uric acid tertile had a higher prevalence of any cardiac conduction defects than those in the 2nd or 1st tertile, respectively (35.8% vs. 25.0% vs. 22.6%; p<0.0001). Elevated serum uric acid levels were associated with a nearly twofold increased risk of cardiac conduction defects after adjustment for age, sex, hemoglobin A1c, diabetes duration, metabolic syndrome, chronic kidney disease, chronic obstructive pulmonary disease, ischemic heart disease, valvular heart disease and medication use (adjusted odds ratio 1.84, 95% confidence interval 1.2-2.9; p=0.009). Conclusions: Moderately elevated serum uric acid levels are associated with an increased prevalence of any cardiac conduction defects in hospitalized patients with T2DM, independent of multiple risk factors and potential confounding variables. abstract_id: PUBMED:8578750 Symptomatic bradycardia with amiodarone in patients with pre-existing conduction disorders Amiodarone has been shown to improve survival in patients with impaired left ventricular function and ventricular tachyarrhythmias. The clinical applicability, however, is limited by numerous, often serious, adverse effects. Seventy patients receiving treatment with amiodarone for ventricular tachyarrhythmias were investigated regarding the development of symptomatic bradycardia. Group 1 consisted of 25 patients with preexisting conduction disorders: 5 had first-degree heart block, 10 had right bundle branch block, and 7 had left bundle branch block on electrocardiographic examination. The 3 remaining patients showed asymptomatic sinus node dysfunction, defined as sinus arrest or sinuatrial block. Amiodarone caused symptomatic bradycardia in 6 of the group 1 patients (second- to third-degree AV block in 3 patients with preexisting first-degree AV block or right bundle branch block; intermittent sinus arrest or SA block in the 3 cases with previous sinus node dysfunction). Forty-five patients had no conduction disorders before the administration of amiodarone (group 2). None of these patients developed symptomatic bradycardia. The difference from group 1 was significant (p < 0.0005). Thus, patients with preexisting conduction disorders have a substantial risk of developing symptomatic bradycardia under amiodarone therapy, with an incidence of 24% among the group 1 patients in our cohort. abstract_id: PUBMED:12545300 Predicting the risk of atrioventricular conduction lesions after percutaneous septal ablation for obstructive hypertrophic cardiomyopathy Background And Introduction: Damage to the AV conduction system is a frequent complication of percutaneous septal ablation (PTSMA) that needs early and reliable identification of those patients (pts.) at risk for complete heart block (CHB) and subsequent pacemaker implantation. Methods And Results: In the first 39 pts. who underwent PTSMA in 1996, AV conduction recovery needed up to 11 days. One pt. suffered from unexpected CHB after 9 days. Seven pts. who needed a DDD pacemaker (DDD-PM) were compared to those without conduction disturbances. A score was established which identified all DDD-PM candidates retrospectively if they presented with >12 score points.
In the following 137 consecutive pts. treated in 1997 and 1998, this score was applied prospectively, and again correctly identified all candidates for a DDD-PM. In addition, a low-risk group was identified with <8 score points. From 1999 on, the score was applied in routine clinical decision-making in 120 consecutive pts. with respect to DDD-PM implantation. All low-risk pts. (<8 points) remained free from bradycardias, while 2/54 pts. (4%) of the intermediate-risk group and 20/23 pts. (87%) of the high-risk group had to undergo DDD-PM implantation. Pts. with a first-degree AV block or those with a right bundle branch block at baseline had no excess risk, while 50% of the pts. with a left bundle branch block (LBBB) needed a DDD-PM. Conclusions: Based on pre-interventional data and careful monitoring of the first 48 hours after PTSMA, identification of pts. at risk for CHB and subsequent DDD-PM implantation seems to be possible. Pts. with a score <8 seem to be at low risk, and those with >12 points at high risk. In the remaining cases, watchful waiting with prolonged monitoring may allow AV conduction to recover, thus reducing the number of unnecessary DDD-PM implantations. In cases with LBBB at baseline, however, implantation of a DDD-PM should be considered first-line therapy. abstract_id: PUBMED:22733983 Predictors of permanent pacemaker implantation after Medtronic CoreValve bioprosthesis implantation. Aims: High-grade conduction disturbances requiring permanent pacemaker (PPM) implantation occur in up to 40% of patients following transcatheter aortic valve implantation (TAVI). The aim of this study was to identify pre-operative risk factors for PPM implantation after TAVI with the Medtronic CoreValve prosthesis (CVP). Methods And Results: We retrospectively analysed 109 patients following transfemoral CVP implantation performed between 2008 and 2009 at the Leipzig Heart Center. Patients who had an indwelling PPM at the time of TAVI (n = 21) were excluded, leaving 88 patients for analysis. Mean age was 80.3 ± 6.6 years and the logistic EuroScore predicted risk of mortality was 23.3 ± 12.1%. A total of 32 patients (36%) underwent PPM implantation post-TAVI during the same hospital admission. A total of 27/88 (31%) had evidence of pre-operative abnormal conduction, including first-degree AV block and left bundle branch block. Statistically significant risk factors for the need for post-operative PPM were patient age >75 years [P = 0.02, odds ratio (OR) 4.6], pre-operative heart rate <65 beats per minute (b.p.m.; P = 0.04, OR 2.9), CVP oversizing >4 mm (P = 0.03, OR 2.8), CVP prosthesis >26 mm (OR 2.2), atrial fibrillation (P = 0.001, OR 5.2), and ventricular rate <65 b.p.m. on the first post-operative day (P = 0.137, OR 6.0). Conclusion: PPM implantation occurs frequently after transfemoral TAVI with the CVP. Older age, chronic atrial fibrillation, pre-operative bradycardia, and larger or significantly oversized prostheses were independent risk factors for PPM implantation following TAVI with the CVP. abstract_id: PUBMED:32682674 Prolonged Continuous Electrocardiographic Monitoring Prior to Transcatheter Aortic Valve Replacement: The PARE Study.
Objectives: This study sought to determine, using continuous electrocardiographic monitoring (CEM) pre-transcatheter aortic valve replacement (TAVR), the incidence and type of unknown pre-existing arrhythmic events (AEs) in TAVR candidates, and to evaluate the occurrence and impact of therapeutic changes secondary to the detection of AEs pre-TAVR. Background: Scarce data exist on the arrhythmic burden of TAVR candidates (pre-procedure). Methods: This was a prospective study including 106 patients with severe aortic stenosis and no prior permanent pacemaker screened for TAVR. A prolonged (1-week) CEM was implanted within the 3 months pre-TAVR. Following heart team evaluation, 90 patients underwent elective TAVR. Results: New AEs were detected by CEM in 51 (48.1%) patients, leading to a treatment change in 14 of 51 (27.5%) patients. Atrial fibrillation or tachycardia was detected in 8 of 79 (10.1%) patients without known atrial fibrillation or tachycardia, and nonsustained ventricular arrhythmias were detected in 31 (29.2%) patients. Significant bradyarrhythmias were observed in 22 (20.8%) patients, leading to treatment change and permanent pacemaker in 8 of 22 (36.4%) and 4 of 22 (18.2%) patients, respectively. The detection of bradyarrhythmias increased up to 30% and 47% among those patients with pre-existing first-degree atrioventricular block and right bundle branch block, respectively. Chronic renal failure, higher valve calcification, and left ventricular dysfunction determined (or tended to determine) an increased risk of AEs pre-TAVR (p = 0.028, 0.052, and 0.069, respectively). New-onset AEs post-TAVR occurred in 22.1% of patients, and CEM pre-TAVR allowed early arrhythmia diagnosis in one-third of them. Conclusions: Prolonged CEM in TAVR candidates allowed identification of previously unknown AEs in nearly one-half of the patients, leading to prompt therapeutic measures (pre-TAVR) in about one-fourth of them. Pre-existing conduction disturbances (particularly right bundle branch block) and chronic renal failure were associated with a higher burden of AEs. abstract_id: PUBMED:17630564 Electrocardiographic changes in HIV-infected, drug-experienced patients being treated with atazanavir. The QRS interval of 56 out of 75 (74.7%) HIV-infected, drug-experienced patients (66.7% men) increased during treatment with boosted or unboosted atazanavir by a median of 5 ms (interquartile range 0-9; P < 0.0001); the PR and the QTc intervals did not change significantly. New asymptomatic bundle branch blocks were observed in four patients; one subject with a baseline first-degree atrioventricular block developed symptomatic bradyarrhythmia while receiving atenolol. Electrocardiographic monitoring of patients treated with atazanavir seems advisable. abstract_id: PUBMED:2023017 Predicting the outcomes of electrophysiologic studies of patients with unexplained syncope: preliminary validation of a derived model. Purpose: To develop and validate a predictive model that would allow clinicians to determine whether an electrophysiologic (EP) study is likely to result in useful diagnostic information for a patient who has unexplained syncope. Patients: One hundred seventy-nine consecutive patients with unexplained syncope who underwent EP studies at two university medical centers comprised the training sample. A test sample to validate the model was made up of 138 patients from the clinical literature who had undergone EP studies for syncope.
Design: Retrospective analysis of patients undergoing EP studies for syncope. The data collector was blinded to the study hypothesis; the electrophysiologist assessing outcomes was blinded to clinical and historical data. Clinical predictor variables available from the history, the physical examination, electrocardiography (ECG), and Holter monitoring were analyzed via two multivariable predictive modeling strategies (ordinal logistic regression and recursive partitioning) for their abilities to predict the results of EP studies, namely tachyarrhythmic and bradyarrhythmic outcomes. These categories were further divided into full arrhythmia and borderline arrhythmia groups. Results: Important outcomes were 1) sustained monomorphic ventricular tachycardia (VT) and 2) bradyarrhythmias, including sinus node and atrioventricular (AV) conducting disease. The results of the logistic regression (in this study, the superior strategy) showed that the presence of organic heart disease [odds ratio (OR) = 3.0, p less than 0.001] and frequent premature ventricular contractions on ECG (OR = 6.7, p less than 0.004) were associated with VT, while the following abnormal ECG findings were associated with bradyarrhythmias: first-degree heart block (OR = 7.9, p less than 0.001), bundle-branch block (OR = 3.0, p less than 0.02), and sinus bradycardia (OR = 3.5, p less than 0.03). Eighty-seven percent of the 31 patients with important outcomes at EP study had at least one of these clinical risk factors, while 95% of the patients with none of these risk factors had normal or nondiagnostic EP studies. In the validation sample, the presence of one or more risk factors would have correctly identified 88% of the test VT patients and 65% of the test bradyarrhythmia patients as needing EP study. Conclusion: These five identified predictive factors, available from the history, the physical examination, and the initial ECG, could be useful to clinicians in selecting those patients with unexplained syncope who will have a serious arrhythmia identified by EP studies. Answer: The perioperative risk of bradyarrhythmias in patients with asymptomatic chronic bifascicular block or left bundle branch block (LBBB) does not appear to be significantly increased by the presence of an additional first-degree atrioventricular (A-V) block. A study involving 103 patients with either condition, with or without a prolonged P-R interval, found no significant difference in the incidence of severe bradyarrhythmias associated with hemodynamic compromise between patients with and without P-R prolongation (P = 1.00) (PUBMED:9523812). This suggests that an additional first-degree A-V block does not increase the incidence of severe bradyarrhythmias in this patient population. Furthermore, another study with 39 patients with chronic bifascicular block or LBBB and prolongation of the P-R interval found no perioperative block progression, and the few cases of bradycardia that occurred were successfully treated with drug therapy without pacemaker stimulation (PUBMED:10456813). This indicates that routine prophylactic transcutaneous pacemaker placement in these patients may not be justified, although appropriate drugs and temporary pacemaker equipment should be easily accessible. 
In summary, the presence of an additional first-degree A-V block in patients with asymptomatic chronic bifascicular block or LBBB does not seem to significantly alter the perioperative risk of bradyarrhythmias, and routine prophylactic insertion of a temporary pacemaker in such patients should be questioned, as pharmacotherapy alone was successful in nearly all cases (PUBMED:9523812). However, it is important to have the necessary equipment and medication readily available to manage any bradyarrhythmias that may occur (PUBMED:10456813).
Instruction: Is microvascular decompression surgery a high risk for postoperative nausea and vomiting in patients undergoing craniotomy? Abstracts: abstract_id: PUBMED:23649917 Is microvascular decompression surgery a high risk for postoperative nausea and vomiting in patients undergoing craniotomy? Purpose: Patients undergoing microvascular decompression surgery often experience postoperative nausea and vomiting (PONV). However, there is little information about the incidence of PONV after microvascular decompression. We hypothesized that microvascular decompression is an especially high-risk procedure for PONV in patients undergoing neurosurgery, and investigated risk factors related to PONV after neurosurgery. Methods: All patients who underwent craniotomy in our institution during a period of 2 years were investigated retrospectively. Medical charts were reviewed to identify PONV during the 24-h postoperative period and related risk factors. Multivariate logistic regression analysis was conducted to elucidate the impact of microvascular decompression on PONV after craniotomy. Results: Among 556 craniotomy cases, 350 patients met the inclusion criteria. Multivariate logistic regression analysis showed that microvascular decompression was an independent risk factor for PONV after craniotomy (odds ratio 5.38, 3.02-9.60), in addition to female gender, non-smoker status, amount of intraoperative fentanyl administered, and cerebrovascular surgery. Conclusion: In this retrospective study, microvascular decompression surgery was an especially high-risk factor for PONV in patients undergoing craniotomy. It may be necessary to adopt a combination of prophylactic methods to reduce the incidence of PONV after microvascular decompression. abstract_id: PUBMED:26257852 Comparison of the antiemetic effect of ramosetron with ondansetron in patients undergoing microvascular decompression with retromastoid craniotomy: a preliminary report. Background: Microvascular decompression with retromastoid craniotomy carries an especially high risk of postoperative nausea and vomiting. In this study, we compare the antiemetic efficacy of ramosetron and ondansetron in patients undergoing microvascular decompression with retromastoid craniotomy. Methods: Using balanced anesthesia with sevoflurane and remifentanil infusion, ondansetron 8 mg (group O, n = 31) or ramosetron 0.3 mg (group R, n = 31) was administered at the dural closure. The incidence and severity of postoperative nausea and vomiting, required rescue medications and the incidence of side effects were measured at post-anesthetic care unit, 6, 24 and 48 hours postoperatively. Independent t-tests and the chi-square test or Fisher's exact test were used for statistical analyses. Results: There were no differences in the demographic data between groups, except for a slightly longer anesthetic duration of group R (P = 0.01). The overall postoperative 48 hour incidences of nausea and vomiting were 93.6 and 61.3% (group O), and 87.1 and 51.6% (group R), respectively. Patients in group R showed a less severe degree of nausea (P = 0.02) and a lower incidence of dizziness (P = 0.04) between 6 and 24 hours. Conclusions: The preventive efficacy of ramosetron when used for postoperative nausea and vomiting was similar to that of ondansetron up to 48 hours after surgery in patients undergoing microvascular decompression with retromastoid craniotomy. A larger randomized controlled trial is needed to confirm our findings. 
abstract_id: PUBMED:33195332 Dexmedetomidine Combined With Butorphanol or Sufentanil for the Prevention of Post-operative Nausea and Vomiting in Patients Undergoing Microvascular Decompression: A Randomized Controlled Trial. Background: Patients undergoing microvascular decompression often have a high risk of post-operative nausea and vomiting (PONV). In this study, we compare the antiemetic efficacy of butorphanol or sufentanil combined with dexmedetomidine in patients undergoing microvascular decompression. Methods: Patients undergoing microvascular decompression were randomized into two groups. The primary outcome was the occurrence and severity of PONV during the 72 h after surgery. Secondary outcomes included levels of pain intensity and sedation and consumption of opioids at 1, 2, 6, 12, 24, 48, and 72 h after surgery. We also recorded the intraoperative hemodynamics, consumption of narcotic drugs, operation and anesthesia time, estimated blood loss, infusion volume and urine output, requirements of rescue antiemetics or analgesics, the satisfaction scores of patients and surgeons, complications, and length of stay. Results: The overall incidence rates of nausea and vomiting during the 72 h after surgery were significantly reduced in group DB (76.00% and 44.00% in group DS vs. 54.17% and 22.92% in group DB, P < 0.05). Patients in group DB had a lower incidence of nausea than those in group DS at intervals of 1-6 and 6-24 h (P < 0.05). However, patients in group DB had a lower incidence of vomiting than those in group DS only at intervals of 1-6 h (P < 0.05). Similarly, the number of patients requiring rescue antiemetics was also significantly reduced in group DB compared with that in group DS at intervals of 1-6 h (P < 0.05). The number of patients experiencing moderate to severe PONV was comparable between the two groups during 72 h after surgery (P > 0.05). The consumption of opioid morphine equivalent was significantly reduced in group DB (P < 0.05). Compared with those in group DS, the satisfaction scores of both patients and surgeons were significantly increased in group DB (P < 0.05). Conclusion: Butorphanol combined with dexmedetomidine could reduce early PONV and the number of patients requiring rescue antiemetics, especially at intervals of 1-6 h, while the satisfaction scores of both patients and surgeons were significantly increased. abstract_id: PUBMED:30431604 Effect of prophylactic palonosetron and sugammadex on postoperative nausea and vomiting in patients undergoing microvascular decompression under propofol-maintained anesthesia: A retrospective observational study. Microvascular decompression (MVD) is associated with a particularly high risk of postoperative nausea and vomiting (PONV) among craniotomy patients. However, there is no information regarding the effect of prophylactic palonosetron and sugammadex on PONV in patients undergoing MVD under propofol-maintained anesthesia. Medical records of 274 adults who had undergone MVD under propofol-maintained anesthesia were reviewed. Patients were classified into 4 groups, based on the reversal agent used (sugammadex/pyridostigmine) and whether or not prophylactic palonosetron was used. The PONV incidence and risk factors were analyzed according to the use of these agents. The overall incidence of PONV was 30.7% during the first 24 hours postoperatively.
The incidence of PONV was lower in the group using the combination of prophylactic palonosetron and sugammadex (19.3%) compared with the group not using both agents (37.2%). The combined use of prophylactic palonosetron and sugammadex was identified as a factor affecting the occurrence of PONV in both univariable (OR = 0.40, 95% CI: 0.21-0.77, P = .006) and multivariable (OR = 0.38, 95% CI: 0.20-0.75, P = .005) logistic regression analyses. In multivariable logistic regression analysis, female sex was also a significant independent risk factor for PONV (OR = 2.62, 95% CI: 1.35-5.08, P = .004). In this retrospective observational study, the combined use of prophylactic palonosetron before anesthetic induction and sugammadex as a reversal of neuromuscular blockade was associated with a reduction in the incidence of PONV in patients undergoing MVD under propofol-maintained anesthesia. abstract_id: PUBMED:22828153 Indication for surgery and the risk of postoperative nausea and vomiting after craniotomy: a case-control study. Background: The primary hypothesis of the study is that acoustic neuroma (AN) surgery and microvascular decompression (MVD) of cranial nerves increase the risk of postoperative nausea and vomiting (PONV). Methods: We designed a retrospective case-control study matched on age, sex, and year of surgery (≤2005 and >2005). Year of surgery was noted as a potential confounder, because routine antiemetic prophylaxis was strongly encouraged at the study site in 2005. Cases of PONV in the recovery room were matched to controls in a 1:2 manner using a perioperative database. Charts were then reviewed for the following data: American Society of Anesthesiologists grade, smoking status, craniotomy location, craniotomy indication, and type of anesthetic administered. Results: The final analysis included 117 cases that were matched with 185 controls. Patients had a mean age of 50 years (SD=13), and 65% were female. Overall, the majority of craniotomies were supratentorial (70%) and performed for tumor resection (41%). On multivariable analysis, MVD [odds ratio (OR)=6.7; 95% confidence interval (CI), 2.0-22.7; P=0.002], AN (OR=3.3; 95% CI, 1.0-11.0; P=0.05), and epilepsy surgery (OR=2.8; 95% CI, 1.1-7.5; P=0.04) were associated with an increased likelihood of PONV when compared with tumor surgery. There was effect modification of total intravenous anesthesia by location of surgery (P-interaction=0.02). The benefit of total intravenous anesthesia on PONV was observed in supratentorial (OR=0.41; 95% CI, 0.17-0.96; P=0.04) but not infratentorial location (OR=2.6; 95% CI, 0.78-8.7; P=0.11). Conclusions: MVD and AN resection were associated with an increased likelihood of PONV compared with craniotomies performed for other tumor resection. abstract_id: PUBMED:35490219 Association of intraocular pressure and postoperative nausea and vomiting after microvascular decompression - a prospective cohort study. Background: Postoperative nausea and vomiting is common in patients receiving microvascular decompression. In the current study, we examined whether postoperative nausea and vomiting is associated with reduced intraocular pressure (IOP) after microvascular decompression, a measure that reflects intracranial pressure. Methods: This is a prospective cohort study. Adult patients scheduled for microvascular decompression surgery for hemifacial spasm between January 2020 and August 2020 were eligible.
IOP was measured immediately before anesthesia induction and 30 min after patients regained complete consciousness using non-contact tonometry. IOP reduction was defined by at least 1 mmHg decrease vs. preoperative baseline. The primary outcome was vomiting on postoperative day 1. Results: A total of 103 subjects were enrolled. IOP was reduced in 56 (54.4%) subjects. A significantly greater proportion of patients with IOP reduction had vomiting on postoperative day 1 (51.8% (29/56) vs. 23.4% (11/47) in those without IOP reduction; p = 0.003). In the multivariate regression analysis, vomiting on postoperative day 1 was associated with female sex [odds ratio = 7.87, 95% CI: 2.35-26.32, p = 0.001] and IOP reduction [odds ratio = 2.93, 95% CI: 1.13-7.58, p = 0.027]. Conclusions: In patients undergoing microvascular decompression surgery, postoperative IOP reduction is associated with postoperative vomiting. Trial Registration: Chinese Clinical Trial Registry: ChiCTR2000029083 . Registered 13 January 2020. abstract_id: PUBMED:35430789 Antiemetic Prophylaxis with Ramosetron for Postoperative Nausea and Vomiting in Patients Undergoing Microvascular Decompression : A Prospective, Randomized Controlled Trial. Objective: This prospective, randomized, double-blinded trial aimed to evaluate the efficacy and safety of prophylactic ramosetron administration against postoperative nausea and vomiting (PONV) in patients undergoing microvascular decompression (MVD). Methods: In this study, 100 patients undergoing MVD were randomly allocated to the control (normal saline, 2 mL) or ramosetron (ramosetron, 0.3 mg) groups at the end of surgery. The incidence and severity of PONV, need for rescue antiemetics, patient satisfaction score, duration of hospital stay, and the occurrence of adverse events were evaluated 48 hours post-surgery. Results: Data obtained from 97 patients were included in the final analysis. The incidence of PONV was significantly lower in the ramosetron group than in the control group throughout the 48-hour postoperative period (29.2% vs. 51.0%, p=0.028). A similar trend was observed with regard to PONV severity (p=0.041). The need for rescue antiemetics, satisfaction score, duration of hospital stays, and the occurrence of adverse events did not significantly differ between the groups. Conclusion: Prophylactic ramosetron administration reduced the incidence and severity of PONV in patients undergoing MVD without causing serious adverse events. Thus, ramosetron use may improve patient recovery following MVD. abstract_id: PUBMED:24978062 The efficacy of P6 acupressure with sea-band in reducing postoperative nausea and vomiting in patients undergoing craniotomy: a randomized, double-blinded, placebo-controlled study. Background: Postoperative nausea and vomiting (PONV) is a multifactorial problem after general anesthesia. Despite antiemetic prophylaxis and improved anesthetic techniques, PONV still occurs frequently after craniotomies. P6 stimulation is described as an alternative method for preventing PONV. The primary aim of this study was to determine whether P6 acupressure with Sea-Band could reduce postoperative nausea after elective craniotomy. Secondary aims were to investigate whether the frequency of vomiting and the need for antiemetics could be reduced. Methods: In this randomized, double-blinded, placebo-controlled study, patients were randomized into either a P6 acupressure group (n=43) or a sham group (n=52). 
Bands were applied unilaterally at the end of surgery, and all patients were administered prophylactic ondansetron. Postoperative nausea was evaluated with a Numerical Rating Scale, 0 to 10, and the frequency of vomiting was recorded for 48 hours. Results: We found no significant effect from P6 acupressure with Sea-Band on postoperative nausea or vomiting in patients undergoing craniotomy. Nor was there any difference in the need for rescue antiemetics. Altogether, 67% experienced PONV, and this was especially an issue at >24 hours in patients recovering from infratentorial surgery compared with supratentorial surgery (55% vs. 26%; P=0.014). Conclusions: Unilateral P6 acupressure with Sea-Band applied at the end of surgery together with prophylactic ondansetron did not significantly reduce PONV or the need for rescue antiemetics in patients undergoing craniotomy. Our study confirmed that PONV is a common issue after craniotomy, especially after infratentorial surgery. abstract_id: PUBMED:33663659 Application of Methylprednisolone Sodium Succinate Combined with Tropisetron in Prevention of Nausea and Vomiting under Microvascular Decompression of Hemifacial Spasm Objective To evaluate the effect of methylprednisolone sodium succinate combined with tropisetron on postoperative nausea and vomiting (PONV) under microvascular decompression of hemifacial spasm. Methods From January to June 2019, 485 patients undergoing microvascular decompression for facial spasm at the Department of Neurosurgery, Peking University People's Hospital were randomly assigned into two groups with the random number table method. For group A (n=242), 2 ml saline was administered by intravenous drip before induction and 5 mg tropisetron after operation. For group B (n=243), 40 mg methylprednisolone sodium succinate was administered by intravenous drip before induction and 5 mg tropisetron after operation. The anesthesia time, operation time, and incidence of PONV in 0-24 h and 24-48 h were recorded for the comparison of the remedial treatment rate of nausea and vomiting between the two groups. Results There was no significant difference in age, gender, smoking history, body mass index value, American Society of Anesthesiologists score, medical history, surgical side, PONV history, operation time or anesthesia time between the two groups (all P > 0.05). The incidence of PONV in group A was 35.5% and 18.2% during 0-24 h and 24-48 h, respectively, which was significantly higher than that (18.5%, χ²=7.331, P=0.007; 8.2%, χ²=4.364, P=0.037) in group B. The application rate of antiemetic drugs in group A was 15.2% and 8.7% during 0-24 h and 24-48 h, respectively, which was significantly higher than that (5.3%, χ²=5.327, P=0.021; 2.0%, χ²=4.432, P=0.035) in group B. Conclusion The combination of methylprednisolone sodium succinate and tropisetron can effectively prevent PONV under microvascular decompression of hemifacial spasm, with performance superior to single-drug treatment. abstract_id: PUBMED:37597331 5-Hydroxytryptamine and postoperative nausea and vomiting after microvascular decompression surgery. Background: The incidence of postoperative nausea and vomiting (PONV) after microvascular decompression (MVD) surgery is high; however, its underlying mechanisms remain unknown. Serum 5-hydroxytryptamine (5-HT) levels are elevated in patients with PONV. However, the relationship between 5-HT and patients experiencing PONV after MVD surgery is still unknown. Therefore, we hypothesized that 5-HT levels are associated with PONV after MVD surgery.
Methods: This prospective study included 85 patients with hemifacial spasm who received MVD surgery. Blood samples were collected preoperatively, postoperatively, and on postoperative day 1, and cerebrospinal fluid samples were collected intraoperatively. 5-HT levels were detected by enzyme-linked immunosorbent assay (ELISA). The incidence and severity of PONV were evaluated at 2, 6, and 24 h after MVD surgery. Results: In the multivariate regression analysis, PONV within 24 h after MVD surgery was associated with elevated cerebrospinal fluid 5-HT levels [odds ratio (OR) = 1.21, 95% confidence interval (CI): 1.01-1.45, p = 0.044], and reduction of intraocular pressure [OR = 11.54, 95% CI: 1.43-92.84, p = 0.022]. Receiver operating characteristic curve analysis revealed an area under the curve of 0.873 (95% CI: 0.77-0.98, p < 0.001). Conclusion: Our study found that the cerebrospinal fluid 5-HT level is an independent risk factor for PONV within 24 h after MVD surgery. Answer: Yes, microvascular decompression surgery is considered a high-risk procedure for postoperative nausea and vomiting (PONV) in patients undergoing craniotomy. A retrospective study found that microvascular decompression was an independent risk factor for PONV, with an odds ratio of 5.38, indicating a particularly high risk compared to other types of craniotomy (PUBMED:23649917). Additionally, other studies have confirmed the high incidence of PONV in patients undergoing microvascular decompression, with reported rates of 93.6% for nausea and 61.3% for vomiting in one study (PUBMED:26257852), and a significant reduction in PONV incidence when specific antiemetic regimens were used (PUBMED:33195332, PUBMED:30431604). Furthermore, another study identified microvascular decompression as associated with an increased likelihood of PONV compared with craniotomies performed for other tumor resections (PUBMED:22828153). These findings underscore the need for effective prophylactic antiemetic strategies in patients undergoing microvascular decompression to improve patient outcomes and satisfaction.
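Several of the abstracts above, and the answer itself, summarize PONV risk as odds ratios with 95% confidence intervals. As a purely illustrative aside, and not a reanalysis of any cited study, the sketch below shows how an odds ratio and its Wald confidence interval are derived from a 2x2 exposure-by-outcome table; the counts are hypothetical.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with PONV, b = exposed without PONV,
        c = unexposed with PONV, d = unexposed without PONV."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se_log_or)
        upper = math.exp(math.log(or_) + z * se_log_or)
        return or_, (lower, upper)

    # Hypothetical counts chosen only to illustrate the calculation
    print(odds_ratio_ci(a=30, b=20, c=15, d=55))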
Instruction: Is 1/f sound more effective than simple resting in reducing stress response? Abstracts: abstract_id: PUBMED:24704658 Is 1/f sound more effective than simple resting in reducing stress response? Background: It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. Objective: The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Methods: Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experiment group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. Results: The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental group. Relative alpha power was significantly lower, and relative beta power was significantly higher in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Conclusions: Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response. abstract_id: PUBMED:34610370 Resting-state dynamic functional connectivity predicts the psychosocial stress response. Acute stress triggers a complex cascade of psychological, physiological, and neural responses, which show large and enduring individual differences. Although previous studies have examined the relationship between the stress response and dynamic features of the brain's resting state, no study has used the brain's dynamic activity in the resting state to predict individual differences in the psychosocial stress response. In the current study, resting-state scans of forty-eight healthy participants were collected, and then their individual acute stress responses during the Montreal Imaging Stress Test (MIST) paradigm were recorded. Results defined a connectivity state (CS) characterized by positive correlations across the whole brain during resting-state that could negatively predict participants' feelings of social evaluative threat during stress tasks. Another CS characterized by negative correlations between the frontal-parietal network (FPN) and almost all other networks, except the dorsal attentional network (DAN), could predict participants' subjective stress, feelings of uncontrollability, and feelings of social evaluative threat. However, no CS could predict participants' salivary cortisol stress response. Overall, these results suggested that the brain state characterized as attentional regulation, linking self-control, and top-down regulation ability, could predict the psychosocial stress response. 
This study also developed an objective indicator for predicting human stress responses. abstract_id: PUBMED:29559922 Correlation Between Resting Testosterone/Cortisol Ratio and Sound-Induced Vasoconstriction at Fingertip in Men. A sound-induced sympathetic tone has been used as an index for orienting responses to auditory stimuli. The resting testosterone/cortisol ratio is a biomarker of social aggression that drives an approaching behavior in response to environmental stimuli, and a higher testosterone level and a lower cortisol level can facilitate the sympathetic response to environmental stimuli. Therefore, it is possible that the testosterone/cortisol ratio is correlated with the sound-induced sympathetic tone. The current study investigated the relationship between the resting testosterone/cortisol ratio and vasoconstriction induced by listening to sound stimuli. Twenty healthy males aged 29.0 ± 0.53 years (mean ± S.E.M) participated in the study. They came to the laboratory for 3 days and listened to one of three types of sound stimuli for 1 min on each day. Saliva samples were collected for an analysis of salivary testosterone and cortisol levels on the day of each experiment. After the collecting the saliva sample, we measured the blood volume pulse (BVP) amplitude at a fingertip. Since vasoconstriction is mediated by the activation of the sympathetic nerves, the strength of the reduction in BVP amplitude at a fingertip was called the BVP response (finger BVPR). No difference was observed between the sound-induced finger BVPR for the three types of sound stimuli (p = 0.779). The correlation coefficient between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio within participants was significantly different from no correlation (p = 0.011) and there was a trend toward a significance in the correlation between the sound-induced finger BVPR and the salivary testosterone/cortisol ratio between participants (r = 0.39, p = 0.088). These results suggest that the testosterone/cortisol ratio affects the difference in the sound-evoked sympathetic response. abstract_id: PUBMED:31632840 Didgeridoo Sound Meditation for Stress Reduction and Mood Enhancement in Undergraduates: A Randomized Controlled Trial. Background: College students report feeling frequently stressed, which adversely impacts health. Meditation is one effective method for reducing stress, but program length and required effort are potential obstacles. Research on sound meditation, involving focused listening to sounds, is nascent but may appeal to undergraduates. The effects of listening to didgeridoo, an Australian wind instrument producing a low, resonant, droning sound, have not been studied. Objective: This study compared the effect of a 30-minute didgeridoo sound meditation versus silent meditation with focus on one's breath on acute self-perceived stress and mood in undergraduates without prior meditation experience. Methods: Seventy-four undergraduates were randomized to 2 interventions: (1) didgeridoo meditation (n = 40) performed live by a musician or (2) silent meditation (n = 34) taught by a meditation instructor. Immediate pre-post effects of the session were examined using the 4-Dimension Mood Scale and an item assessing acute self-perceived stress. Intervention acceptability was assessed postintervention. Results: Two-way mixed analyses of variance were performed. Both groups reported significantly increased relaxation after meditation (Group D, P = .0001 and Group S, P = .0005). 
Both groups reported decreased negative arousal (Group D, P = .02 and Group S, P = .02), energy (Group D, P = .0001 and Group S, P = .003), tiredness (Group D, P = .0001 and Group S, P = .005), and acute stress (Group D, P = .0001 and Group S, P = .0007). Group Didgeridoo experienced significantly more relaxation (P = .01) and less acute stress (P = .03) than Group Silent. Fifty-three percent of silent participants and 80% of didgeridoo participants agreed that they would attend that type of meditation again. Forty-seven percent of silent participants and 80% of didgeridoo participants enjoyed the meditation. Conclusion: Didgeridoo sound meditation is as effective as silent meditation for decreasing self-perceived negative arousal, tiredness, and energy and more effective than silent meditation for relaxation and acute stress in undergraduates. Didgeridoo meditation participants reported higher levels of enjoyment and higher likelihood of attending another session. Further investigation into didgeridoo and sound meditation is warranted. abstract_id: PUBMED:33768383 Slow Resting State Fluctuations Enhance Neuronal and Behavioral Responses to Looming Sounds. We investigate both experimentally and using a computational model how the power of the electroencephalogram (EEG) recorded in human subjects tracks the presentation of sounds with acoustic intensities that increase exponentially (looming) or remain constant (flat). We focus on the link between this EEG tracking response, behavioral reaction times and the time scale of fluctuations in the resting state, which show considerable inter-subject variability. Looming sounds are shown to generally elicit a sustained power increase in the alpha and beta frequency bands. In contrast, flat sounds only elicit a transient upsurge at frequencies ranging from 7 to 45 Hz. Likewise, reaction times (RTs) in an audio-tactile task at different latencies from sound onset also present significant differences between sound types. RTs decrease with increasing looming intensities, i.e. as the sense of urgency increases, but remain constant with stationary flat intensities. We define the reaction time variation or "gain" during looming sound presentation, and show that higher RT gains are associated with stronger correlations between EEG power responses and sound intensity. Higher RT gain further entails higher relative power differences between loom and flat in the alpha and beta bands. The full-width-at-half-maximum of the autocorrelation function of the eyes-closed resting state EEG also increases with RT gain. The effects are topographically located over the central and frontal electrodes. A computational model reveals that the increase in stimulus-response correlation in subjects with slower resting state fluctuations is expected when EEG power fluctuations at each electrode and in a given band are viewed as simple coupled low-pass filtered noise processes jointly driven by the sound intensity. The model assumes that the strength of stimulus-power coupling is proportional to RT gain in different coupling scenarios, suggesting a mechanism by which slower resting state fluctuations enhance EEG response and shorten reaction times. abstract_id: PUBMED:35331822 Effective connectivity between resting-state networks in depression. 
Rationale: Although depression has been widely researched, findings characterizing how brain regions influence each other remain scarce, yet this is critical for research on antidepressant treatments and individual responses to particular treatments. Objectives: To identify pre-treatment resting state effective connectivity (rsEC) patterns in patients with major depressive disorder (MDD) and explore their relationship with treatment response. Methods: Thirty-four drug-free MDD patients had an MRI scan and were subsequently treated for 6 weeks with the SSRI escitalopram 10 mg daily; the response was defined as ≥50% decrease in Hamilton Depression Rating Scale (HAMD) score. Results: rsEC networks in default mode, central executive, and salience networks were identified for patients with depression. Exploratory analyses indicated higher connectivity strength related to baseline depression severity and response to treatment. Conclusions: Preliminary analyses revealed widespread dysfunction of rsEC in depression. Functional rsEC may be useful as a predictive tool for antidepressant treatment response. A primary limitation of the current study was the small sample size; however, the group was carefully chosen, well-characterized, and included only medication-free patients. Further research in large samples of placebo-controlled studies would be required to confirm the results. abstract_id: PUBMED:29867310 Sparse Estimation of Resting-State Effective Connectivity From fMRI Cross-Spectra. In functional magnetic resonance imaging (fMRI), functional connectivity is conventionally characterized by correlations between fMRI time series, which are intrinsically undirected measures of connectivity. Yet, some information about the directionality of network connections can nevertheless be extracted from the matrix of pairwise temporal correlations between all considered time series, when expressed in the frequency-domain as a cross-spectral density matrix. Using a sparsity prior, it then becomes possible to determine a unique directed network topology that best explains the observed undirected correlations, without having to rely on temporal precedence relationships that may not be valid in fMRI. Applying this method on simulated data with 100 nodes yielded excellent retrieval of the underlying directed networks under a wide variety of conditions. Importantly, the method did not depend on temporal precedence to establish directionality, thus reducing susceptibility to hemodynamic variability. The computational efficiency of the algorithm was sufficient to enable whole-brain estimations, thus circumventing the problem of missing nodes that otherwise occurs in partial-brain analyses. Applying the method to real resting-state fMRI data acquired with a high temporal resolution, the inferred networks showed good consistency with structural connectivity obtained from diffusion tractography in the same subjects. Interestingly, this agreement could also be seen when considering high-frequency rather than low-frequency connectivity (average correlation: r = 0.26 for f < 0.3 Hz, r = 0.43 for 0.3 < f < 5 Hz). Moreover, this concordance was significantly better (p < 0.05) than for networks obtained with conventional functional connectivity based on correlations (average correlation r = 0.18). The presented methodology thus appears to be well-suited for fMRI, particularly given its lack of explicit dependence on temporal lag structure, and is readily applicable to whole-brain effective connectivity estimation.
abstract_id: PUBMED:28716048 Transcriptome analysis uncovers Arabidopsis F-BOX STRESS INDUCED 1 as a regulator of jasmonic acid and abscisic acid stress gene expression. Background: The ubiquitin 26S proteasome system (UPS) selectively degrades cellular proteins, which results in physiological changes to eukaryotic cells. F-box proteins are substrate adaptors within the UPS and are responsible for the diversity of potential protein targets. Plant genomes are enriched in F-box genes, but the vast majority of these have unknown roles. This work investigated the Arabidopsis F-box gene F-BOX STRESS INDUCED 1 (FBS1) for its effects on gene expression in order elucidate its previously unknown biological function. Results: Using publically available Affymetrix ATH1 microarray data, we show that FBS1 is significantly co-expressed in abiotic stresses with other well-characterized stress response genes, including important stress-related transcriptional regulators. This gene suite is most highly expressed in roots under cold and salt stresses. Transcriptome analysis of fbs1-1 knock-out plants grown at a chilling temperature shows that hundreds of genes require FBS1 for appropriate expression, and that these genes are enriched in those having roles in both abiotic and biotic stress responses. Based on both this genome-wide expression data set and quantitative real-time PCR (qPCR) analysis, it is apparent that FBS1 is required for elevated expression of many jasmonic acid (JA) genes that have established roles in combatting environmental stresses, and that it also controls a subset of JA biosynthesis genes. FBS1 also significantly impacts abscisic acid (ABA) regulated genes, but this interaction is more complex, as FBS1 has both positive and negative effects on ABA-inducible and ABA-repressible gene modules. One noteworthy effect of FBS1 on ABA-related stress processes, however, is the restraint it imposes on the expression of multiple class I LIPID TRANSFER PROTEIN (LTP) gene family members that have demonstrated protective effects in water deficit-related stresses. Conclusion: FBS1 impacts plant stress responses by regulating hundreds of genes that respond to the plant stress hormones JA and ABA. The positive effect that FBS1 has on JA processes and the negative effect it has on at least some ABA processes indicates that it in part regulates cellular responses balanced between these two important stress hormones. More broadly then, FBS1 may aid plant cells in switching between certain biotic (JA) and abiotic (ABA) stress responses. Finally, because FBS1 regulates a subset of JA biosynthesis and response genes, we conclude that it might have a role in tuning hormone responses to particular circumstances at the transcriptional level. abstract_id: PUBMED:1377631 Effect of gepirone on increases in tryptophan hydroxylase in response to sound stress. Pretreatment (15 min) of male rats with gepirone given parenterally (10 mg/kg i.p.) or intracranially into the dorsal raphe nucleus (14 or 21 micrograms) blocks the rapidly reversible increase in brain tryptophan hydroxylase activity and 5-hydroxyindolamine acetic acid tissue levels seen in vitro after 1-h acute sound stress. Chronic gepirone treatment over 28 days (40 mg/day s.c.) prevents the stable enzyme activity increase induced by repeated sessions of sound stress, and the rapidly reversible increase always observed following sound stress. The gepirone metabolite, 1-(2-pyrimidinyl)-1-piperazine, is inactive in each of these experiments. 
Transient blood pressure elevations occur with each sound presentation, but no persistent hypertension is observed with repeated sound-stress exposures. Gepirone may block the sound stress-induced biochemical increases by its inhibition of serotonergic neuronal firing in the dorsal raphe nucleus that is mediated by its agonist action at the somatodendritic (5-HT1A) autoreceptors. abstract_id: PUBMED:25656176 Stress detection and classification of laying hens by sound analysis. Stress adversely affects the wellbeing of commercial chickens, and comes with an economic cost to the industry that cannot be ignored. In this paper, we first develop an inexpensive and non-invasive, automatic online-monitoring prototype that uses sound data to notify producers of a stressful situation in a commercial poultry facility. The proposed system is structured hierarchically with three binary-classifier support vector machines. First, it selects an optimal acoustic feature subset from the sound emitted by the laying hens. The detection and classification module detects the stress from changes in the sound and classifies it into subsidiary sound types, such as physical stress from changes in temperature, and mental stress from fear. Finally, an experimental evaluation was performed using real sound data from an audio-surveillance system. The accuracy in detecting stress approached 96.2%, and the classification model was validated, confirming that the average classification accuracy was 96.7%, and that its recall and precision measures were satisfactory. Answer: Based on the provided abstracts, it appears that 1/f sound is not more effective than simple resting in reducing the stress response. The study described in abstract PUBMED:24704658 specifically examined whether 1/f sound affects stress-induced electroencephalogram (EEG) changes. The results indicated that the mental arithmetic task used as a stressor effectively induced a stress response measurable by EEG. However, both 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Additionally, participants reported experiencing a positive feeling in response to the 1/f sound, but the study concluded that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response. Therefore, based on this study, it can be concluded that 1/f sound does not have a superior effect compared to simple resting when it comes to reducing the physiological markers of stress.
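The 1/f-sound study summarized in this answer reports its EEG outcomes as relative theta, alpha, and beta power. As an illustrative sketch only, with a sampling rate, recording length, and band edges that are assumptions rather than details taken from the study, relative band power can be computed from a Welch power spectral density estimate as follows.

    import numpy as np
    from scipy.signal import welch

    def relative_band_power(signal, fs, band, total=(1.0, 45.0)):
        """Fraction of total PSD power falling inside `band` (Hz)."""
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        in_band = (freqs >= band[0]) & (freqs < band[1])
        in_total = (freqs >= total[0]) & (freqs < total[1])
        return np.trapz(psd[in_band], freqs[in_band]) / np.trapz(psd[in_total], freqs[in_total])

    # Synthetic single-channel example: 256 Hz, 60 s of noise stands in for an EEG trace
    fs = 256
    eeg = np.random.randn(fs * 60)
    theta = relative_band_power(eeg, fs, (4.0, 8.0))
    alpha = relative_band_power(eeg, fs, (8.0, 13.0))
    beta = relative_band_power(eeg, fs, (13.0, 30.0))
    print(theta, alpha, beta)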
Instruction: Does pretreatment insomnia or anxiety predict acute response to bupropion SR? Abstracts: abstract_id: PUBMED:15941026 Does pretreatment insomnia or anxiety predict acute response to bupropion SR? Background: This retrospective analysis was conducted to determine whether pretreatment levels of insomnia or anxiety were associated with likelihood of or time to antidepressant response with bupropion sustained release (SR). Methods: Data from an open-label, 8-week, acute phase multicenter study of 797 adult outpatients with recurrent, nonpsychotic major depressive disorder who received bupropion SR (300 mg/day) were used. Depressive symptom severity was measured by the 17-item Hamilton Rating Scale for Depression (HAM-D17), insomnia by totaling the three HAM-D17 insomnia items (early, middle, late), and anxiety by the 14-item Hamilton Rating Scale for Anxiety. Results: Overall, 67% (533/797) of patients responded (defined as ≥ 50% reduction in baseline HAM-D17). Neither baseline insomnia nor baseline anxiety was related to the likelihood of achieving response. Higher baseline insomnia and lower baseline anxiety were associated with an earlier onset of response (about one week sooner in each). Conclusions: Predicting the likelihood of antidepressant response with bupropion SR cannot be based on either baseline insomnia or anxiety levels. abstract_id: PUBMED:11403727 Relationship between drug exposure and the efficacy and safety of bupropion sustained release for smoking cessation. A population pharmacokinetic and pharmacodynamic analysis evaluated the relationships of dose, plasma concentrations of bupropion and metabolites, and patient covariates with the safety and efficacy of bupropion sustained release (SR) for smoking cessation. A total of 519 outpatient chronic cigarette smokers were randomized to one of three bupropion SR doses: 100, 150, or 300 mg/day or placebo. The bupropion plasma concentration time data were fit and subject-specific Bayesian estimates of clearance were obtained. Logistic regression analyses evaluated the role of dose, concentrations, and covariates in predicting efficacy and safety endpoints. For the evaluation of efficacy, patients were classified as quitters or non-quitters on the basis of a 4-week quit variable (defined as complete abstinence for weeks 4-7 of the study). For the evaluation of safety, patients were classified into two categories for each adverse event evaluated, corresponding to whether the patient ever experienced the adverse event during the course of the study or never experienced the event, regardless of whether the event was treatment-emergent. The efficacy of bupropion SR in facilitating smoking cessation was found to be related to dose and a mean metabolite concentration, and quitting in general was found to be related to the number of cigarettes smoked per day at baseline. Smoking cessation was 1.42, 1.69, and 2.84 times more likely in patients receiving 100, 150, and 300 mg/day of bupropion SR, respectively, as compared to placebo (p = 0.0001). As the baseline number of cigarettes smoked per day increased, the likelihood of quitting decreased regardless of the treatment condition. Insomnia and dry mouth were positively associated with mean metabolite concentrations, and dry mouth was inversely related to patient weight. Anxiety was inversely related to predicted steady-state concentration (Cpss), suggesting a positive effect on this withdrawal symptom.
Bupropion SR exhibits a statistically significant dose/plasma level-response relationship for smoking cessation. Dry mouth and insomnia, related to concentrations, may be managed with dose reduction, with the realization that smoking cessation may be impaired. abstract_id: PUBMED:34041117 Bupropion-induced acute dystonia in a patient with bulimia nervosa: A case report. Bupropion is taken as an antidepressant for treatment of major depressive disorders, treatment of sexual side effects of selective serotonin reuptake inhibitors, and as a smoking cessation aid, however, it may result in adverse effects such as nausea, dry mouth, headache, insomnia, dizziness, anxiety, tremor, and constipation. We investigate the case of a 34-year-old woman with bulimia nervosa where acute dystonia was induced by bupropion in 8 months. Following this diagnosis and after normal tests and MRI results, the patient was advised to discontinue bupropion intake. In the follow-up done 2 weeks later, 3 months later, and 6 months later, no signs of acute dystonia was observed. A physician who administers dopamine blocking agents must be aware of the prevalence of and the risk factors for acute dystonia and also the way of prevention and treatment. abstract_id: PUBMED:12109935 Tolerability and safety of sustained-release bupropion in the management of smoking cessation. Sustained-release bupropion (bupropion SR) was first launched in the US in 1997 as an aid to smoking cessation and has since been launched in many other countries. Adverse events associated with the use of bupropion SR at the recommended dosage of 150mg twice daily in clinical trials most commonly included insomnia, headache, dry mouth, nausea and anxiety; insomnia and anxiety are also recognised as symptoms of nicotine withdrawal. Only insomnia and dry mouth occurred significantly more frequently with bupropion SR than with placebo. Relative to placebo, no significant changes in mean values for heart rate, blood pressure or routine laboratory parameters have been reported in smokers using bupropion SR alone in clinical trials. When bupropion SR was compared with a nicotine transdermal patch in a clinical trial, insomnia predominated in the bupropion SR group, while dream abnormalities were more common in smokers using the nicotine patch. Bupropion SR and the nicotine transdermal patch in combination can be used safely (with appropriate monitoring) as an aid to smoking cessation. Infrequent but clinically important adverse reactions to bupropion SR include seizures and hypersensitivity reactions: in controlled clinical trials of bupropion SR (300 mg/day), where smokers were carefully screened for risk factors for seizure, the incidence of both seizures and severe hypersensitivity reactions was approximately 0.1% for each event. In order to avoid a risk of seizure of greater than 0.1%, smokers should be screened for predisposing risk factors and adhere to the manufacturer's dosage recommendations (maximum daily dose of 300mg). Thus, bupropion SR is generally well tolerated, as seen by the low discontinuation rate due to an adverse event in clinical trials (6 to 12%). The most common adverse events (insomnia and dry mouth) are generally transient and often resolve quickly without therapeutic intervention; they can be managed if necessary by a reduction in bupropion dose. abstract_id: PUBMED:33225492 Continuation phase treatment outcomes for switching, combining, or augmenting strategies for treatment-resistant major depressive disorder: A VAST-D report. 
Background: This secondary analysis of the VA Augmentation and Switching Treatments for Depression study compared the continuation phase treatment outcomes of three commonly used second-step treatment strategies following at least one prior failed medication treatment attempt. Methods: In total, 1522 outpatients with MDD were randomized to switching to bupropion-SR (S-BUP), combining with bupropion-SR (C-BUP), or augmenting with aripiprazole (A-ARI). Following 12 weeks of acute phase treatment, 725 entered the 24-week continuation treatment phase. Depressive symptom severity, relapse, "emergent" remission, anxiety, suicidal ideation, quality of life, health status, and side effects were compared. Results: We did not find clinically significant differential treatment effects with the exception that A-ARI was associated with less anxiety than S-BUP or C-BUP. Participants who entered continuation treatment as remitters had milder depressive symptom severity and lower relapse rates than those not in remission; they also experienced more improvement on most other outcomes. A-ARI was associated with less anxiety, insomnia, and dry mouth but more somnolence, extrapyramidal effects, akathisia, abnormal laboratory values, and appetite and weight gain. Conclusions: Continuation treatment is a dynamic period. Regardless of the treatment, participants who entered continuation treatment at Week 12 in full remission continued to have better outcomes over the subsequent 24 weeks than those who were not in remission at the start of the continuation phase. abstract_id: PUBMED:12109932 Pharmacokinetic optimisation of sustained-release bupropion for smoking cessation. Sustained-release bupropion (bupropion SR) is a unique, non-nicotine smoking cessation aid that is hypothesised to act upon neurological pathways involved in nicotine dependence. Pharmacokinetic and metabolism studies reveal that bupropion SR is metabolised by multiple pathways with no single pathway predominating. When one pathway is inhibited, others are available to compensate. Therefore, only a few clinically relevant drug-drug interactions involving bupropion SR have been observed, although the potential for interactions exists, as with any extensively metabolised drug. Population pharmacokinetic/pharmacodynamic analyses of data from patients receiving daily oral doses of 100mg, 150mg, or 300mg reveal that the anti-smoking efficacy of bupropion SR is directly related to dose. The incidences of dry mouth and insomnia were directly related to bupropion plasma concentrations while the incidence of anxiety was inversely proportional to bupropion plasma concentrations. To maximise efficacy (with an acceptable safety profile), the optimal daily dose for the majority of patients is 300mg. abstract_id: PUBMED:9448656 Double-blind comparison of bupropion sustained release and sertraline in depressed outpatients. Background: A sustained-release formulation of bupropion (bupropion SR), developed with an improved pharmacokinetic profile to permit less frequent dosing than the immediate-release form, has not been evaluated in active comparator trials. This randomized, double-blind, parallel-group trial was conducted to compare the efficacy and safety of bupropion SR and sertraline. Method: Outpatients with moderate to severe major depressive disorder (DSM-IV) received bupropion SR (100-300 mg/day) or sertraline (50-200 mg/day) for 16 weeks. 
Psychiatric evaluations, including the Hamilton Rating Scale for Depression (HAM-D), the Hamilton Rating Scale for Anxiety (HAM-A), the Clinical Global Impressions scale for Severity of Illness (CGI-S), and for Improvement (CGI-I) were completed, and adverse events were assessed in the clinic periodically throughout treatment. Patients' orgasm function was also assessed. Results: Mean HAM-D, HAM-A, CGI-I, and CGI-S scores improved over the course of treatment in both the bupropion SR group and the sertraline group; no between-group differences were observed on any of the scales. Orgasm dysfunction was significantly (p < .001) more common in sertraline-treated patients compared with bupropion SR-treated patients. The adverse events of nausea, diarrhea, somnolence, and sweating were also experienced more frequently (p < .05) in sertraline-treated patients. No differences were noted between the two treatments for vital signs and weight. Conclusion: This double-blind comparison of bupropion SR and sertraline demonstrates that bupropion and sertraline are similarly effective for the treatment of depression. Both compounds were relatively well tolerated, and orgasm dysfunction, nausea, diarrhea, somnolence, and sweating were reported more frequently in sertraline-treated patients. abstract_id: PUBMED:10817105 Bupropion sustained release versus paroxetine for the treatment of depression in the elderly. Background: Depression is a serious and widespread emotional disorder among the elderly. This study compared the efficacy and safety of bupropion sustained release (SR) with the selective serotonin reuptake inhibitor paroxetine in the treatment of major depression in elderly outpatients. Method: Elderly (≥ 60 years) outpatients with major depressive disorder (DSM-IV criteria) were evaluated in this 6-week multicenter, randomized, double-blind study comparing bupropion SR, 100-300 mg/day, and paroxetine, 10-40 mg/day. Efficacy was assessed by changes in scores on the Hamilton Rating Scales for Depression (HAM-D) and Anxiety (HAM-A) and the Clinical Global Impressions-Severity of Illness and -Improvement scales. Safety was assessed by monitoring adverse events, vital signs, and body weight. Results: A total of 100 patients ranging in age from 60 to 88 years were randomly assigned to treatment with bupropion SR (N = 48) or paroxetine (N = 52). Measurements of efficacy were similar between the 2 treatment groups, with both groups showing improved scores on all depression rating scales. Headache, insomnia, dry mouth, agitation, dizziness, and nausea occurred in > 10% of patients in both groups; somnolence, diarrhea, constipation, and anorexia occurred in > 10% of patients in the paroxetine group. No statistically significant differences between groups in vital signs or weight were found. Conclusion: Both bupropion SR and paroxetine were safe and effective for the treatment of depression in the elderly. Because of its favorable side effect profile, bupropion SR may provide a safe and effective nonserotonergic treatment alternative that is well suited as an antidepressant for the elderly. abstract_id: PUBMED:29670752 Quality of life, binge eating and sexual function in participants treated for obesity with sustained release naltrexone/bupropion.
Objective: This multicenter, randomized, controlled, open-label trial examined weight-related quality of life, control over eating behaviour and sexual function after 26 weeks of treatment with either 32 mg naltrexone sustained release (SR)/360 mg bupropion SR plus a comprehensive lifestyle intervention program (NB + CLI, N = 153) or usual care (UC, N = 89), which included minimal lifestyle intervention. Methods: Impact of Weight on Quality of Life-Lite, Binge Eating Scale and Arizona Sexual Experiences Scale were assessed at baseline (BL) and weeks 16 and 26. Results: NB + CLI and UC participants lost 9.46% and 0.94%, respectively, of initial body weight at week 26 (P < 0.0001). NB + CLI participants had greater improvements in Impact of Weight on Quality of Life-Lite total score than UC participants (P < 0.0001). In participants with moderate/severe Binge Eating Scale scores at BL, 91% of NB + CLI and 18% of UC participants experienced categorical improvements. In participants with Arizona Sexual Experiences Scale-defined sexual dysfunction at BL, 58% of NB + CLI and 19% of UC participants no longer met dysfunction criteria at week 26. The most frequent adverse events leading to discontinuation before week 26 in NB + CLI included nausea (10.5%); anxiety (3.3%); and headache, hypertension, insomnia and palpitations (1.3% each). Conclusion: Compared with UC, participants treated with NB + CLI experienced greater improvements in weight-related quality of life, control over eating behaviour, and sexual function. abstract_id: PUBMED:19017591 Using second-generation antidepressants to treat depressive disorders: a clinical practice guideline from the American College of Physicians. Description: The American College of Physicians developed this guideline to present the available evidence on the pharmacologic management of the acute, continuation, and maintenance phases of major depressive disorder; dysthymia; subsyndromal depression; and accompanying symptoms, such as anxiety, insomnia, or neurovegetative symptoms, by using second-generation antidepressants. Methods: Published literature on this topic was identified by using MEDLINE, EMBASE, PsychLit, the Cochrane Central Register of Controlled Trials, and International Pharmaceutical Abstracts from 1980 to April 2007. Searches were limited to English-language studies in adults older than 19 years of age. Keywords for search included terms for depressive disorders and 12 specific second-generation antidepressants-bupropion, citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, mirtazapine, nefazodone, paroxetine, sertraline, trazodone, and venlafaxine-and their specific trade names. This guideline grades the evidence and recommendations by using the American College of Physicians clinical practice guidelines grading system. RECOMMENDATION 1: The American College of Physicians recommends that when clinicians choose pharmacologic therapy to treat patients with acute major depression, they select second-generation antidepressants on the basis of adverse effect profiles, cost, and patient preferences (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 2: The American College of Physicians recommends that clinicians assess patient status, therapeutic response, and adverse effects of antidepressant therapy on a regular basis beginning within 1 to 2 weeks of initiation of therapy (Grade: strong recommendation; moderate-quality evidence).
RECOMMENDATION 3: The American College of Physicians recommends that clinicians modify treatment if the patient does not have an adequate response to pharmacotherapy within 6 to 8 weeks of the initiation of therapy for major depressive disorder (Grade: strong recommendation; moderate-quality evidence). RECOMMENDATION 4: The American College of Physicians recommends that clinicians continue treatment for 4 to 9 months after a satisfactory response in patients with a first episode of major depressive disorder. For patients who have had 2 or more episodes of depression, an even longer duration of therapy may be beneficial (Grade: strong recommendation; moderate-quality evidence). Answer: No, pretreatment insomnia or anxiety does not predict the acute response to bupropion SR. According to a retrospective analysis conducted to determine whether pretreatment levels of insomnia or anxiety were associated with the likelihood of or time to antidepressant response with bupropion SR, neither baseline insomnia nor baseline anxiety was related to the likelihood of achieving a response. However, higher baseline insomnia and lower baseline anxiety were associated with an earlier onset of response, approximately one week sooner in each case (PUBMED:15941026).
Instruction: Two-year audit of computed tomographic colonography in a teaching hospital: are we meeting the standard? Abstracts: abstract_id: PUBMED:19292776 Two-year audit of computed tomographic colonography in a teaching hospital: are we meeting the standard? Objective: We aimed to determine whether adopting the published recommendations has led to successful implementation of computed tomographic colonography (CTC) in a teaching hospital setting outside the context of a clinical trial. Method: An audit of all the CTC examinations between April 2005 and June 2007 was conducted to determine the following: adequacy of bowel preparation, CTC indications and findings (compared with available colonoscopy), complications and experience of reporting radiologist. Results: The most common indications for the 111 CTC patients reviewed included exclusion of synchronous colonic tumours, incomplete colonoscopy and altered bowel habit. Only 16% of ascending colon/caecal segments were clear of faecal or fluid contamination. The rectum and sigmoid colon were free of contamination in 78% and 74% of cases respectively. Appropriately trained radiologists reported 91% of studies. Thirty-two percent of studies were normal. The most common positive findings were diverticular disease or a rectal tumour. Sensitivity, specificity and positive predictive value were 89%, 94% and 90% respectively (all polyps) with a sensitivity of 98.5% for lesions > 5 mm in size. Twenty-five percent of patients had extracolonic abnormalities. There were no recorded complications. Conclusion: Our CTC practice is within accepted published guidelines. Bowel preparation is suboptimal in a significant proportion of cases and faecal tagging is being implemented. abstract_id: PUBMED:19114774 Computed tomographic colonography. Purpose Of Review: Computed tomographic colonography is a new and noninvasive method to evaluate the colon. The goal of this review is to discuss the latest data and define outstanding issues related to computed tomographic colonography. Recent Findings: Computed tomographic colonography is gaining momentum as a potential primary colon cancer screening method in the USA. Although not as accurate as colonoscopy, the accuracy of computed tomographic colonography for detection of large lesions appears to be in the 80-90% range. The field is rapidly evolving, not only in terms of technology but also in a variety of other practical areas. Summary: Current data suggest that computed tomographic colonography is a viable colon cancer screening modality in the USA. However, it is not ready for widespread implementation, largely because of lack of standards for training and reading and the fact that the number of skilled readers is limited. abstract_id: PUBMED:24547623 Computed tomographic colonography Computed tomographic colonography, i.e. CT colonography, has already widely replaced double-contrast colonography as a method of examination of the colon. A prerequisite of successful examination is a well-emptied and distended colon. CT colonography detects cancer with a sensitivity of 96%, which corresponds to the results of colonoscopy. Furthermore, CT colonography allows the simultaneous assessment of the extracolonic organs in the abdominal cavity. The risk of bowel perforation is smaller than in conventional colonoscopy, and for the patient the CT examination is more convenient than conventional endoscopy or double-contrast colonography. abstract_id: PUBMED:16013646 Computed tomographic colonography.
Computed tomographic colonography, also called virtual colonoscopy, is an evolving technology under evaluation as a new method of screening for colorectal cancer. However, its performance as a test has varied widely across studies, and the reasons for these discrepancies are poorly defined. We provide an overview of some potential causes and discuss the available, often indirect, evidence. In addition, several other obstacles that may influence implementation are discussed. Future investigations should demonstrate the influence of these potential factors on sensitivity of computed tomographic colonography. Despite a growing body of evidence, it remains uncertain to what extent patient acceptance, radiation issues, flat lesions, and extracolonic findings will be a stumbling block to using computed tomographic colonography for colorectal cancer screening. abstract_id: PUBMED:20124234 Is computed tomographic colonography being held to a higher standard? Recent guidelines for colorectal cancer screening have reached different conclusions on whether computed tomographic colonography (CTC) is an acceptable screening option, and the Centers for Medicare & Medicaid Services recently decided not to cover CTC screening. The rationale against recommending or covering CTC screening includes concerns about radiation exposure, false-negative rates for small polyps, the discovery of extracolonic findings, variability in performance, a lack of targeted studies, a higher adenoma rate in the Medicare-eligible age group, and an absence of evidence that covering CTC would increase overall screening rates. Similar concerns can be raised for other recommended and covered colon cancer screening tests, but it seems that CTC is being held to a new and higher standard. abstract_id: PUBMED:19148358 Evaluation of computed tomographic colonography for detection of colorectal polyps Background: Computed tomographic colonography has been proposed for detection of colorectal polyps instead of colonoscopy in colorectal cancer screening programs. Aim: To evaluate the performance of computed tomographic colonography in the detection of colorectal polyps with colonoscopy used as the gold standard. Methods: We prospectively studied 20 patients at high risk for colorectal neoplasia (14 men and 6 women; mean age, 55 years and 59 years). Computed tomographic colonography was performed immediately before colonoscopy. We inserted a rectal balloon catheter and insufflated the colon with room air to the level that a good distension was observed. Twenty milligrams of hyoscine was given immediately before computed tomographic imaging of the abdomen and pelvis in order to minimize the degree of smooth-muscle spasm and peristalsis and to reduce the patient's discomfort. Results: Computed tomographic colonography images were considered satisfactory in all cases. Colonoscopy detected 85 polyps in 19 of 20 patients (95%). All the observed polyps were successfully removed and examined histologically. The radiological examination correctly identified 8 of 10 polyps 10 mm or more in diameter, 2 of 19 (18.2%) with 5-9 mm, and just 1 of 53 <5 mm (9.1%). Seventeen of the 43 adenomatous polyps were ≥5 mm. Eight (47%) were correctly identified on computed tomographic colonography. None of the neoplasias <5 mm were identified on colonography. Nineteen patients preferred colonoscopy in the event of having to repeat one of the two examinations.
Conclusion: For the detection of colorectal polyps, computed tomographic colonography seems to be useful only when the result is positive, as the negative results of this examination cannot eliminate the presence of these lesions. abstract_id: PUBMED:11918874 Detection of colorectal lesions with virtual computed tomographic colonography. Background: The aim of our study was to compare the performance of virtual computed tomographic colonography with that of conventional colonoscopy in a blinded, prospective study in 165 patients with suspected colorectal lesions. Methods: There were 165 patients, all referred for conventional colonoscopy, who underwent preliminary virtual computed tomographic colonography. Computed tomograhic images of all suspected lesions were analyzed and subsequently compared with conventional colonoscopy findings. Results: There were 30 colorectal cancers and 37 polyps identified at conventional colonoscopy. Virtual computed tomographic colonography correctly detected all cancers, as well as 11 of 12 polyps of 10 mm in diameter or larger (sensitivity, 92%); 14 of 17 polyps between 6 and 9 mm (sensitivity, 82%); and 4 of 8 polyps of 5 mm or smaller (sensitivity, 50%). The per-patient sensitivity and specificity were 92% and 97%, respectively. Conclusions: Virtual computed tomographic colonography has a diagnostic sensitivity similar to that of conventional colonoscopy for the detection of colorectal lesions larger than 6 mm in diameter. abstract_id: PUBMED:21184904 Computed tomographic colonography. This article provides basic information about computed tomographic colonography (CTC) and reviews the preparation, methods, and tools required for the procedure. The clinical uses for CTC (screening/diagnosis of colon cancer and colonic obstruction) are outlined, and its accuracy and validity are compared with other diagnostic methods. A summary of the benefits and risks of the test are presented and the current practicalities for implementation are addressed. abstract_id: PUBMED:19578607 Performance of computed tomographic colonography for the screening of colorectal polyp in acromegalic patients: a prospective study. Context: Acromegalic patients have better chances to develop colorectal polyps and cancer and, considered a high-risk group, need to undergo frequent screening examinations. Moreover, in acromegalia, the increased bowel length and the intestinal loop complexity can lead to higher levels of technical difficulties and increase the risks of complications at conventional colonoscopy. Computed tomographic colonography, also known as virtual colonoscopy, is an innovative and secure technology which is revolutionizing the diagnosis of colon and rectum neoplasias. Objective: To analyze computed tomographic colonography performance for the screening of colorectal polyps in acromegalic patients. Methods: A prospective study of 21 asymptomatic acromegalic patients, 12 male and 9 female, average age 49, who underwent computed tomographic colonography and conventional colonoscopy. Computed tomographic colonography was performed with a GE Helical Multislice Computed Tomography Apparatus. Conventional colonoscopy was performed in the same day, without previous knowledge of the computed tomographic colonography diagnostics. The study evaluated the capacity of computed tomographic colonography to detect patients with colorectal polyps and identify each colorectal lesion described by the colonoscopy. Results: In two patients (2/21), conventional colonoscopy was incomplete. 
However, in all patients computed tomographic colonography was complete. In Phase I ('per patient'), computed tomographic colonography diagnosed eight of the nine patients with colorectal polyps and showed 88% sensitivity, 75% specificity and 81% accuracy. In Phase II ('per polyp'), out of the 21 acromegalic patients included in this study, 12 presented normal findings at conventional colonoscopy. A total of 19 polyps were identified in 9 patients. Ten of the 19 polyps were smaller than 10 mm, and 9 were equal to or larger than 10 mm. Computed tomographic colonography identified 7 of the 9 polyps &gt;10 mm described by conventional colonoscopy and only 6 of the 10 small polyps identified at conventional colonoscopy were detected by computed tomographic colonography. The histological analysis of resected lesions revealed 12 tubular adenomas, 6 hyperplastic polyps and 1 colonic tubulo-villous adenoma with an adenocarcinoma focus. Conclusion: The authors present the first reports of computed tomographic colonography in the screening of colorectal polyps in acromegalic patients. In this study, computed tomographic colonography was performed without complications and a complete and safe colorectal evaluation was possible in all acromegalic patients. Moreover, computed tomographic colonography presented good sensitivity, specificity and accuracy for the identification of acromegalic patients with polyps of any size and better results in the diagnosis of large polyps, when they were compared to small polypoid lesions. abstract_id: PUBMED:36462802 Training in Computed Tomographic Colonography Interpretation: Recommendations for Best Practice. The value of computed tomographic colonography (CTC) as a sensitive diagnostic investigation for colorectal cancer is well established. However, there is lack of consensus in the best way to achieve expertise in interpreting these studies. In this review we discuss the value of CTC training, accreditation and performance monitoring; the qualities of good CTC interpretation training, and specific training cases with associated learning points. Answer: The two-year audit of computed tomographic colonography (CTC) at a teaching hospital aimed to determine if the implementation of CTC met published recommendations and standards outside of a clinical trial setting (PUBMED:19292776). The audit reviewed 111 CTC examinations conducted between April 2005 and June 2007, focusing on the adequacy of bowel preparation, indications and findings of CTC (compared with available colonoscopy), complications, and the experience of the reporting radiologist (PUBMED:19292776). The audit found that the most common indications for CTC were to exclude synchronous colonic tumors, incomplete colonoscopy, and altered bowel habits. However, bowel preparation was suboptimal in a significant number of cases, with only 16% of ascending colon/caecal segments clear of fecal or fluid contamination. In contrast, the rectum and sigmoid colon were free of contamination in 78% and 74% of cases, respectively. To address this, the implementation of fecal tagging was considered (PUBMED:19292776). The audit also revealed that appropriately trained radiologists reported 91% of the studies, and 32% of the studies were normal. The most common positive findings were diverticular disease or a rectal tumor. The sensitivity, specificity, and positive predictive value for all polyps were 89%, 94%, and 90%, respectively, with a sensitivity of 98.5% for lesions greater than 5 mm in size. 
Additionally, 25% of patients had extracolonic abnormalities, and no complications were recorded (PUBMED:19292776). The conclusion of the audit was that the CTC practice at the teaching hospital was within accepted published guidelines, despite the noted issue with bowel preparation. The results suggest that while there is room for improvement in certain areas, such as bowel preparation, the overall implementation of CTC at the hospital met the standard recommendations (PUBMED:19292776).
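The sensitivity, specificity, and positive predictive value figures quoted in this answer all derive from the same 2x2 comparison between the index test (CTC) and the reference standard (colonoscopy). As a purely illustrative sketch — the counts below are hypothetical and merely chosen so the resulting percentages land near the audit's reported 89%/94%/90%, not data from any cited study — the arithmetic is:

```python
# Illustrative only: hypothetical counts chosen so the results land near the
# audit's reported 89% sensitivity, 94% specificity and 90% PPV.
# TP = polyps seen on both CTC and colonoscopy, FN = missed by CTC,
# FP = called by CTC but absent at colonoscopy, TN = correctly negative.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # share of true lesions detected
        "specificity": tn / (tn + fp),  # share of negatives correctly cleared
        "ppv": tp / (tp + fp),          # chance a positive CTC call is real
        "npv": tn / (tn + fn),          # chance a negative CTC truly excludes
    }

print(diagnostic_metrics(tp=89, fp=10, fn=11, tn=170))
```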
Instruction: Standards of reporting of randomized controlled trials in general surgery: can we do better? Abstracts: abstract_id: PUBMED:18827662 Randomized controlled trials in plastic surgery: a 20-year review of reporting standards, methodologic quality, and impact. Background: Randomized controlled trials in plastic surgery have not been analyzed comprehensively. We analyzed plastic surgical randomized controlled trials with respect to reporting standards, methodologic quality, and impact on the specialty. Methods: Randomized controlled trials published from 1986 to 2006 in three major plastic surgery journals were scored for quality and impact using the Consolidated Standards of Reporting Trials checklist, the Jadad criteria, citation numbers, and other parameters. The associations between the quality scores and multiple independent parameters, including trial impact, were explored. The relative impact of randomized controlled trials in plastic surgery was compared with that in other specialties. Results: A total of 163 randomized controlled trials were evaluated. The average Consolidated Standards of Reporting Trials and Jadad scores were 49 percent and 2.3, respectively. There were deficiencies in the reporting of parameters that influence bias and statistical significance. Randomized controlled trials with high impact or high methodologic quality had higher reporting scores. However, the quality and impact scores did not correlate with the number of participants, subject category, country of origin, or year or journal of publication. Nonsurgical trials had significantly higher quality and impact than surgical trials. Randomized controlled trials in plastic surgery had relatively lower impact as compared with randomized controlled trials in other specialties. Conclusions: The reporting and methodologic standards of randomized controlled trials in plastic surgery need improvement. Standards could be improved if well-accepted reporting and methodologic criteria are considered when designing and evaluating randomized controlled trials. Instituting higher standards may improve the impact of randomized controlled trials and make them more influential in plastic surgery. abstract_id: PUBMED:33175208 The reporting quality of randomized controlled trials and experimental animal studies for urethroplasty. Objectives: To assess the reporting quality of randomized controlled trials and experimental animal studies examining urethroplasty in reconstructive urological surgery literature. Methods: We performed a comprehensive literature search to identify all urethroplasty-related RCTs examining humans as well as animal models. We used the Consolidated Standards of Reporting Trials (CONSORT) and the Animals in Research: Reporting in vivo Experiments (ARRIVE) guidelines to assess reporting quality. Two reviewers performed data abstraction independently and in duplicate. We then generated descriptive statistics including CONSORT (0-25) and ARRIVE (0-20) summary scores using the median and interquartile range. Results: Twenty studies were ultimately included; 14 randomized controlled trials and 6 experimental animal studies. All studies were two-armed, parallel group studies. Median sample sizes (and interquartile range) of the human and animal studies were 48.5 (31.8-53.8) and 18 (15.3-27.5), respectively. The median CONSORT and ARRIVE scores were 10.0 (8.75-12.63) and 7.97 (6.79-8.64), respectively. 
Human randomized controlled trials did not consistently report the method of allocation concealment (6/14; 42.9%), blinding (2/14; 14.3%), or discuss the generalizability of the results (6/14; 42.9%). Animal studies infrequently reported why a given animal model was used (1/6; 16.7%), how animals were allocated to groups (0/6; 0%), or what the experimental primary and secondary outcomes were (0/6; 0%). Conclusions: Urethroplasty literature is marked by a paucity of both randomized controlled trials and experimental design animal studies. The existing studies are inconsistently reported and are therefore of uncertain methodological quality. abstract_id: PUBMED:27514437 Methodological reporting quality of randomized controlled trials: A survey of seven core journals of orthopaedics from Mainland China over 5 years following the CONSORT statement. Background: In recent years, the number of randomized controlled trials (RCTs) in the field of orthopaedics has been increasing in Mainland China. However, randomized controlled trials (RCTs) are prone to bias if they lack methodological quality. Therefore, we performed a survey of RCTs to assess: (1) What is the quality of RCTs in the field of orthopedics in Mainland China? (2) Is there a difference between the core journals of the Chinese department of orthopedics and Orthopaedics Traumatology Surgery & Research (OTSR)? Material And Methods: This research aimed to evaluate the methodological reporting quality according to the CONSORT statement of randomized controlled trials (RCTs) in seven key orthopaedic journals published in Mainland China over 5 years from 2010 to 2014. All of the articles were hand-searched in the Chongqing VIP database between 2010 and 2014. Studies were considered eligible if the words "random", "randomly", "randomization", "randomized" were employed to describe the allocation method. Trials involving animals or cadavers, trials published as abstracts or case reports, trials dealing with subgroup analysis, or trials without outcomes were excluded. In addition, eight articles selected from Orthopaedics Traumatology Surgery & Research (OTSR) between 2010 and 2014 were included in this study for comparison. The identified RCTs were analyzed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT), including the sample size calculation, allocation sequence generation, allocation concealment, blinding and handling of dropouts. Results: A total of 222 RCTs were identified in seven core orthopaedic journals. No trials reported adequate sample size calculation, 74 (33.4%) reported adequate allocation generation, 8 (3.7%) trials reported adequate allocation concealment, 18 (8.1%) trials reported adequate blinding and 16 (7.2%) trials reported handling of dropouts. In OTSR, 1 (12.5%) trial reported adequate sample size calculation, 4 (50.0%) reported adequate allocation generation, 1 (12.5%) trial reported adequate allocation concealment, 2 (25.0%) trials reported adequate blinding and 5 (62.5%) trials reported handling of dropouts. There were statistically significant differences in sample size calculation and handling of dropouts between papers from Mainland China and OTSR (P<0.05). Conclusion: The findings of this study show that the methodological reporting quality of RCTs in seven core orthopaedic journals from Mainland China is far from satisfactory and needs further improvement to keep up with the standards of the CONSORT statement. Level Of Evidence: Level III case control.
abstract_id: PUBMED:31787318 Adequate reporting of the sample size calculation in surgical randomized controlled trials. Background: The sample size calculation for a randomized controlled trial should assure that a statistically significant and clinically relevant difference can be detected with a high probability. To be certain that readers of a randomized controlled trial can critically judge whether the assumptions for the sample size calculation were realistic, a number of specific subitems should be reported. Methods: In this minireview, the frequency of adequate reporting of specific sub-items of the sample size was assessed from surgical randomized controlled trials that were published in the 5 leading general surgical journals in 2017. Subitems from the Consolidated Standards for Reporting Trials and from the Standard Protocol Items: Recommendations for Interventional Trials Statement were considered. Results: A total of 85 relevant randomized controlled trials were identified over PubMed. The overall adequate reporting of the sample size according to the Consolidated Standards for Reporting Trials was 53% (45 of 85). When considering additional subitems from the Standard Protocol Items: Recommendations for Interventional Trials Statement which are not explicitly requested by the Consolidated Standards for Reporting Trials Statement, the adherence decreased to 12% (10 of 85). Conclusion: The adherence in reporting potentially important subitems of the sample size from the Consolidated Standards for Reporting Trials Statement was moderate among surgical randomized controlled trials. It is crucial that surgeons who plan and conduct randomized controlled trials be aware of these subitems. abstract_id: PUBMED:16872940 Improving the quality of reporting randomized controlled trials in cardiothoracic surgery: the way forward. Objective: To evaluate the quality of reporting of randomized controlled trials in cardiothoracic surgery, to identify factors associated with good reporting quality, and to assess the awareness of the Consolidated Standards for Reporting of Trials statement and ascertain the views of authors reporting randomized controlled trials on the difficulties in conducting randomized controlled trials and the possible ways to further improve the reporting quality of randomized controlled trials in cardiothoracic surgery. Methods: Randomized controlled trials of cardiothoracic surgery published in principal cardiothoracic and 4 general medical journals in 2003 were included. The quality of reporting of randomized controlled trials was assessed by using allocation concealment, the Jadad score, and a Consolidated Standards for Reporting of Trials checklist devised for the purpose. A questionnaire survey of authors reporting randomized controlled trials in principal cardiothoracic journals in 2003 was conducted. Results: The overall reporting quality of the 64 randomized controlled trials included in the analysis was suboptimal as assessed by the 3 methods adopted. Most of the authors (63.5%) were not aware of the Consolidated Standards for Reporting of Trials statement; however, awareness was not associated with reporting quality. More than 65% of the authors responded that conducting randomized controlled trials in surgical specialties was difficult, and the main difficulties were blinding and obtaining a large-enough sample size to detect statistically significant differences. 
Fifty-four percent of the authors responded that endorsement of the Consolidated Standards for Reporting of Trials statement by the cardiothoracic journals may improve the reporting quality. Conclusions: The quality of reporting randomized controlled trials in cardiothoracic surgery is suboptimal. Endorsement of the Consolidated Standards for Reporting of Trials statement by the cardiothoracic journals may improve the quality of reporting. abstract_id: PUBMED:34575180 A Historical Analysis of Randomized Controlled Trials in the Management of Pain in Rotator Cuff Tears. The aim of this analysis was to assess the quality of reporting of randomized controlled trials (RCTs) relating to pain management in rotator cuff (RC) tears. This review evaluated the quality of the studies in the literature regarding this topic through the use of some factors and trends. The online databases used to search all RCTs on the topic of RC surgery were Medline, Scopus, CINAHL, EMBASE, and CENTRAL. This research was completed in September 2020. To assess the quality of reports, the Consolidated Standards of Reporting Trials (CONSORT) and the modified Coleman methodology score (MCMS) were used. From the research, 262 articles emerged. Finally, 79 studies were included in this historical analysis. There were no statistically significant changes in MCMS across trials that included or did not include a CONSORT diagram (p = 0.10). A statistically significant difference in MCMS was discovered between papers produced prior to 2009 and publications produced after 2015 (p = 0.03). There was no association between the number of checklist items for each article and the Coleman score. During the years there has been a significant increase in both quantity and quality of RCTs relating to pain in RC tears. abstract_id: PUBMED:33817962 Adherence to the CONSORT statement and extension for nonpharmacological treatments in randomized controlled trials of bariatric surgery: A systematic survey. Reporting is critical for establishing the value of randomized controlled trials (RCTs). This study evaluated the adherence of bariatric surgery RCT reporting to the CONsolidated Standards Of Reporting Trials (CONSORT) statement 2010 and its 2017 extension for non-pharmacologic treatments (NPT extension). We identified all RCTs comparing bariatric surgery with conservational therapy or alternative bariatric surgery up to June 30, 2020. Reporting quality was assessed using criteria developed from the CONSORT statement and the NPT extension and scored as a percentage. The factors associated with reporting quality were explored by univariate and multivariate analysis. In total, 102 RCTs of bariatric surgery were included. The median scores according to the CONSORT statement and NPT extension were 63.3 and 26.8 of a maximum possible 100, respectively. Two-thirds of NPT extension items were reported in less than 25% of the RCTs. The median score improved over time for the CONSORT statement but not the NPT extension. A higher CONSORT score was associated with publication in core clinical journals, protocol registration, and funding. No factors associated with the NPT extension score were identified. Substantial efforts are warranted from authors, journals, registration platforms, and funders to overcome the flaws in the reporting of bariatric surgery RCTs. abstract_id: PUBMED:12006922 Reporting randomized, controlled trials: where quality of reporting may be improved. 
Introduction: Evidence-based medicine relies on reproducible, high-quality reporting in the literature. Previous evaluations, which have assessed 11 basic elements of design and analysis in top impact clinical journals (both nonsurgical and surgical), have demonstrated that the reporting quality is less than perfect, although improving. The current study evaluates the quality of reporting in Diseases of the Colon and Rectum and other clinically related journals to identify specific areas where future improvements may be made. Methods: Two independent evaluators assessed all randomized, controlled trials published in Diseases of the Colon and Rectum in the years 1990, 1995, and 2000. Additional assessments for 2000 were performed on all randomized, controlled trials published in Annals of Surgery, Archives of Surgery, and Gastroenterology. The frequency of reporting of 11 explicitly defined, traditionally important, basic elements of design and analysis were determined. These elements included reporting of eligibility criteria, admission before allocation, randomization (and method), blinded assessment (patient and observer), complications, loss to follow-up, statistical approach and tests, and power calculation. Results: Interobserver reliability was strong (kappa, 0.76). The number of randomized, controlled trials published in Diseases of the Colon and Rectum increased from 5 (in 1990) to 13 (in 1995) to 17 (in 2000). Of the 1990 randomized, controlled trials, an average of 60 percent of the 11 basic elements were reported. Of the 1995 randomized, controlled trials, 72 percent of the items were reported (P = 0.05), whereas of the 2000 randomized, controlled trials, 77 percent of the 11 items were reported (P &lt; 0.002 vs. 1990). The best-reported items were eligibility criteria, discussion of statistical tests, and accounting for all patients lost to follow-up. Only 11 percent of the 2000 randomized, controlled trials reported statistical power calculations. For the other journals that were evaluated, 72 to 88 percent of items were reported, with eligibility criterion being the best consistently reported item and power calculation being the worst. Conclusions: For Diseases of the Colon and Rectum, the number of randomized, controlled trials and the quality of reporting is improving. However, although certain research standards are reported adequately, others are not. The calculation of statistical power is clearly important when interpreting randomized, controlled trial results (whether differences are reported or not), yet only 11 percent of studies contained this information. Improving the reporting of this single item would likely lead to improving the overall quality of clinical studies in colorectal surgery. Improved reporting might be best facilitated by having authors adhere to a list of explicitly determined elements that should be included. abstract_id: PUBMED:21224750 The quality of randomized controlled trial reporting in spine literature. Study Design: Retrospective literature review. Objective: To assess the quality of randomized controlled trial reporting in spine surgery. Summary Of Background Data: The use of the Consolidated Standards of Reporting Trials (CONSORT) criteria in clinical trials aims to improve adherence to a set of generally accepted practices in the reporting of randomized controlled trials. 
Randomized controlled trials in spine surgery are important sources of evidence-based practice, but the quality of their execution and reporting has not been reported adequately. Methods: All randomized controlled trials published in three selected dedicated spine journals from 2008 were reviewed with respect to the 40 criteria derived from CONSORT descriptors; 10 criteria applying to the abstract, and 30 to the body of the article. Each article was scored by three reviewers in group format for each criterion and was assigned a score. Reviewers always came to a consensus before assigning a score. Results: A total of 32 RCTs met the inclusion criteria for this review. The average modified CONSORT score was 65%. Disclosing certain criteria was associated with a higher overall score, including method of generation of the random sequence (P < 0.000071), allocation concealment (P < 0.00014), inclusion of a flow chart or description of patient numbers at different stages (P < 0.00034), and identification of outcome measures (P < 0.00064). Conclusion: Conclusions drawn from current randomized controlled trials in the spine literature may thus be difficult to interpret. Greater effort must be put into compliance with these guidelines to improve data quality. abstract_id: PUBMED:28566206 Quality of reporting for randomized controlled trials in the hypospadias literature: Where do we stand? Introduction: To assess the quality of reporting of randomized controlled trials (RCTs) in the hypospadias literature using the 2010 Consolidated Standards of Reporting Trials (CONSORT) statement. It was hypothesized that hypospadias RCTs that contained clear descriptions of key methodological items, allocation concealment, blinding, and sample size justification would have higher overall quality of reporting scores (OQS). Materials And Methods: A comprehensive search was conducted through MEDLINE to identify RCTs in hypospadias surgical techniques and postoperative management during the period 1990-2014. Two reviewers independently selected articles, which were evaluated using the CONSORT checklist. An overall quality score (%) was calculated to assess the quality of reporting. In addition, a methodological index score out of 4 was calculated based on the following items: use of intention to treat/sample size justification, allocation concealment, specification of randomization type, and blinding of outcome assessors. Results: Of the 76 initial results, 39 (51%) were excluded due to their predominant focus on anesthesia. After full-text screening, 10 (13%) citations were further excluded because they were case-control studies or did not focus on hypospadias techniques, resulting in 27 (36%) studies included for analysis. The mean overall quality score was 37 ± 12% and the median was 36% (range: 14-61%). Fifteen (56%) studies were identified as low quality (score <40%) and 12 (44%) as moderate quality (40-70%). No studies were classified as high quality (>70%). Hypospadias RCTs published between 2007 and 2014 versus those reported before 2007 (44 ± 9% vs 33 ± 11%, P = 0.01), RCTs with a sample size >100 patients versus those <100 (47 ± 8% vs 36 ± 11%, P = 0.01), RCTs that disclosed having received funding versus those that did not (56 ± 4% vs 38 ± 10%, P < 0.01) and RCTs that had proof of biostatistician/epidemiologist support versus those that did not (58 ± 5% vs 36 ± 11%, P = 0.01) had a higher mean OQS.
The number of articles that met specific 2010 CONSORT criteria is illustrated in Summary Fig. Discussion: It was found that the contemporary hypospadias literature continues to suffer from suboptimal reporting standards. There seems to be an improvement in the OQS for studies published after 2007 and those with larger sample sizes, usually >100 patients. Nevertheless, none of the studies obtained high quality of reporting (OQS >70%) as per the CONSORT statement checklist. The inadequacies in reporting were related to sample size justifications, randomization method, allocation concealment strategy, blinding, description of subjects lost to follow-up, and intention-to-treat (ITT) analysis. These findings were consistent with the poor quality of reporting observed in other surgical fields. Conclusions: The current overall quality score in hypospadias literature is suboptimal and efforts must be made to improve quality. Answer: The standards of reporting randomized controlled trials (RCTs) in general surgery, as in other medical and surgical specialties, have been found to be suboptimal and there is a recognized need for improvement. Several studies have assessed the quality of RCT reporting across various surgical fields, including general surgery, and have identified areas where reporting is lacking and could be enhanced. For instance, a study assessing the quality of RCT reporting in spine literature found that the average modified CONSORT score was 65%, indicating that many RCTs did not fully adhere to the CONSORT guidelines, which are designed to improve the reporting of RCTs (PUBMED:21224750). Similarly, a review of RCTs in cardiothoracic surgery revealed that the overall reporting quality was suboptimal, and the authors suggested that endorsement of the CONSORT statement by cardiothoracic journals may improve the quality of reporting (PUBMED:16872940). In the field of plastic surgery, an evaluation of RCTs published over a 20-year period found that the reporting and methodological standards needed improvement, with average CONSORT and Jadad scores being 49 percent and 2.3, respectively (PUBMED:18827662). The study in orthopaedics from Mainland China also highlighted that the methodological reporting quality of RCTs was far from satisfactory, with no trials reporting adequate sample size calculation and very few reporting adequate allocation concealment or blinding (PUBMED:27514437). Moreover, a systematic survey of RCTs in bariatric surgery found that adherence to the CONSORT statement and its extension for nonpharmacological treatments was low, with median scores of 63.3 and 26.8 out of 100, respectively (PUBMED:33817962). This indicates that there is considerable room for improvement in the reporting of RCTs in this surgical field as well. In conclusion, the evidence suggests that the standards of reporting RCTs in general surgery, as well as in other surgical specialties, could be significantly improved. Adherence to established reporting guidelines such as the CONSORT statement is crucial for enhancing the quality, transparency, and reliability of RCTs, which in turn can lead to better evidence-based practice in surgery.
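Several of the reviews cited in this answer summarize reporting quality as an "overall quality score": the percentage of applicable checklist items that a trial reports adequately. The sketch below illustrates that arithmetic only; the item names and example ratings are simplified placeholders, not the actual CONSORT checklist or data from any cited study.

```python
# Minimal sketch of an "overall quality score": items adequately reported
# divided by applicable items, times 100. Item names and the example ratings
# are placeholders, not the actual CONSORT checklist or data from any study.
CHECKLIST_ITEMS = [
    "sample_size_calculation",
    "random_sequence_generation",
    "allocation_concealment",
    "blinding_of_outcome_assessors",
    "participant_flow_diagram",
    "handling_of_dropouts",
]

def overall_quality_score(reported: dict) -> float:
    rated = [item for item in CHECKLIST_ITEMS if item in reported]
    if not rated:
        raise ValueError("no applicable checklist items were rated")
    return 100.0 * sum(bool(reported[item]) for item in rated) / len(rated)

example_trial = {
    "sample_size_calculation": False,
    "random_sequence_generation": True,
    "allocation_concealment": False,
    "blinding_of_outcome_assessors": True,
    "participant_flow_diagram": True,
    "handling_of_dropouts": False,
}
print(f"OQS = {overall_quality_score(example_trial):.0f}%")  # prints: OQS = 50%
```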
Instruction: Buttonhole cannulation in hemodialysis: improved outcomes and increased expense--is it worth it? Abstracts: abstract_id: PUBMED:20426358 Buttonhole cannulation in hemodialysis: improved outcomes and increased expense--is it worth it? Background: Access to an adequate blood flow is a requirement for successful hemodialysis (HD). This often means repeated cannulation of an arteriovenous fistula (AVF), which can lead to damage that needs repair and revision. The Buttonhole (BH) method offers a successful cannulation with minimal damage. Design: A prospective cohort research study was initiated in two HD units in St. John's, Newfoundland and Labrador, to assess the effects of cannulating AVFs using the BH technique from the patient and nurse perspective. Methods: Twenty-five nurses and 29 patients completed questionnaires at four times throughout the three-month study period, rating their confidence levels about BH cannulation issues. Patients also provided information on the pain of the cannulation and the frequency of cannulation complications. Nurses documented data on arterial and venous pressures, and hemostasis times. Patient charts were also reviewed for complications requiring extensive interventions such as AVF repair or Central Line Catheter (CVC) placement. The cost of providing the BH cannulation was also examined. Results: At the end of the study, it was noted that cannulation pain was statistically reduced with both the arterial (p = .002) and venous (p = .010) needles, and vessel pressures and hemostasis times were decreased slightly or stayed the same throughout the study. The frequency of access infections, however, increased, although not significantly. Using a 10-point Likert scale in which a score of &gt; or = 8 indicates a high level of confidence, 77.5% of nurses and 73.9% of patients reported a high level of confidence in the nurses' abilities to use the BH technique effectively. In terms of expense, no significant changes were noted in frequency of procedures required for AVF repair with the BH cannulation, although an increase of approximately $358.80 per patient per year for BH supplies was noted. Conclusion: BH cannulation did provide significant improvements. However, the increase in infection rate was an issue of concern. The additional cost of the BH procedure should be weighed against the positive outcomes realized. abstract_id: PUBMED:26994293 Pro: Buttonhole cannulation of arteriovenous fistulae. Arteriovenous fistulae remain the access of choice for most hemodialysis patients. However, several factors limit their uptake and long-term patency, resulting in suboptimal prevalent rates in many high-income countries. Patients place considerable value on the avoidance of vascular access complications, pain and disfigurement. The approach to cannulation is a modifiable practice that could improve patient-important outcomes, with buttonhole needling offering some theoretical advantages over the standard rope-ladder and area methods. In this narrative review, we summarize key findings of studies reporting the benefits and risks associated with the buttonhole method, highlighting methodological limitations as well as recent refinements to the technique that may represent potential opportunities for reducing infection risk. We highlight the need for greater certainty surrounding the potential benefits of the buttonhole cannulation technique and propose some directions for future research. 
abstract_id: PUBMED:30707492 Stenosis and thrombosis-unveiled complications of buttonhole cannulation. The vascular access is the mainstay of hemodialysis. Arteriovenous fistula has been prioritized as the first choice of long-term vascular access for its lower rate of complications and improved patient survival. Rope ladder and buttonhole venipuncture are the most common techniques for arteriovenous fistulae cannulation. Much of the concern regarding buttonhole cannulation is the increased risk for potentially severe systemic infections. Increased risk for stenosis and thrombosis has been addressed but not confirmed. We discuss two cases of stenosis and thrombosis related to buttonhole puncture which prompted angiographic intervention, highlighting the need for close surveillance of arteriovenous fistulae when using buttonhole cannulation. abstract_id: PUBMED:25874197 Formation of abnormal fistula on arteriovenous fistula in a patient with buttonhole cannulation: a case report. Buttonhole cannulation has been suggested as a technique that may lead to a reduction in many complications when compared with other techniques such as rope-ladder cannulation. Despite this, some complications still occur, which may lead to a dysfunction of the arteriovenous fistula (AVF) and inadequate dialysis, having an impact on the quality of life of the patients or eventually making the vascular access unusable. We report a 47-year-old Chinese male with end-stage renal disease who had maintenance hemodialysis three times a week for 5 years. The AVF on the left wrist had been created 5 years earlier and proved to be functionally effective with some monitoring and surveillance measures in the fourth year of hemodialysis. Two months later, the buttonhole cannulation technique was applied due to fistula puncture difficulty caused by previous aneurysmal dilatation. After 10 buttonhole cannulations, the edema on the left forearm of the patient indicated a dysfunction of the AVF. This was further confirmed by the HD02 hemodialysis monitor and a vascular ultrasound, revealing a recirculation of 87%, an abnormal fistula between the AVF and the vein together with thrombus, and aneurysmal dilatation near the anastomosis. The site of cannulation was changed to avoid heavy recirculation (reduced to 25%), and a new AVF on the left wrist was successfully created. Abnormal fistula on the AVF and vein during buttonhole cannulation is rarely reported, which reminds us that vascular access surveillance should be performed regularly before and during buttonhole tract construction to detect possible dysfunction of the AVF in patients undergoing buttonhole cannulation. abstract_id: PUBMED:32734206 Staphylococcus aureus Bacteremia Risk in Hemodialysis Patients Using the Buttonhole Cannulation Technique: A Prospective Multicenter Study. Rationale & Objective: Arteriovenous fistulas are the preferred access type for hemodialysis. The buttonhole needling technique has become an alternative to stepladder or area puncture. However, an increased risk for infection has been described. The present study examined the risk for infectious complications with different needling techniques. Study Design: Prospective multicenter observational cohort study with 5 years of follow-up. Setting & Participants: In-center hemodialysis patients from 5 hemodialysis units in Denmark, dialyzed on a native arteriovenous fistula. 286 patients were included; 144 cannulated with the buttonhole technique.
Exposure: The buttonhole cannulation technique was compared to the stepladder or area puncture technique. Outcomes: Primary end points: event rates of access-related Staphylococcus aureus bacteremia and the HR for first access-related S aureus bacteremia. Secondary end points: local infections and access-related S aureus bacteremia-related metastatic infections and mortality. Analytical Approach: Time-to-event analysis using Cox proportional hazards regression to estimate the HR of access-related S aureus bacteremia in buttonhole cannulation compared to stepladder/area puncture. Poisson regression was used for incidence rate ratio calculations. Results: S aureus caused 48 access-related bacteremias; 43 (90%) in the buttonhole group compared with 5 (10%) in the stepladder/area group. The HR for first access-related S aureus bacteremia was significantly higher for buttonhole cannulation compared to stepladder/area needling (unadjusted, 6.8 [95% CI, 2.4-19.1]; adjusted, 8.4 [95% CI, 2.9-24.2]). The incidence rate ratio for access-related S aureus bacteremia was 6.8 (95% CI, 2.9-16.1), and the incidence rate ratio of local cannulation-site infection without access-related S aureus bacteremia was 3.8 (95% CI, 1.3-15.4) for buttonhole cannulation compared to stepladder/area needling. Limitations: Nonrandomized observational design, prevalent hemodialysis patients. Conclusions: Access-related S aureus bacteremia rates were very high for buttonhole cannulation compared to stepladder/area needling, questioning the use of buttonhole cannulation in routine clinical practice. A restrictive approach to buttonhole use is recommended, with buttonhole cannulation only being used as a second alternative to area technique when stepladder cannulation is not feasible. abstract_id: PUBMED:35372920 Buttonhole Cannulation of Arteriovenous Fistulas in the United States. The cannulation technique of a hemodialysis vascular access has remained controversial with differing viewpoints. The quality of dialysis, overall patient safety, and individual dialysis experience often dictate the type of cannulation technique used in clinical practice. The three commonly used techniques to access a hemodialysis vascular access are the rope ladder, area, and buttonhole. Although the buttonhole technique has been around since the mid-1970s, the dialysis community remains divided on its suitability for routine use to provide maintenance hemodialysis therapy. The proponents of this technique value the ease of cannulation with less pain and discomfort whereas the opponents highlight the increased risk of infection. The actual clinical evidence from the United States is limited and remains inconclusive. The current review provides an overview of the available experience from the United States, highlighting the correct technique of creating a buttonhole, summarizing the current evidence, and recommending a need for larger randomized controlled studies in both in-center and home hemodialysis populations. abstract_id: PUBMED:31490050 Arteriovenous Buttonhole Access Cannulation in Pediatric Patients on Hemodialysis. The arteriovenous fistula (AVF) is the recommended hemodialysis access for pediatric patients who weigh more than 20 kg and who are not expected to receive a kidney transplant for one year or longer. Whereas buttonhole cannulation of the AVF has been discouraged in adults because of the associated risk of infection, the published pediatric experience with this technique is extremely limited.
A retrospective chart review of all buttonhole cannulated AVFs in a single pediatric hemodialysis unit was performed. Approximately 5,600 cannulations were performed over 215.5 patient months with no infections of the AVF or adjacent skin in 13 of 14 (93%) patients. Results from this experience provide evidence that the buttonhole cannulation technique can be successfully performed in pediatric patients on hemodialysis without an associated increased risk of infection. abstract_id: PUBMED:35243305 Direction- and Angle-Assisted Buttonhole Cannulation of Arteriovenous Fistula in Hemodialysis Patients: A Multicenter Randomized Controlled Trial. Rationale & Objective: Arteriovenous fistula cannulation with the buttonhole technique is often preferred by patients but has been associated with an increased infection risk. Guidelines disagree on whether it should be abandoned, thus we assessed a technologically simple method to facilitate gentler arteriovenous fistula cannulation with potentially less discomfort and damage to the epithelial lining of the buttonhole tract. Study Design: 8-week, prospective, open-label, randomized controlled trial. Setting & Participants: Patients with buttonhole tracts receiving hemodialysis at 7 dialysis centers in Norway were randomized to the intervention group (43 patients, 658 cannulations) or control group (40 patients, 611 cannulations). Intervention: Direction and angle of the established buttonhole tract were marked on the forearm skin in the intervention group, whereas the control group had no structured cannulation information system. Outcomes: The primary outcome was successful cannulation, defined as correct placement of both blunt needles at the first attempt without needing to change needles, perform extra perforations, or reposition the needle. The secondary outcomes were patient-reported difficulty of cannulation (verbal rating scale: 1 = very easy, 6 = impossible) and intensity of pain (numeric rating scale: 0 = no pain, 10 = unbearable pain). Results: After a 2-week run-in period, successful cannulation was achieved in 73.9% and 74.8% of the patients in the intervention and control groups, respectively (relative risk [RR], 0.99; 95% CI, 0.87-1.12; P = 0.85). However, the probability of a difficult arterial cannulation (verbal rating scale, 3-6) was significantly lower in the intervention group (RR, 0.69; 95% CI, 0.55-0.85; P = 0.001). There were no improvements for venous cannulations. Furthermore, the probability of a painful cannulation (numeric rating scale, 3-10) was lower in the intervention group (RR, 0.72; 95% CI, 0.51-1.02; P = 0.06). Limitations: Unable to evaluate hard end points such as infections and thrombosis owing to the small sample size. Conclusions: Marking direction and angle of cannulation did not improve cannulation success rates; however, patients more often reported an unproblematic procedure and less pain. Funding: None. Trial Registration: ClinicalTrials.gov (NCT01536548). abstract_id: PUBMED:29605379 Bacteriology of the Buttonhole Cannulation Tract in Hemodialysis Patients: A Prospective Cohort Study. Background: The buttonhole cannulation technique for arteriovenous fistulas is widely used, but has been associated with an increased rate of vascular access-related infections. We describe the frequency and type of bacterial colonization of the buttonhole tract over time and associated clinical infections. Study Design: A prospective observational cohort study with 9 months of follow-up. 
Setting & Participants: 84 in-center hemodialysis patients using the buttonhole cannulation technique at 2 Danish dialysis centers. Outcomes: Bacterial growth from the buttonhole tract and dialysis cannula tip and clinically important infections during follow-up. Measurements: On 3 occasions 1 month apart, cultures before dialysis (from the skin surrounding the buttonhole before disinfection and from the cannulation tract after disinfection and scab removal) and the cannula tip after dialysis. Patients with positive cultures from the buttonhole tract or cannula tip had repeat cultures within 1 week, along with blood cultures. Results: Growth from the cannulation tract and/or cannula tip at each of the 3 monthly sets of cultures was found in 18%, 20%, and 17% of patients, respectively. 38% of patients had at least 1 positive culture from the buttonhole tract. Sustained growth was detected in 11% of patients, whereas asymptomatic bacteremia was seen in 30% of those with positive buttonhole cultures. Staphylococci species were the most common pathogens (Staphylococcus aureus, 25%; and Staphylococcus epidermidis, 41%). Colonization-positive buttonholes had more localized redness and slightly more tenderness. During follow-up, significantly more access-related infections were diagnosed among those with positive buttonhole cultures (P&lt;0.001). Limitations: No comparison to area puncture cannulation technique. Blood cultures were obtained only from patients with positive buttonhole bacteriology. Conclusions: Transient or sustained colonization of the buttonhole tract by staphylococci and asymptomatic bacteremia is common in hemodialysis patients, implying a substantial risk for access-related infections among patients using a buttonhole cannulation technique. These findings suggest the possible value of surveillance of buttonhole colonization. abstract_id: PUBMED:25110302 Buttonhole versus rope-ladder cannulation of arteriovenous fistulas for hemodialysis: a systematic review. Background: The buttonhole technique is an alternative method of cannulating the arteriovenous fistula (AVF) in hemodialysis (HD), frequently used for home HD patients. However, the balance of risks and benefits of the buttonhole compared with the rope-ladder technique is uncertain. Study Design: A systematic review of randomized trials and observational studies (case reports, case series, studies without a control group, non-English studies, and abstracts were excluded). Setting & Population: HD patients (both in-center conventional HD and home HD) using an AVF for vascular access. Selection Criteria For Studies: We searched MEDLINE, EMBASE, EBM Reviews, and CINAHL from the earliest date in the databases to March 2014 for studies comparing clinical outcomes of the buttonhole versus rope-ladder technique. Intervention: Buttonhole versus rope-ladder cannulation technique. Outcomes: The primary outcomes of interest were patient-reported cannulation pain and rates of AVF-related local and systemic infections. Secondary outcomes included access survival, intervention, hospitalization, and mortality, as well as hematoma and aneurysm formation, time to hemostasis, and all-cause hospitalization and mortality. Results: Of 1,044 identified citations, 23 studies were selected for inclusion. 
There was equivocal evidence with respect to cannulation pain: pooled observational studies yielded a statistical reduction in pain with buttonhole cannulation (standardized mean difference, -0.76 [95%CI, -1.38 to -0.15] standard deviations), but no difference in cannulation pain was found among randomized controlled trials (standardized mean difference, 0.34 [95%CI, -0.76 to 1.43] standard deviations). Buttonhole, as compared to rope-ladder, technique appeared to be associated with increased risk of local and systemic infections. Limitations: Overall poor quality and substantial heterogeneity among studies precluded pooling of most outcomes. Conclusions: Evidence does not support the preferential use of buttonhole over rope-ladder cannulation in either facility-based conventional HD or home HD. This does not preclude buttonhole cannulation as being appropriate for some patients with difficult-to-access AVFs. Answer: Buttonhole cannulation in hemodialysis has been a topic of debate due to its potential benefits and drawbacks. On one hand, studies have shown that buttonhole cannulation can lead to statistically reduced cannulation pain for both arterial and venous needles, and may result in decreased vessel pressures and hemostasis times (PUBMED:20426358). Additionally, a high level of confidence in the nurses' abilities to use the buttonhole technique effectively has been reported by both nurses and patients (PUBMED:20426358). However, the buttonhole technique has been associated with an increased risk of infection, particularly access-related Staphylococcus aureus bacteremia, which raises concerns about its safety (PUBMED:32734206). The increased infection rate is a significant issue, with one study reporting that 90% of access-related bacteremias were in the buttonhole group compared to 10% in the stepladder/area group (PUBMED:32734206). Moreover, there have been cases of stenosis and thrombosis related to buttonhole puncture, which required angiography intervention (PUBMED:30707492). In terms of cost, an increase of approximately $358.80 per patient per year for buttonhole supplies was noted, without significant changes in the frequency of procedures required for arteriovenous fistula (AVF) repair (PUBMED:20426358). This additional cost must be weighed against the positive outcomes realized. The evidence does not uniformly support the preferential use of buttonhole over rope-ladder cannulation, as the overall poor quality and substantial heterogeneity among studies precluded pooling of most outcomes (PUBMED:25110302). While some patients with difficult-to-access AVFs may find buttonhole cannulation appropriate, the increased risk of local and systemic infections associated with the technique is a major concern. In conclusion, while buttonhole cannulation may offer some benefits such as reduced pain and potentially easier cannulation, the increased risk of infection and additional costs are significant factors that must be considered. The decision on whether it is worth it may depend on individual patient circumstances, the ability to mitigate infection risks, and the value placed on the potential benefits by patients and healthcare providers.
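For context on figures such as the access-related S. aureus bacteremia comparison quoted above, an incidence rate ratio is simply the ratio of events per person-time in the two needling groups. In the sketch below, the event counts (43 vs 5) are taken from the abstract, but the person-years of follow-up are hypothetical placeholders chosen so the crude ratio lands near the reported 6.8; this is illustrative arithmetic, not a reconstruction of the study's analysis.

```python
# Event counts from the abstract (43 vs 5 bacteremias); person-years are
# hypothetical placeholders, so the rates and the ratio are illustrative only.
def incidence_rate(events: int, person_years: float) -> float:
    """Events per person-year of follow-up."""
    return events / person_years

buttonhole_rate = incidence_rate(events=43, person_years=380.0)  # assumed follow-up
stepladder_rate = incidence_rate(events=5, person_years=300.0)   # assumed follow-up

irr = buttonhole_rate / stepladder_rate  # crude (unadjusted) incidence rate ratio
print(f"buttonhole: {buttonhole_rate:.3f}/person-year, "
      f"stepladder/area: {stepladder_rate:.3f}/person-year, IRR = {irr:.1f}")
```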
Instruction: Is a return to baseline sexual function possible? Abstracts: abstract_id: PUBMED:21107844 Is a return to baseline sexual function possible? An analysis of sexual function outcomes following laparoscopic radical prostatectomy. Objectives: Outcome measures following radical prostatectomy are not standardized. Though excellent potency rates are widely reported, few studies address a return to baseline function. We analyze validated sexual health-related quality-of-life outcomes by a strict definition, a return to baseline function, and compare them to less stringent, yet more frequently referenced, categorical definitions of potency. Methods: Patients undergoing laparoscopic radical prostatectomy from April 2001 to September 2007 completed the Expanded Prostate Cancer Index Composite (EPIC) questionnaire at baseline and 3, 6, 12, and 24 months postoperatively. We defined a return to baseline as a recovery to greater than one-half standard deviation of the studied population below the patient's own baseline (clinically detectable threshold). We compared these outcomes to a categorical definition of potency involving intercourse frequency. To limit confounders, we performed subset analyses of preoperatively potent men who received bilateral nerve preservation. Factors predictive of return to baseline function were assessed in multivariable analysis. Results: A total of 568 patients met inclusion criteria. Mean age and follow-up were 57.2 years and 16.9 months, respectively. Using the categorical definition, 85% of preoperatively potent men with bilateral nerve preservation were "potent" at 24 months; however, only 27% returned to their baseline sexual function. In multivariable analyses baseline function, number of nerves spared, and age were independent predictors of a return to baseline function. Conclusion: While most preoperatively potent men who receive bilateral nerve preservation engage in intercourse postoperatively, few return to their baseline sexual function. We believe that these data provide context for the expectations of patients who elect extirpative therapy. abstract_id: PUBMED:31590961 Assessment of Return to Baseline Urinary and Sexual Function Following Primary Focal Cryotherapy for Nonmetastatic Prostate Cancer. Background: The oncological outcomes in men with clinically significant prostate cancer following focal cryotherapy are promising, although functional outcomes are under-reported. Objective: To determine the impact of focal cryotherapy on urinary and sexual function, specifically assessing return to baseline function. Design, Setting, And Participants: Between October 2013 and November 2016, 58 of 122 men who underwent focal cryotherapy for predominantly anterior clinically significant localised prostate cancer within a prospective registry returned patient-reported outcome measure questionnaires, which included International Prostate Symptom Score (IPSS) and International Index of Erectile Function (IIEF-15) questionnaires. Intervention: Standard cryotherapy procedure using either the SeedNet or the Visual-ICE cryotherapy system. Outcome Measurements And Statistical Analysis: Primary outcome was return to baseline function of IPSS score and IIEF erectile function (EF) subdomain. Cumulative incidence and Cox-regression analyses were performed. Results And Limitations: Probability of returning to baseline IPSS function was 78% at 12 mo and 87% at both 18 and 24 mo, with recovery seen up to 18 mo. 
For IIEF (EF domain), the probability of returning to baseline function was 85% at 12 mo and 89% at both 18 and 24 mo, with recovery seen up to 18 mo. Only the preoperative IIEF-EF score was associated with a poor outcome (hazard ratio 0.96, 95% confidence interval 0.93-0.999, p = 0.04). The main limitation was that only half of the patients returned their questionnaires. Conclusions: In men undergoing primary focal cryotherapy, there is a high degree of preservation of urinary and erectile function with return to baseline function occurring from 3 mo and continuing up to 18 mo after focal cryotherapy. Patient Summary: In men who underwent focal cryotherapy for prostate cancer, approximately nine in 10 returned to their baseline urinary and sexual function. Keeping in mind that level 1 evidence and long-term data are still needed, in men who wish to preserve urinary and sexual function, focal cryotherapy may be considered an alternative treatment option to radical therapy. abstract_id: PUBMED:37444764 Quality of Life from Return to Work and Sport Activities to Sexual Dysfunction after Surgical Treatment of Pelvic Ring Fractures. Background: Pelvic ring fractures are life-threatening injuries that have a severe impact on patients' lives. The aim of this clinical study was to evaluate the outcome of surgical treatment in terms of Quality of Life (QoL), return to work, functional results and sport activities, and post-operative sexual dysfunction. Methods: A retrospective study with patients retrieved from a Level 1 Trauma Center was performed. Minimum patient follow-up was 12 months: QoL was evaluated with the SF-12 (Short Form Survey) questionnaire, return to work with the Workplace Activity Limitation Survey (WALS), functional outcomes and sport activities with the Harris Hip Score and Tegner activity score, respectively, and sexual function damage with a 0-10 NRS. Results: Seventy-six patients (41 males and 35 females) were enrolled, with a mean age at surgery of 56.4 years (18-89 years). Overall, their quality of life remained significantly affected, with male patients reporting worse WALS outcomes (p = 0.036), sexual damage (p = 0.001), and SF-12 Bodily Pain (p = 0.046) than females. In particular, 70.7% of men and 45.7% of women reported sexual limitations, and only 53.7% returned to their job, with 35.2% losing their job as a consequence of the pelvic ring disruption. Conclusions: An important deterioration in general health state, return to work, and sexual function was documented in patients treated surgically for pelvic trauma, especially in male patients. There are disabling secondary sequelae at all levels beyond the mere functional scores, and both patients and clinicians should be aware of them and have the correct expectations. abstract_id: PUBMED:36573663 Changes in Performance on Common Return-to-Sport Tests During a Collegiate Women's Soccer Season: Implications for Baseline Testing. Background: Baseline testing of objective lower limb function may help clinicians make more informed return-to-sport (RTS) decisions in the event of an anterior cruciate ligament (ACL) injury. However, as these tests are based on physical performance, it is possible that they improve during the season as athletes get stronger and fitter. Hence, it may be difficult to ascertain the patient's preinjury status and have an accurate reference for comparison when determining readiness for RTS.
The purpose of this study was to examine changes in common ACL RTS tests during a college soccer season to determine the most appropriate time to perform baseline testing. Hypothesis: Hop test performance will improve across the season. Study Design: Descriptive laboratory; prospective cohort. Level Of Evidence: Level 4. Methods: A total of 31 women's soccer players from 1 NCAA Division I university agreed to participate in this study. Participants performed 4 single-leg hop tests and the 505-agility test to measure cutting speed on 3 occasions: preseason (PRE), midseason (MID), and end-of-season (END). Performance on each test was compared across days to determine whether performance increased during the season. As a secondary analysis, limb symmetry index (LSI) was also compared across the season. Results: A total of 23 participants (age, 19.7 ± 1.3 years; height, 1.69 ± 0.07 m; weight, 60.9 ± 7.2 kg) completed all 3 testing sessions during the season. Performance during PRE was better than MID and END for all hop tests (all P < 0.01). LSI did not change during the season (P value range, 0.18-0.79). Conclusion: Performance on the hop tests was best during preseason and declined thereafter, which may be reflective of cumulative fatigue. Clinical Relevance: Baseline testing of RTS tests performed during preseason may provide an accurate representation of an athlete's best abilities over the course of a collegiate soccer season. Preseason testing would also enable athletic trainers to acquire baseline data for all athletes before they are injured. abstract_id: PUBMED:26539834 Erectile function after radical prostatectomy: Do patients return to baseline? Objective: The aim of this study was to assess postprostatectomy erectile function compared to preoperative status by subjective patient perception and the abbreviated International Index of Erectile Function (IIEF-5) questionnaire. Materials And Methods: The study used data from a prospectively collected database and a cross-sectional, questionnaire-based study in patients following radical prostatectomy. Erectile function was assessed with the IIEF-5 and the question "Is your erectile function as good as before the surgery (yes/no)". Patients were included if they were sexually active before surgery and had at least 1 year of follow-up. The main outcome measure was the proportion of patients returning to self-perceived baseline erectile function. Secondary outcome measures included the proportion of patients returning to baseline erectile function according to the IIEF-5 and predictors of return to baseline function. Results: Questionnaires from 210 patients were available. Overall, 14 patients (6.7%) reported that their erections were as good as before surgery. Bilateral nerve-sparing was the only significant predictor of a return to baseline erectile function (p = 0.004). Forty-three patients (20.5%), who did not report use of erectile aids, showed no decline in IIEF-5 score. When including patients who used erectogenic aids, 69 (32.9%) maintained their preoperative IIEF-5 score. On multivariate analysis a low preoperative IIEF-5 score was a significant predictor of return to baseline IIEF-5 score (p < 0.0001). Conclusions: Return to subjective baseline erectile function following radical prostatectomy is rare. The IIEF-5 questionnaire may not adequately reflect patients' experience. This should be considered in preoperative patient counselling.
abstract_id: PUBMED:35547208 Return of Sexual Activity Within Six Weeks of Childbirth Among Married Women Attending Postpartum Clinic of a Teaching Hospital in Ethiopia. Introduction: Postpartum sexual health gets very little attention compared to pregnancy and childbirth, even though most maternal deaths and disabilities occur during this time. Therefore, the study aimed to assess return of sexual activity within 6 weeks of childbirth among married women attending the postpartum clinic of a teaching hospital in Ethiopia, 2021. Methods: The hospital-based cross-sectional study design was implemented from September to October 2021. Eligible postpartum women were sampled by a systematic random sampling technique. The data were entered into EPI-info and exported to SPSS version 24 for further analysis. All variables with a p-value < 0.05 in multivariable analyses were taken as factors associated with the return to sexual activity before 6 weeks of childbirth. Results: A total of 421 postpartum women participated in the study. The prevalence of women who returned to sexual activity within 6 weeks after childbirth was 31.6%. The study revealed that monogamy (Adjusted Odds Ratio (AOR) = 4.4, 95% Confidence Interval (CI) (2.1, 9.4)), parity (AOR = 0.11, 95% CI (0.02-0.81)) and (AOR = 0.1, 95% CI (0.015-0.72)), postnatal care (AOR = 1.8, 95% CI (1.01-3)) and infant feeding status (AOR = 2.3, 95% CI (1.3-4)) were significantly associated with return of sexual activity before 6 weeks of childbirth. Conclusion: The findings of this study suggested that nearly one-third of postpartum women engaged in sexual activity within 6 weeks of childbirth. Return of sexual activity within 6 weeks of childbirth was associated with monogamous marriage, parity, postnatal care, and child feeding status. Strengthening postpartum counseling regarding the appropriate time to resume sexual activity is crucial. Regular postpartum visits following deliveries should also be encouraged by health care providers. abstract_id: PUBMED:30911872 A Qualitative Examination of the Sexual Possible Selves and Strategies of First-Semester College Students: How Sexual Possible Selves are Developed During the Transition to College. According to the possible selves theory, individuals have possible selves that they hope to attain, or feared selves they hope to avoid, in the proximal future. In addition, individuals may have strategies they use to help them attain these possible selves or avoid their feared selves. Recent work has applied this theory to the realm of sexuality (i.e., sexual possible selves; SPS) in the developmental period of emerging adulthood, as this period is considered a time of increased sexual identity development. The purpose of this study was to extend this research by conducting semi-structured qualitative interviews with a sample of first-semester college-attending emerging adults (N = 35) at a 4-year university. We examined the developmental influences on expected and feared SPS to better understand why various internalized expectations develop. Interviews were conducted during the first four weeks of the Fall 2016 semester and were analyzed using applied thematic analysis. Prominent themes that emerged within the expected SPS included: sex and commitment, taking a passive approach, delaying sex and relationships, plans for hooking up, and abstinence. Feared SPS themes that emerged included: non-committed sexual avoidance, sexual assault/coercion, reputation, and sexual health.
Prominent influences on participants' SPS included: family, alcohol and parties, peers/friends, past experiences, changes in expectations, college culture, and religion. Implications for sexuality research, education, and intervention are discussed. abstract_id: PUBMED:35218369 Quality of life from return to work and sports activities to sexual dysfunction after surgical treatment of acetabular fractures. Introduction: To evaluate the outcomes of patients treated for acetabular fractures in terms of Quality of Life (QoL), return to work, functional outcome, sports activities, and post-operative sexual dysfunction. Materials And Methods: The database of a level 1 trauma center was used to investigate the results of patients treated for acetabular fractures from 2010 to 2020. The minimum patient follow-up was 12 months. QoL was evaluated with the SF-12 questionnaire, return to work with the Workplace Activity Limitation Survey (WALS), functional outcomes and sports activities with Harris Hip Score (HHS) and Tegner activity scores, respectively, and the sexual function damage with a 0-10 NRS. Results: Sixty-five patients (44 men, 21 women) were enrolled, mean age at surgery of 53.4 years (20-85 years). The HHS and Tegner scores were 79.0 ± 19.4 and 2.6 ± 1.5, respectively, and 18.5% of patients underwent THA after 21.3 ± 28.6 months. QoL evaluated with the SF-12 showed the worst results in terms of Bodily Pain and Role Physical; 35.4% of the patients who were working before the trauma lost their job, and one third was affected at sex functional level. Sex was found to influence significantly both Harris Hip Score (p < 0.05) and Tegner activity score (p < 0.05), with men reporting better scores. On the other hand, men presented a higher impact in terms of job loss and sexual dysfunction. Conclusion: Acetabular fractures affect patients' life as a whole, beyond the mere functional outcomes, often leading to a high rate of job loss and a significant probability of sexual functional worsening, especially for men. A significant number of patients also underwent THA at an early follow-up. Further efforts should aim at optimizing the surgical management, to improve functionality and the entire QoL sphere of patients affected by acetabular fractures. abstract_id: PUBMED:34638719 Oxytocin, Erectile Function and Sexual Behavior: Last Discoveries and Possible Advances. A continuously increasing amount of research shows that oxytocin is involved in numerous central functions. Among the functions in which oxytocin is thought to be involved are those that play a role in social and sexual behaviors, and the involvement of central oxytocin in erectile function and sexual behavior was indeed one of the first to be discovered in laboratory animals in the 1980s. The first part of this review summarizes the results of studies done in laboratory animals that support a facilitatory role of oxytocin in male and female sexual behavior and reveal mechanisms through which this ancient neuropeptide participates in concert with other neurotransmitters and neuropeptides in this complex function, which is fundamental for the species reproduction. The second part summarizes the results of studies done mainly with intranasal oxytocin in men and women with the aim to translate the results found in laboratory animals to humans. Unexpectedly, the results of these studies do not appear to confirm the facilitatory role of oxytocin found in male and female sexual behavior in animals, both in men and women.
Possible explanations for the failure of oxytocin to improve sexual behavior in men and women and strategies to attempt to overcome this impasse are considered. abstract_id: PUBMED:27889829 Sexual function changes during pregnancy. Introduction And Hypothesis: We describe changes in sexual activity and function among pregnant nulliparous women. Methods: This secondary analysis evaluated sexual activity and function with the Female Sexual Function Index (FSFI) at baseline in the first (T1), second (T2) and early third (T3) trimester. Evaluation was repeated in T3 after 36 completed weeks of gestation. Data were assigned to the trimester in which they were collected and compared across trimesters. FSFI items were dichotomized into favorable and unfavorable responses. ANOVA and logistic regression models were used to compare values across trimesters, taking into account repeated measurements. Pair-wise comparisons of trimesters were done when there were significant overall differences. Significance was set at p < 0.05. We adjusted for baseline differences. Results: Of 627 women, four did not give sexual function data. Baseline data were collected in 124 women in T1, 403 in T2, and 96 in early T3. Of these 623 women, 496 (80 %) gave data again in T3. The participants' mean age was 24.2 ± 5.1 years and 44.5 % were Hispanic. Rates of sexual activity (T1 94 %, T2 90 %, T3 77 %; p < 0.001) and mean FSFI scores decreased as pregnancy progressed (T1 26.5 ± 7.7, T2 25.6 ± 9.0, T3 21.5 ± 10.3; T1/T2 vs. T3, p < 0.001). Using the FSFI cut-off score for sexual dysfunction of 26.55, women in T3 were more likely to report dysfunction than women in T2 (57 % vs. 37 %, p < 0.001). For specific FSFI questions, the proportions of women reporting favorable responses did not change between T1 and T2 (all p > 0.05) and the proportions of women with a favorable response decreased for all questions between T2 and T3. Conclusions: As pregnancy progresses, women report poorer sexual function. Answer: Based on the provided abstracts, a return to baseline sexual function after certain medical interventions or life events is possible, but the likelihood and extent of recovery can vary significantly depending on the specific circumstances and individual factors. For instance, after laparoscopic radical prostatectomy, while 85% of preoperatively potent men with bilateral nerve preservation were able to engage in intercourse postoperatively, only 27% returned to their baseline sexual function (PUBMED:21107844). Factors such as baseline function, number of nerves spared, and age were independent predictors of a return to baseline function. In contrast, men undergoing primary focal cryotherapy for prostate cancer had a higher likelihood of returning to baseline sexual function, with 85% at 12 months and 89% at both 18 and 24 months achieving this outcome (PUBMED:31590961). The preoperative International Index of Erectile Function (IIEF) score was associated with the outcome. For patients treated surgically for pelvic ring fractures, there was a significant deterioration in general health state, return to work, and sexual function, especially in male patients, indicating a challenging recovery to baseline sexual function (PUBMED:37444764). In another study, only 6.7% of patients reported that their erections were as good as before radical prostatectomy surgery, suggesting that a return to subjective baseline erectile function is rare (PUBMED:26539834).
For postpartum women, approximately 31.6% engaged in sexual activity within 6 weeks of childbirth, indicating that some women do return to sexual activity relatively quickly, although this does not necessarily reflect a return to baseline sexual function (PUBMED:35547208). Overall, while a return to baseline sexual function is possible, it is not guaranteed and is influenced by various factors including the type of medical intervention, individual health status, age, and the presence of supportive care and counseling.
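A short note on the adjusted odds ratios (AORs) reported in the postpartum study cited above (PUBMED:35547208): an odds ratio and its 95% confidence interval are typically obtained by exponentiating a logistic-regression coefficient and its confidence bounds. The sketch below only illustrates that arithmetic; the coefficient and standard error are hypothetical values, not figures taken from the study.

```python
import math

# Hypothetical logistic-regression output for a binary predictor
# (e.g., monogamous marriage vs. not): coefficient and standard error.
beta, se = 1.48, 0.39          # assumed values, for illustration only

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"AOR = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
# -> AOR = 4.4 (95% CI 2.0-9.4), i.e. the same form as the AORs quoted above
```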
Instruction: Does synergized pyrethrin applied over wetlands for mosquito control affect Daphnia magna zooplankton or Callibaetis californicus mayflies? Abstracts: abstract_id: PUBMED:18366067 Does synergized pyrethrin applied over wetlands for mosquito control affect Daphnia magna zooplankton or Callibaetis californicus mayflies? Background: Public health agencies may apply aerosolized synergized pyrethrin over wetlands repeatedly to control mosquitoes. This concerns wildlife managers because studies have shown the accumulation of pyrethroids, which are chemically similar to pyrethrin, in sediments in amounts that can be toxic to invertebrates. The authors tested whether repeated applications of synergized pyrethrin over wetlands caused mortality of two aquatic invertebrates: the zooplankton Daphnia magna Straus and a mayfly, Callibaetis californicus Banks. Fifteen wetland mesocosms were either exposed to repeated pyrethrin sprays or were protected by lids. Invertebrates in screened cages were placed in mesocosms before the fifth and eleventh spray, and directly into wetlands before spray 11. Six mesocosms were exposed to spray deposition. Caged adult mosquitoes were used to verify that sprays drifted over mesocosms. Sediments were analyzed for insecticide residues. Results: There were no detectable effects of synergized pyrethrin on 36 h survival of Daphnia or mayflies, but most exposed adult mosquitoes died. Some exposed sediments yielded pyrethrin (≤34.5 ng g(-1)); most showed piperonyl butoxide (PBO) (≤14.9 ng g(-1)). Conclusions: Deposition of aerosolized 25% pyrethrin + 5% PBO may contaminate wetlands, but its application at rates used for mosquito control did not produce detectable effects on indicator species. abstract_id: PUBMED:21661316 The potential of cladocerans as controphic competitors of the mosquito Culex pipiens. This study assesses the potential of cladocerans as competitors for controlling the oviposition and larval abundances of mosquitoes. Control of mosquito larvae involving the use of antagonists has focused mostly on predators. We hypothesized that cladoceran competitors have a strong potential to control larval populations of some species of mosquitoes that can be early colonizers of newly-filled waterbodies, and should be less efficient competitors. To test this hypothesis, the establishment and development of larval populations of wild Culex pipiens were investigated in outdoor microcosms varying in terms of Daphnia magna populations. When the population was well established (i.e., high densities of D. magna), oviposition was fully inhibited and there were consequently no mosquito larvae. When the population was more recently established (i.e., lower densities of D. magna), oviposition and larval development of Cx. pipiens occurred. In the absence of D. magna, oviposition, larval biomass, and abundance of Cx. pipiens reached high values. In this situation, conspecifics were inhibiting further oviposition of Cx. pipiens. Based on these results, we suggest that competing zooplankton species, such as D. magna, could be used for the control of mosquito species such as Cx. pipiens. This approach could be beneficial for the management of wetlands. abstract_id: PUBMED:36995001 Accumulation Kinetics and Gut Microenvironment Responses to Environmentally Relevant Doses of Micro/Nanoplastics by Zooplankton Daphnia Magna. Microplastics are emerging pollutants that have been widely reported in aquatic ecosystems.
Based on the analysis of environmentally relevant concentrations of microplastics in global freshwater systems, herein, we employed aggregation-induced emission (AIE) microplastic fluorogens and imaged and quantified the bioaccumulation of differentially charged micro- (20 μm)/nano- (200 nm) plastics (MNPs) in zooplankton Daphnia magna. We found that all particles of different sizes and charges were readily ingested, especially larger-sized and positively charged MNPs, with over 50% of the ingested particles accumulating in the gut. Bioaccumulation of MNPs reached 50% of steady-state condition within 1 h. The presence of algae inhibited the ingestion and depuration of MNPs. To further demonstrate the effects of such accumulation on gut health, we further applied the AIE probes for visualizing the pH and esterase in the digestive tract, as well as the gut inflammation. An accumulation of MNPs in D. magna significantly and rapidly induced the acidification of gut pH while inducing esterase activity. The NPs apparently induced gut inflammation in contrast to the MPs, demonstrating the size-dependent effects on oxidative stress. Our results highlighted that MNP exposure at environmentally relevant concentrations perturbed the microenvironments of zooplankton guts, which may significantly affect their digestion and assimilation of food materials as well as contaminant uptake. abstract_id: PUBMED:30797563 Atrazine exposed phytoplankton causes the production of non-viable offspring on Daphnia magna. This study focuses on the possibility that herbicide-exposed phytoplankton will cause sub-lethal effect on zooplankton. Atrazine, phytoplankton Raphidocelis subcapitata and zooplankton Daphnia magna were chosen as a model chemical and organisms. R. subcapitata was exposed to atrazine at 150 μg/L, harvested and fed to D. magna. While the mothers fed with atrazine-exposed phytoplankton did not show any abnormalities, they produced non-viable offspring. Number of non-viable offspring at the first clutch was high but the number was reduced at later stages and viable offspring was produced. This result indicates that phytoplankton exposed to sub-lethal dose of atrazine affects population dynamics of its predator, D. magna. abstract_id: PUBMED:34913522 Water Chemistry, Exposure Routes, and Metal Forms Determine the Bioaccumulation Dynamics of Silver (Ionic and Nanoparticulate) in Daphnia magna. Treatment wetlands utilize various physical and biological processes to reduce levels of organic contaminants, metals, bacteria, and suspended solids. Silver nanoparticles (AgNPs) are one type of contaminant that can enter treatment wetlands and impact the overall treatment efficacy. Grazing by filter-feeding zooplankton, such as Daphnia magna, is critical to treatment wetland functioning; but the effects of AgNPs on zooplankton are not fully understood, especially at environmentally relevant concentrations. We characterized the bioaccumulation kinetics of dissolved and nanoparticulate (citrate-coated) 109 Ag in D. magna exposed to environmentally relevant 109 Ag concentrations (i.e., 0.2-23 nmol L-1 Ag) using a stable isotope as a tracer of Ag. Both aqueous and nanoparticulate forms of 109 Ag were bioavailable to D. magna after exposure. Water chemistry affected 109 Ag influx from 109 AgNP but not from 109 AgNO3. Silver retention was greater for citrate-coated 109 AgNP than dissolved 109 Ag, indicating a greater potential for bioaccumulation from nanoparticulate Ag.
Feeding inhibition was observed at higher dietary 109 Ag concentrations, which could lead to reduced treatment wetland performance. Our results illustrate the importance of using environmentally relevant concentrations and media compositions when predicting Ag bioaccumulation and provide insight into potential effects on filter feeders critical to the function of treatment wetlands. Environ Toxicol Chem 2022;41:726-738. © 2021 SETAC. abstract_id: PUBMED:26014379 Betaproteobacteria Limnohabitans strains increase fecundity in the crustacean Daphnia magna: symbiotic relationship between major bacterioplankton and zooplankton in freshwater ecosystem. How symbioses between bacteria and aquatic animals influence food webs in freshwater ecosystems is a fundamental question in ecology. We investigated symbiosis between a crustacean zooplankton Daphnia magna and its dominant bacterial symbiont Limnohabitans, an abundant and globally distributed freshwater Betaproteobacteria. Aposymbiotic juvenile Daphnia were prepared and exposed to any of four Limnohabitans sp. - Limnohabitans strains DM1, 2KL-3, 2KL-7 and Limnohabitans planktonicus strain II-D5, all previously found in D. magna digestive tract or culture. Re-infected Daphnia were cultured until they produced the first clutch of juveniles. Limnohabitans strain DM1 and L. planktonicus strain II-D5 successfully re-infected Daphnia through single exposure at the first instar juvenile stage. In contrast to aposymbiotic Daphnia that produced non-viable juveniles, re-infected Daphnia produced viable juveniles and increased fecundity to levels of that of symbiotic Daphnia. Re-infected Daphnia did not increase their number of eggs nor growth rates. Limnohabitans strains 2KL-7 and 2KL-3 could not recover fecundity even in multiple exposures during culture. This study shows the functional evidence demonstrating that a single bacterium Limnohabitans regulates fecundity of the consumer Daphnia through symbiosis. Our results indicated that symbiotic relationship between major bacterioplankton and zooplankton is important for maintaining the population of zooplankton in freshwater ecosystems. abstract_id: PUBMED:23222252 North American wetlands and mosquito control. Wetlands are valuable habitats that provide important social, economic, and ecological services such as flood control, water quality improvement, carbon sequestration, pollutant removal, and primary/secondary production export to terrestrial and aquatic food chains. There is disagreement about the need for mosquito control in wetlands and about the techniques utilized for mosquito abatement and their impacts upon wetlands ecosystems. Mosquito control in wetlands is a complex issue influenced by numerous factors, including many hard to quantify elements such as human perceptions, cultural predispositions, and political climate. In spite of considerable progress during the last decades, habitat protection and environmentally sound habitat management still remain inextricably tied to politics and economics. Furthermore, the connections are often complex, and occur at several levels, ranging from local businesses and politicians, to national governments and multinational institutions. Education is the key to lasting wetlands conservation. Integrated mosquito abatement strategies incorporate many approaches and practicable options, as described herein, and need to be well-defined, effective, and ecologically and economically sound for the wetland type and for the mosquito species of concern. 
The approach will certainly differ in response to disease outbreaks caused by mosquito-vectored pathogens versus quality of life issues caused by nuisance-biting mosquitoes. In this contribution, we provide an overview of the ecological setting and context for mosquito control in wetlands, present pertinent information on wetlands mosquitoes, review the mosquito abatement options available for current wetlands managers and mosquito control professionals, and outline some necessary considerations when devising mosquito control strategies. Although the emphasis is on North American wetlands, most of the material is applicable to wetlands everywhere. abstract_id: PUBMED:31181495 Combined effects of ZnO nanoparticles and toxic Microcystis on life-history traits of Daphnia magna. Rise in cyanobacterial blooms and massive discharge of nanoparticles (NPs) in aquatic ecosystems cause zooplankton to be exposed in toxic food and NPs simultaneously, which may impact on zooplankton interactively. Therefore, the present study focused on assessing the combined effects of different ZnO NPs levels (0, 0.10, 0.15, 0.20 mg L-1) and different proportions of toxic Microcystis (0%, 10%, 20%, 30%) in the food on a model zooplankton, Daphnia magna. The results showed that both toxic Microcystis and ZnO NPs significantly delayed the development of D. magna to maturation, but there was no significant interaction between the two factors on the times to maturation except the body length at maturation. Both ZnO NPs and toxic Microcystis also significantly decreased the number of neonates in the first brood, total offspring, and number of broods per female, and there was a significant interaction between ZnO NPs and food composition on the reproductive performance of D. magna. Specifically, presence of toxic Microcystis reduced the gap among the effects of different ZnO NPs concentrations on the reproductive performance of D. magna. When the ZnO NPs concentration was at 0.15 mg L-1, the gap of the reproductive performance among different proportions of toxic Microcystis also tended to be narrow. Similar phenomenon also occurred in mortality. Such results suggested that low concentration of ZnO NPs and toxic Microcystis can mutually attenuate their harmful effects on D. magna, which has significantly implications in appropriately assessing the ecotoxicological effects of emerging pollutants in a complex food conditions. abstract_id: PUBMED:27640016 Effects of Bacillus thuringiensis var. israelensis on nonstandard microcrustacean species isolated from field zooplankton communities. The toxicity of Bacillus thuringiensis var. israelensis on zooplanktonic microcrustaceans was evaluated using individuals collected in coastal wetlands where this larvicide has been used for mosquito control over the last decades. We tested five zooplankton species that coexist with mosquito larvae: two copepods (both nauplii and adults of Tropocyclops prasinus and Acantocyclops americanus), and three cladocerans (Ceriodaphnia reticulata, Chydorus sphaericus, and Daphnia cf. pulex). Our experiments included seven replicates of six concentrations (Bti Vectobac12AS 1200 Bti ITU/mg): 0, 5, 25, 50, 250, and 500 mg L-1. We analyzed acute and sub-chronic effects after a single inoculation. Despite the high variability of responses among our tested organisms, we found a general pattern of increasing mortality with concentration and time. 
We conclude that negative effects at the community level are not unlikely as some species were affected at doses close to those used in field applications. abstract_id: PUBMED:6498328 Daphnia magna Straus as consumers of enteropathogenic bacteria The ability of Daphnia magna Straus to digest enteropathogenic bacteria, i.e. Salmonella, Shigella and Yersinia was studied in a series of experiments. High antibacterial activity of Daphnia magna Straus against these enteropathogenic bacteria was demonstrated by bacteriological and radionuclide methods. Thus, on the 3d-5th day of experiment a complete water clearance was observed in experimental flasks where Daphnia were fed Salmonella, Shigella and Yersinia, while attempts to isolate bacteria from these flasks ended in failure. Radionuclide studies disclosed active assimilation of the degradation products of enteropathogenic bacteria by Daphnia throughout the entire experiment. Answer: The application of synergized pyrethrin over wetlands for mosquito control does not produce detectable effects on Daphnia magna zooplankton or Callibaetis californicus mayflies. In a study where fifteen wetland mesocosms were either exposed to repeated pyrethrin sprays or were protected by lids, and invertebrates in screened cages were placed in mesocosms before the fifth and eleventh spray, and directly into wetlands before spray 11, there were no detectable effects of synergized pyrethrin on 36-hour survival of Daphnia or mayflies. However, most exposed adult mosquitoes died. Some exposed sediments yielded pyrethrin residues, but the application at rates used for mosquito control did not produce detectable effects on the indicator species (PUBMED:18366067).
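A brief technical aside on the bioaccumulation kinetics reported for Daphnia magna above (PUBMED:36995001, where ingested micro/nanoplastics reached about 50% of steady state within 1 h): if uptake is assumed to follow simple first-order kinetics toward a steady-state body burden — an assumption, since the abstract does not state which kinetic model was fitted — the observed half-saturation time fixes the rate constant, as sketched below.

```python
import math

t_half_h = 1.0                     # ~50% of steady state reached within 1 h (from the abstract)
k = math.log(2) / t_half_h         # first-order rate constant under the assumed model
print(f"k ≈ {k:.2f} per hour")     # ≈ 0.69 h^-1

def fraction_of_steady_state(t_h: float) -> float:
    """C(t)/C_ss = 1 - exp(-k*t) under the assumed first-order uptake model."""
    return 1 - math.exp(-k * t_h)

for t in (0.5, 1, 2, 4):
    print(f"t = {t} h -> {fraction_of_steady_state(t):.0%} of steady state")
```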
Instruction: Prevention of catheter-related blood stream infection: back to basics? Abstracts: abstract_id: PUBMED:21171811 Prevention of catheter-related blood stream infection: back to basics? Background: Central venous catheter (CVC)-related infections are a substantial problem in the intensive care unit (ICU). Our infection control team initiated the routine use of antiseptic-coated (chlorhexidine-silver sulfadiazine; Chx-SS) CVCs in our adult ICUs to reduce catheter-associated (CA) and catheter-related (CR) blood stream infection (BSI) as we implemented other educational and best practice standardization strategies. Prior randomized studies documented that the use of Chx-SS catheters reduces microbial colonization of the catheter compared with an uncoated standard (Std) CVC but does not reduce CR-BSI. We therefore implemented the routine use of uncoated Std CVCs in our surgical ICU (SICU) and examined the impact of this change. Hypothesis: The use of uncoated Std CVCs does not increase CR-BSI rate in an SICU. Methods: Prospective evaluation of universal use of uncoated Std CVCs, implemented November 2007 in the SICU. The incidences of CA-BSI and CR-BSI were compared during November 2006-October 2007 (universal use of Chx-SS CVCs) and November 2007-October 2008 (universal use of Std CVCs) by t-test. The definitions of the U.S. Centers for Disease Control and Prevention were used for CA-BSI and CR-BSI. Patient data were collected via a dedicated Acute Physiology and Chronic Health Evaluation (APACHE) III coordinator for the SICU. Results: Annual use of CVCs increased significantly in the last six years, from 3,543 (2001) to 5,799 (2006) total days. The APACHE III scores on day 1 increased from a mean of 54.4 in 2004 to 55.6 in 2008 (p = 0.0010; 95% confidence interval [CI] 1.29-5.13). The mean age of the patients was unchanged over this period, ranging from 58.2 to 59.6 years. The Chx-SS catheters were implemented in the SICU in 2002. Data regarding the specific incidence of CR-BSI were collected beginning at the end of 2005, with mandatory catheter tip cultures when CVCs were removed. Little difference was identified in the incidence of BSI between the interval with universal Chx-SS use and that with Std CVC use. (Total BSI 0.7 vs. 0.8 per 1,000 catheter days; CA-BSI 0.5 vs. 0.8 per 1,000 catheter days; CR-BSI 0.2 vs. 0 per 1,000 catheter days.) No difference was seen in the causative pathogens of CA-BSI or CR-BSI. Conclusion: Eliminating the universal use of Chx-SS-coated CVCs in an SICU with a low background incidence of CR-BSIs did not result in an increase in the rate of CR-BSIs. This study documents the greater importance of adherence to standardization of the processes of care related to CVC placement than of coated CVC use in the reduction of CR-BSI. abstract_id: PUBMED:27816140 Prevention of Vascular Catheter-Related Bloodstream Infections. Catheter-related bloodstream infections (CRBSI) are responsible for significant morbidity, mortality, and excess health care costs. It is increasingly evident that many CRBSI can be prevented with current knowledge and techniques. Preventive measures can be broadly grouped into clinical practice-based interventions and technologic innovations. Clinical practice-based interventions require changes in human behavior and can be subdivided into interventions before and at the time of insertion and postinsertion. 
Despite recent successes with prevention of CRBSI, pertinent questions regarding pathogenesis and prevention remain unanswered, and work on improved surveillance, devices less prone to infection, and more effective prevention techniques is needed. abstract_id: PUBMED:24864412 Recommendations from CDC for the prevention of catheter-related infections (2013 update) Nosocomial infections, or infections related to the healthcare system, have a great impact on healthcare because of their high frequency and the morbidity and mortality they cause. Among them are catheter-related bacteraemias. The role of nursing in the care and maintenance of intravenous therapy is fundamental to avoiding this type of infection. The efficacy of implementing catheter-related infection prevention programmes through a bundle of measures covering insertion, maintenance, surveillance and prevention of adverse events in vascular access has been demonstrated. The aim of this work is to present and emphasize to nursing staff the latest updated recommendations collected in the 2011 Guidelines for the Prevention of Intravascular Catheter-Related Infections. abstract_id: PUBMED:30333275 Catheter-related Blood Stream Infection in a Patient with Hemodialysis. A 31-year-old patient came to visit the outpatient clinic at the hospital for his routine twice-weekly hemodialysis (HD) session. During HD, the patient suddenly developed a fever with shivering. At that time, a diagnosis of catheter-related blood stream infection (CR-BSI) was made; the HD catheter, a double-lumen catheter (CDL), was removed and the patient was hospitalized. Cultures drawn through the tip of the catheter lumen and from peripheral blood revealed an identical microorganism, i.e., Enterobacter cloacae. Diagnosis of CR-BSI in the present case was made based on the 2009 Infectious Disease Society of America (IDSA) criteria. In general, prevention measures for CR-BSI should be taken into account, including patient education, awareness among the health care providers who install the CDL, and implementation of appropriate skin aseptic technique and best practice for HD catheter care, particularly at the exit site of the CDL, to prevent the development of CR-BSI. abstract_id: PUBMED:24554477 Ethanol locks in the prevention and treatment of catheter-related bloodstream infections. Objective: To evaluate the evidence regarding the use of ethanol lock therapy (ELT) for catheter-related bloodstream infection (CRBSI) prophylaxis and treatment. Data Sources: A literature search was conducted using PubMed (August 2003-January 2013) with search terms: ethanol lock, ethanol locks, ethanol lock therapy, prophylaxis, prevention, catheter-related bloodstream infection, and catheter-related infection. Additional sources were identified through a subsequent review of relevant articles. Study Selection And Data Extraction: All English-language studies with >1 patient and a primary outcome of rates of infection, clinical cure, catheter removal or line salvage were evaluated. Studies where ELT was not used for CRBSI prophylaxis or treatment, review articles, and in vitro studies were excluded. Data were abstracted through an independent review of all articles by 2 authors. Discrepancies were discussed and resolved. Data Synthesis: 13 prophylaxis studies evaluated 617 patients; all studies reported decreased rates of infection and catheter removal with ELT.
The ELT regimen associated with the most consistent benefit was 70% ethanol, a 2- to 4-hour dwell time, and daily exchange for ≥1 month. Nine treatment studies evaluated 213 catheters, with 90% (192/213) cure and 84% (179/213) line salvage. ELT was always used in combination with systemic antibiotics. The most common ELT treatment regimen was 70% ethanol, a 12- to 24-hour dwell time, and a duration of 1-5 days. No serious adverse events were reported. Conclusion: The current literature suggests that prophylactic ELT decreases the rates of infection and catheter removal, and ELT treatment appears efficacious in combination with systemic antibiotics. abstract_id: PUBMED:21458716 Update on pathogenesis and diagnosis of intravascular catheter-related infections. Infections associated with the use of intravascular catheters cause substantial morbidity and mortality. New knowledge in the pathogenesis of catheter-related bloodstream infections has led to advances in the prevention and management of these infections. The purpose of the present chapter is to review the most relevant data published recently on pathogenesis and diagnosis of intravascular catheter-related infections. It focuses on four different aspects: a) pathogenesis of catheter-related infections and particularly factors affecting biofilm formation and modulation; b) pathogenesis of intravascular catheter-related infections caused by Staphylococcus lugdunensis; c) new developments in the microbiological diagnosis of catheter-related bacteremia; and d) evaluation of the current use of blood cultures in the era of continuously monitored blood culture systems. abstract_id: PUBMED:27152256 Antimicrobial-impregnated catheters for the prevention of catheter-related bloodstream infections. Central venous catheters are commonly used in critically ill patients. Such catheterization may entail mechanical and infectious complications. The interest in catheter-related infection lies in the morbidity, mortality and costs that it involves. Numerous contributions have been made in the prevention of catheter-related infection, and the current review focuses on the possible current role of antimicrobial-impregnated catheters to reduce catheter-related bloodstream infections (CRBSI). There is evidence that the use of chlorhexidine-silver sulfadiazine (CHSS), rifampicin-minocycline, or rifampicin-miconazole impregnated catheters reduces the incidence of CRBSI and costs. In addition, there are some clinical circumstances associated with higher risk of CRBSI, such as the venous catheter access site and the presence of a tracheostomy. Current guidelines for the prevention of CRBSI recommend the use of a CHSS or rifampicin-minocycline impregnated catheter in patients whose catheter is expected to remain in place >5 d and if the CRBSI rate has not decreased after implementation of a comprehensive strategy to reduce it.
In this review, the existing clinical practice guidelines for the prevention of hemodialysis catheter associated infections are outlined, and a comprehensive evidence-based summary of interventions is provided. This includes details about the use of topical antimicrobial ointments and dressings, intranasal ointment application, prophylactic use of antibiotic and non-antibiotic catheter lock solutions, and catheter hub devices for the prevention of catheter blood stream infections. abstract_id: PUBMED:20969471 Catheter-related vs. catheter-associated blood stream infections in the intensive care unit: incidence, microbiology, and implications. Background: Catheter-associated blood stream infections (CA-BSI) and catheter-related blood stream infections (CR-BSIs) differ in the degree of proof required to show that the catheter is the cause of the infection. The U.S. Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN; formerly the National Nosocomial Infections Surveillance [NNIS] group) collects data regarding CA-BSI nationally. We hypothesized that there would be a significant difference in the rates reported according to the definition. Methods: Prospective surveillance of CA-BSI (defined as bacteremia with no extravascular source identified) is performed in all intensive care units (ICUs) at our institution and reported as the rate per 1,000 catheter-days. In January 2006, we initiated cultures of all catheter tips to evaluate for CR-BSI (defined as a catheter tip culture with >15 colony-forming units of the same microorganism(s) found in the blood culture) in the surgical, trauma-burn, and medical ICUs. Results: The CA-BSI rate across all ICUs for the 24-mo study period was 1.4/1,000 catheter-days. The CR-BSI rate was 0.4/1,000 catheter-days, for a rate difference of 1.0 infections/1,000 catheter-days (p < 0.001 vs. CA-BSI). The pathogens identified in CA-BSI included many organisms that are not associated with catheter-related BSIs. Conclusions: The CR-BSI rate is significantly lower than the CA-BSI rate. The organisms identified in CA-BSI surveillance often are not common in catheter-related infections. Reporting CR-BSI thus is a more accurate measure of complications of central venous catheter use, and this rate may be more sensitive to catheter-specific interventions designed to reduce rates of BSI in the ICU. abstract_id: PUBMED:24070632 Arterial catheter-related bloodstream infection: incidence, pathogenesis, risk factors and prevention. Background: Arterial catheters are essential in critical care for haemodynamic and blood gas monitoring. The risk of infection remains ill defined. Aims: To delineate the incidence, pathogenesis and risk factors for arterial catheter-related bloodstream infection (BSI). Methods: Arterial catheters in two randomized trials in 1998-2000 were studied prospectively. One trial studied the effect of a 1% chlorhexidine-75% alcohol solution for cutaneous antisepsis for intravascular catheters, and the other trial studied the efficacy of a chlorhexidine-impregnated sponge dressing, both for prevention of catheter-related BSI. At catheter removal, skin of the insertion site, catheter segments, hub and infusate were cultured quantitatively in all cases. Catheter-related BSI was confirmed by concordance between isolates from the catheter and from blood cultures by restriction-fragment DNA subtyping. Risk factors for arterial catheter-related BSI were determined using univariate analysis.
Findings: Of 834 arterial catheters studied (3273 catheter-days), 109 (13%) were colonized and 11 caused bacteraemia (1.3%, 3.4 per 1000 catheter-days). The majority of catheter-related BSIs were acquired extraluminally from skin of the insertion site (63%). The risk of arterial catheter-related BSI was comparable with that for short-term non-cuffed central venous catheters (2.7%, 5.9 per 1000 CVC-days). Conclusion: In patients in intensive care with cryptogenic sepsis or bacteraemia, arterial catheter-related BSI must also be suspected and excluded. The most common route of infection is extraluminal; as such, novel technologies shown to prevent bloodstream infection with CVCs, such as chlorhexidine for cutaneous antisepsis and chlorhexidine-impregnated dressings, may also be of benefit with arterial catheters. Answer: The prevention of catheter-related bloodstream infections (CRBSIs) is a critical issue in healthcare, particularly in intensive care units (ICUs) where central venous catheters (CVCs) are commonly used. The "back to basics" approach emphasizes the importance of adhering to standard care processes and basic infection control practices over the reliance on advanced technologies or coated catheters. One study found that eliminating the universal use of antiseptic-coated CVCs (chlorhexidine-silver sulfadiazine; Chx-SS) in an SICU with a low background incidence of CRBSIs did not result in an increase in the rate of CRBSIs. This suggests that adherence to standardized care processes related to CVC placement may be more important than the use of coated CVCs in reducing CR-BSI rates (PUBMED:21171811). Preventive measures for CRBSIs can be broadly grouped into clinical practice-based interventions and technological innovations. Clinical practice-based interventions require changes in human behavior and include measures before and at the time of insertion, as well as post-insertion (PUBMED:27816140). The Centers for Disease Control and Prevention (CDC) recommends a bundle of measures for the insertion, maintenance, surveillance, and prevention of adverse events in vascular access, highlighting the role of nursing in the care and maintenance of intravenous therapy (PUBMED:24864412). Other preventive strategies include the use of ethanol lock therapy (ELT) for prophylaxis and treatment of CRBSIs, which has been shown to decrease rates of infection and catheter removal (PUBMED:24554477). Additionally, antimicrobial-impregnated catheters have been shown to reduce the incidence of CRBSIs and associated costs (PUBMED:27152256). For patients undergoing hemodialysis, interventions such as the use of topical antimicrobial ointments and dressings, prophylactic use of antibiotic and non-antibiotic catheter lock solutions, and catheter hub devices have been outlined to prevent catheter bloodstream infections (PUBMED:30411464). In conclusion, the prevention of CRBSIs involves a multifaceted approach that includes strict adherence to basic infection control practices, the use of evidence-based clinical interventions, and, in some cases, the use of technology such as antimicrobial-impregnated catheters and lock solutions. The "back to basics" approach underscores the importance of fundamental care practices in preventing these infections.
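A brief illustration of the incidence figures quoted in the catheter abstracts above (PUBMED:20969471 and PUBMED:24070632): rates per 1,000 catheter-days are simply the event count divided by the total catheter-days, scaled by 1,000. The sketch below reproduces the published figures from the numbers given in the abstracts; it adds nothing beyond that arithmetic.

```python
def rate_per_1000_catheter_days(events: int, catheter_days: int) -> float:
    return events / catheter_days * 1000

# Arterial catheter study (PUBMED:24070632): 11 bacteraemias over 3,273 catheter-days.
print(round(rate_per_1000_catheter_days(11, 3273), 1))   # 3.4, matching the abstract
print(round(11 / 834 * 100, 1))                           # 1.3% of the 834 catheters studied

# ICU surveillance study (PUBMED:20969471): CA-BSI 1.4 vs. CR-BSI 0.4 per 1,000 catheter-days.
print(round(1.4 - 0.4, 1))                                # rate difference of 1.0 per 1,000 catheter-days
```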
Instruction: Does personality explain social inequalities in mortality? Abstracts: abstract_id: PUBMED:18276626 Does personality explain social inequalities in mortality? The French GAZEL cohort study. Background: The 'indirect-selection' hypothesis proposes that some quality of the individual, a personality characteristic or intelligence, leads to both socioeconomic position (SEP) and health. We aim to quantify the contribution of personality measures to the associations between SEP and mortality. Methods: Of the GAZEL cohort, 14,445 participants, aged 39-54 years in 1993 and followed up over 12.7 years, completed the Bortner Type A scale, the Buss-Durkee Hostility Inventory and the Grossarth-Maticek and Eysenck Personality-Stress Inventory. Indicators of SEP, such as father's social class, education, occupational grade and income, were assessed at baseline. Relative indices of inequality in Cox regression models were used to estimate associations. Results: In age-adjusted analyses, risk of death was inversely associated with SEP among men and women. Among men, the attenuation in this association depended on the measures of SEP and was 28-29% for 'neurotic-hostility', 13-22% for 'anti-social' and 13-16% for 'CHD-prone' personality. In women, the attenuation was evident only for type-A behaviour, by 11%. After controlling simultaneously for all personality factors that predicted mortality, associations between SEP and mortality were attenuated in men: by 34% for education, 29% for occupational position and 28% for income; but were only attenuated by 11% for income in women. For cardiovascular mortality, the corresponding percentages of reduction were 42, 31 and 44% after adjustment for 'CHD-prone' personality in men. Conclusions: Personality measures explained some of the mortality gradients observed for measures of adult socioeconomic position in men, but had little explanatory power in women. Whether personality represents a predictor or an outcome of social circumstances needs further research. abstract_id: PUBMED:26433316 Social inequalities in maternal health. Although medical literature on social inequalities in perinatal health is qualitatively heterogeneous, it is quantitatively important and reveals the existence of a social gradient in terms of perinatal risk. However, published data regarding maternal health, although also qualitatively heterogeneous, are relatively less numerous. Nevertheless, it appears that social inequalities also exist concerning severe maternal morbidity as well as maternal mortality. Analyses are still insufficient to understand the mechanisms involved and explain how the various dimensions of women's social condition interact with maternal health indicators. Inadequate prenatal care and suboptimal obstetric care may be intermediary factors, as they are related to both social status and maternal outcomes, in terms of maternal morbidity, its worsening or progression, and maternal mortality. abstract_id: PUBMED:34285096 Interrogating structural inequalities in COVID-19 mortality in England and Wales. Background: Numerous observational studies have highlighted structural inequalities in COVID-19 mortality in the UK. Such studies often fail to consider the hierarchical, spatial nature of such inequalities in their analysis, leading to the potential for bias and an inability to reach conclusions about the most appropriate structural levels for policy intervention.
Methods: We use publicly available population data on COVID-19-related mortality and all-cause mortality between March and July 2020 in England and Wales to investigate the spatial scale of such inequalities. We propose a multiscale approach to simultaneously consider three spatial scales at which processes driving inequality may act and apportion inequality between these. Results: Adjusting for population age structure and number of local care homes, we find highest regional inequality in March and June/July. We find finer-grained within-region inequality increased steadily from March until July. The importance of spatial context increases over the study period. No analogous pattern is visible for non-COVID-19 mortality. Higher relative deprivation is associated with increased COVID-19 mortality at all stages of the pandemic but does not explain structural inequalities. Conclusions: Results support initial stochastic viral introduction in the South, with initially high inequality decreasing before the establishment of regional trends by June and July, prior to reported regionality of the 'second-wave'. We outline how this framework can help identify structural factors driving such processes, and offer suggestions for a long-term, locally targeted model of pandemic relief in tandem with regional support to buffer the social context of the area. abstract_id: PUBMED:36625176 Widening Social Inequalities in Cancer Mortality of Children Under 5 Years in Korea. Background: To investigate the effect of parental social class on cancer mortality in children under 5 in Korea, two birth cohorts were constructed by linking national birth data to under-5 death data from Statistics Korea for 1995-1999 (3,323,613 births) and 2010-2014 (2,297,876 births). Methods: The Cox proportional hazards model adjusted for covariates was used in this study. Results: Social inequalities of under-5 cancer mortality risk in paternal education and paternal employment status were greater in 2010-2014 than in 1995-1999. The gap of hazard ratio (HR) of under-5 cancer mortality between lower (high school or below) and higher (university or higher) paternal education increased from 1.23 (95% confidence interval, 1.04-1.46) in 1995-1999 to 1.45 (1.11-1.97) in 2010-2014; the gap of HR between parents engaged in manual work and non-manual work increased from 1.32 (1.12-1.56) in 1995-1999 to 1.45 (1.12-1.89) in 2010-2014 for fathers, and from 1.18 (0.7-1.98) to 1.69 (1.03-2.79) for mothers. When the parental social class was lower, the risk of under-5 cancer mortality was higher in not only adverse but also normal births. Conclusion: Social inequalities must be addressed to reduce the disparity in cancer mortality of children under 5 years old.
We characterize diseases as transitioning through four stages at a given time: (1) natural mortality, characterized by no knowledge about risk factors, preventions, or treatments for a disease in a population; (2) producing inequalities, characterized by unequal diffusion of innovations; (3) reducing inequalities, characterized by increased access to health knowledge; and (4) reduced mortality/disease elimination, characterized by widely available prevention and effective treatment. For illustration, we pair an ideal-types analysis with mortality data to explore hypothesized incidence rates of diseases. Although social inequalities exist in incidence rates of many diseases, the cause, extent, and direction of inequalities change systematically in relation to human intervention. This article highlights opportunities for further development, specifically highlighting the role of stage duration in maintaining social inequalities in cause-specific mortality. abstract_id: PUBMED:23911611 Invited commentary: Personality phenotype and mortality--new avenues in genetic, social, and clinical epidemiology. In this issue of the Journal, Jokela et al. (Am J Epidemiol. 2013;178(5):667-675) scrutinize the association between personality phenotype and all-cause mortality in remarkable detail by using an "individual-participant meta-analysis" design. Across 7 large cohorts varying in demographics and methods of personality measurement, they find varying prospective associations for 4 dimensions of the five-factor (or "Big Five") model of personality, but robust and consistent prospective associations for Big Five dimension of "conscientiousness." Jokela et al. place an important exclamation point on a long era of study of this topic and hint directly and indirectly at new avenues for this line of research. I consider the following 3 areas particularly rife for further inquiry: the role of genetics in personality and health studies; the role of personality in social inequalities in health; and the health policy and clinical implications of work like that of Jokela et al., including the potential role of personality phenotype in the evolution of personalized medicine. abstract_id: PUBMED:19965888 Personality, socioeconomic status, and all-cause mortality in the United States. The authors assessed the extent to which socioeconomic status (SES) and the personality factors termed the "big 5" (neuroticism, extraversion, openness to experience, agreeableness, conscientiousness) represented confounded or independent risks for all-cause mortality over a 10-year follow-up in the Midlife Development in the United States (MIDUS) cohort between 1995 and 2004. Adjusted for demographics, the 25th versus 75th percentile of SES was associated with an odds ratio of 1.43 (95% confidence interval (CI): 1.11, 1.83). Demographic-adjusted odds ratios for the 75th versus 25th percentile of neuroticism were 1.38 (95% CI: 1.10, 1.73) and 0.63 (95% CI: 0.47, 0.84) for conscientiousness, the latter evaluated at high levels of agreeableness. Modest associations were observed between SES and the big 5. Adjusting each for the other revealed that personality explained roughly 20% of the SES gradient in mortality, while SES explained 8% of personality risk. Portions of SES and personality risk were explained by health behaviors, although some residual risk remained unexplained. Personality appears to explain some between-SES strata differences in mortality risk, as well as some individual risk heterogeneity within SES strata. 
Findings suggest that both sociostructural inequalities and individual disposition hold public health implications. Future research and prevention aimed at ameliorating SES health disparities may benefit from considering the risk clustering of social disadvantage and dispositional factors. abstract_id: PUBMED:10396674 Social inequalities in the mortality due to cardiovascular diseases in Italy Social inequalities in cardiovascular disease mortality are described in this paper focusing on the results of the Studio Longitudinale Torinese (SLT), an investigation that links census data with the statistical data that are currently available. The overall results confirm that cardiovascular disease mortality is higher in less-advantaged socioeconomic groups, irrespectively of the social indicator used: education, social class, housing quality, job security. Stratified data shows less important inequalities among ischemic heart disease as compared to cerebrovascular mortality. The differences are even more complex when the age groups in the two genders are analyzed, revealing cohort effects. Overall, the results agree with the previous survey carried out by ISTAT on 1981 Italian mortality, which confirmed the variations in inequalities according to geographical areas, gender and age. Differences in access to the health system are likely to be related to the differences detected for geographical areas, while differences in personal history and attitude towards health-associated behavior should explain age and gender variations in inequalities. Equity must be included in the evaluation of preventive programs and health-care models. Epidemiological and social research should be encouraged to better understand the factors that influence inequalities in cardiovascular disease mortality and in the health status of the population at large. abstract_id: PUBMED:37094231 Inequalities in diabetes mortality in Mexico: 2010-2019. Background: The impact of diabetes mellitus has been shown to be differentially expressed between social groups. Objective: To estimate inequality gaps in diabetes mellitus mortality through absolute and relative measures according to geographic distribution and social conditions. Material And Methods: Diabetes mellitus-related deaths recorded in Mexico between 2010 and 2019 were analyzed, and inequality measurements at the state level were calculated by gender. Results: National age-adjusted diabetes mellitus mortality rate showed an increase during the study period. Conclusion: The inequalities present in diabetes mortality should be considered for the design of health strategies. abstract_id: PUBMED:29162014 Technological innovations and the rise of social inequalities in health. Social inequalities in health have been categorised as a human-rights issue that requires action. Unfortunately, these inequalities are on the rise in many countries, including welfare states. Various theories have been offered to explain the persistence (and rise) of these inequalities over time, including the social determinants of health and fundamental cause theory. Interestingly, the rise of modern social inequalities in health has come at a time of great technological innovation. This article addresses whether these technological innovations are significantly influencing the persistence of modern social inequalities in health. 
A theoretical argument is offered for this potential connection and is discussed alongside the typical social determinants of health perspective and the increasingly popular fundamental cause perspective. This is followed by a proposed research agenda for further investigation of the potential role that technological innovations may play in influencing social inequalities in health. Answer: Personality may contribute to social inequalities in mortality, but its explanatory power appears to vary by gender and the specific measures of socioeconomic position (SEP) and personality traits considered. In a study of the French GAZEL cohort, personality measures explained some of the mortality gradients observed for measures of adult SEP in men, but had little explanatory power in women. For men, the attenuation in the association between SEP and mortality depended on the measures of SEP and ranged from 13-29% for various personality traits. After controlling for all personality factors that predicted mortality, associations between SEP and mortality were attenuated by 28-34% for education, occupational position, and income in men. For cardiovascular mortality, the reduction was even more pronounced, with percentages of reduction ranging from 31-44% after adjustment for 'CHD-prone' personality in men. In women, the attenuation was only evident for type-A behavior, by 11% (PUBMED:18276626). In the United States, personality was found to explain roughly 20% of the SES gradient in mortality, while SES explained 8% of personality risk. Portions of SES and personality risk were explained by health behaviors, although some residual risk remained unexplained. This suggests that both sociostructural inequalities and individual disposition hold public health implications (PUBMED:19965888). The role of personality in social inequalities in health is an area ripe for further inquiry, including the potential role of personality phenotype in the evolution of personalized medicine (PUBMED:23911611). However, it is important to note that while personality may play a role in explaining social inequalities in mortality, it is likely one of many factors, and the relationship between personality, SEP, and health outcomes is complex and multifaceted. Other factors such as access to healthcare, health behaviors, and broader social determinants of health also significantly contribute to social inequalities in mortality (PUBMED:26433316, PUBMED:34285096, PUBMED:36625176, PUBMED:27531503, PUBMED:10396674, PUBMED:37094231, PUBMED:29162014).
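The "percent of the SES gradient explained by personality" figures quoted above (roughly 20% in PUBMED:19965888, 28-34% in the GAZEL cohort) are typically obtained by comparing the SES coefficient before and after adjustment for personality traits. The abstracts do not state the exact formula the authors used, so the short sketch below only illustrates one common convention (attenuation of the log-odds coefficient). The demographics-adjusted odds ratio of 1.43 is taken from PUBMED:19965888; the personality-adjusted value of 1.33 is a hypothetical number chosen so that the example lands near 20% attenuation, not a result reported by the study.

    import math

    # Illustrative attenuation calculation; the adjusted OR below is hypothetical.
    or_base = 1.43      # SES odds ratio adjusted for demographics only (PUBMED:19965888)
    or_adjusted = 1.33  # hypothetical OR after additionally adjusting for personality

    # Compare coefficients on the log-odds scale, one common convention for
    # reporting how much of a gradient a covariate "explains".
    beta_base = math.log(or_base)
    beta_adjusted = math.log(or_adjusted)
    attenuation_pct = 100 * (beta_base - beta_adjusted) / beta_base
    print(f"Attenuation of the SES-mortality association: {attenuation_pct:.1f}%")

Run as written, this prints an attenuation of roughly 20%, the order of magnitude reported for the United States cohort; the larger GAZEL percentages would correspond to bigger shifts in the adjusted coefficient.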
Instruction: Cutaneous gastric electrical stimulation alters gastric motility in dogs: New option for gastric electrical stimulation? Abstracts: abstract_id: PUBMED:28976676 Gastric electrical stimulation in patients with gastroparesis. As in the heart, there is a pacemaker in the human stomach and it generates myoelectrical activity with a frequency of approximately three cycles per minute. Abnormalities in gastric myoelectrical activity may result in gastric motility disorders, such as gastroparesis. Electrical stimulation of the stomach is achieved by delivering electrical currents via electrodes attached to the smooth muscle of the stomach. Recently, a number of studies on electrical stimulation of the stomach in both humans and dogs have indicated that gastric electrical stimulation with appropriate parameters is able to entrain gastric slow waves and normalize gastric dysrhythmias. This has led some investigators to use gastric electrical stimulation to treat patients with gastroparesis. Previous studies and the current state of the field in gastric electrical stimulation in treatment of gastroparesis will be discussed and summarized. abstract_id: PUBMED:31570014 Systematic review on gastric electrical stimulation in obesity treatment. Introduction: Obesity is a very common public health problem worldwide. However, there is a lack of effective therapies. Only a small portion of patients with morbid obesity are accepting bariatric surgery as the last option due to the risks associated with invasive therapy. Areas covered: In this paper, we review an emerging weight loss treatment: gastric electrical stimulation (GES). The feasibility of GES as a potential therapy for obesity is introduced. Methodologies and parameters of GES are presented. Several GES methods for treating obesity and their effects on food intake and body weight are presented. Possible mechanisms involved in the anti-obesity effect of GES are discussed. Finally, our comments on the potential of GES for obesity and expectations for future development of the GES therapy are provided. The PubMed central database was searched from inception to May 2019. The literature search used the following terms: 'Gastric electrical stimulation' combined with 'obesity' and 'Implantable gastric stimulation' and 'pharmaceutical therapy' and 'bariatric surgery'. Expert opinion: There is a potential to use GES for treating obesity. However, more efforts are needed to develop appropriate stimulation devices and to design an adequate therapy for treating obesity in humans. abstract_id: PUBMED:22523722 Gastric electrical stimulation for gastroparesis. Gastric electrical stimulation (GES) for gastroparesis has been in use for more than a decade. Multiple publications, consisting almost entirely of open label single center studies, reported a beneficial effect on symptoms, quality of life and nutritional status. Some predictors of better response to GES have been lately identified, primarily diabetic etiology and nausea and vomiting as the predominant symptoms. However, individual response to GES remains difficult to predict. The mechanism of action of GES remains poorly understood. Stimulation parameters approved in clinical practice do not regulate gastric slow wave activity and have inconsistent effect on gastric emptying. Despite such limitations, gastric electrical stimulation remains a helpful intervention in some patients with severe gastroparesis who fail to respond to medical therapy. 
abstract_id: PUBMED:21992252 Gastric electrical stimulation on gastric motility in dogs. Aims: The aim of this study was to systematically evaluate and compare the effects of synchronized dual-pulse gastric electrical stimulation (SDPS) and conventional gastric electrical stimulation (CGES) on antral contractions, gastric tone, and autonomic functions. Methods: Seven female hound dogs implanted with four pairs of electrodes on gastric serosa were studied in two separate experiments. The first experiment was designed to investigate the effects of SDPS/CGES on gastric tone and heart rate variations. The second experiment was performed to study the effects of SDPS/CGES on antral contractions. Results: 1) SDPS induced gastric antral contractions in the fasting state, whereas CGES at the intrinsic or a tachygastrial frequency had no effects on fasting antral contractions. 2) SDPS enhanced postprandial antral contractions impaired by glucagon; however, CGES at a tachygastrial frequency inhibited postprandial antral contractions. In addition, CGES at the intrinsic frequency of the stomach had no effects on postprandial antral contractions. 3) SDPS or CGES at the intrinsic frequency of the stomach had no effects on gastric tone, but CGES at a tachygastrial frequency reduced gastric tone. 4) SDPS excited vagal activity, but CGES at a tachygastrial frequency inhibited vagal activity. Conclusions: SDPS improves antral contractions but does not affect gastric tone and thus may have potential for treating gastric motility disorders. CGES at a tachygastrial frequency inhibits gastric tone and contractions and may be applicable for treating obesity. The excitatory or inhibitory effects of SDPS/CGES on gastric motility may be mediated via vagal pathway. abstract_id: PUBMED:30525253 The Effect of Gastric Electrical Stimulation on Small Bowel Motility in Patients With Gastroparesis and Concomitant Pancreatic and Small Bowel Dysfunction: From Animal Model to Human Application. Background/aims: Patients with gastroparesis often have biliary/pancreatic and small bowel symptoms but the effects of gastric electrical stimulation on small bowel electrical activity of the mid-gut have not been studied. Animal model aim: Establish gastric and upper small bowel/biliary slow wave activity relationships with electrical stimulation. Human study aim: Demonstrate improvement in symptoms associated with proximal small bowel dysmotility in gastric stimulated patients. Materials And Methods: Animal model: In vivo evoked responses of duodenal and Sphincter of Oddi measures recorded during gastric electrical stimulation in a nonsurvival swine model (N = 3). High-resolution electrical slow wave mapping of frequency, amplitude, and their ratio, for duodenal and Sphincter of Oddi electrical activity were recorded. Human study: Patients (N = 8) underwent temporary gastric stimulation with small bowel electrodes. Subjective and objective data was collected before and after temporary gastric stimulation. Symptom scores, gastric emptying times, and mucosal electrograms via low-resolution mapping were recorded. Results: Animal gastric stimulation resulted in some changes in electrical activity parameters, especially with the highest energies delivered but the changes were not statistically significant. Human study revealed improvement in symptom and illness severity scores, and changes in small bowel mucosal slow wave activity. 
Conclusions: Gastric electrical stimulation in an animal model seems to show nonsignificant effects on small bowel slow wave activity and myoelectric signaling, suggesting the existence of intrinsic neural connections. Human data shows more significance, with possible potential for therapeutic use of electrical stimulation in patients with gastroparesis and pancreato-biliary and small bowel symptoms of the mid-gut. This study was limited by the nonsurvival pig model, small sample size, and open label human study. abstract_id: PUBMED:25228944 Updates on gastric electrical stimulation to treat obesity: Systematic review and future perspectives. Aim: To evaluate the current state-of-the-art of gastric electrical stimulation to treat obesity. Methods: Systematic reviews of all studies have been conducted to evaluate the effect of different types of gastric electrical stimulation (GES) on obesity. Results: Thirty-one studies consisting of a total of 33 different trials were included in the systematic review for data analysis. Weight loss was achieved in most studies, especially during the first 12 mo, but only very few studies had a follow-up period longer than 1 year. Among those that had a longer follow-up period, many were from the Transcend(®) (Implantable Gastric Stimulation) device group and maintained significant weight loss. Other significant results included changes in appetite/satiety, gastric emptying rate, blood pressure and neurohormone levels or biochemical markers such as ghrelin or HbA1c respectively. Conclusion: GES holds great promise to be an effective obesity treatment. However, stronger evidence is required through more studies with a standardized way of carrying out trials and reporting outcomes, to determine the long-term effect of GES on obesity. abstract_id: PUBMED:15479976 Improvement of gastric motility with gastric electrical stimulation in STZ-induced diabetic rats. Aims: The aims of this study were to observe whether gastric motility was impaired in streptozotocin (STZ)-induced diabetic rats and whether gastric electrical stimulation was able to restore the impaired motility. Methods: Ten control rats and 30 STZ-induced diabetic rats were used in this study. Gastric slow waves were recorded at baseline and 0, 1, 2, 3 and 4 weeks after the injection of STZ or vehicle. Gastric emptying with (long or short pulses) or without gastric electrical stimulation was measured 6 weeks after STZ injection in a group of 10 diabetic rats each. Results: (1) STZ injection resulted in hyperglycemia and weight loss. (2) Gastric motility was impaired in the diabetic rats. The percentage of normal slow waves was progressively reduced 2 weeks after STZ injection. Compared with the control rats, gastric emptying in the diabetic rats was significantly delayed 6 weeks after STZ injection (60 +/- 3 vs. 79 +/- 2%, p < 0.02). (3) Gastric electrical stimulation with either long or short pulses accelerated gastric emptying in the diabetic rats. (4) Gastric electrical stimulation with long but not short pulses was capable of normalizing gastric dysrhythmia in the diabetic rats. Conclusion: Our data show that gastric motility is impaired in STZ-induced diabetic rats as reflected by a progressive reduction in the percentage of normal gastric slow waves and delayed gastric emptying.
Moreover, here we show that gastric electrical stimulation normalizes delayed gastric emptying in diabetic rats and this normalization is not attributed to the effect of gastric electrical stimulation on gastric slow waves. abstract_id: PUBMED:7627865 Effect of electrical stimulation on gastric electrical activity, motility and emptying. The aim was to measure the effect of gastric electrical stimulation on the frequency of canine antral pacesetter potentials (PPs), the strength of antral contractions, and the rate of gastric emptying while fasting, after feeding and with pentagastrin stimulation. Four conscious dogs with a stimulating electrode placed 10 cm proximal to the pylorus and recording electrodes and strain gauges placed 7, 5 and 3 cm proximal to the pylorus underwent myoelectric and strain gauge recordings while fasting, after feeding (250 ml 5% dextrose labelled with polyethylene glycol), and during pentagastrin infusion (0.5 micrograms kg-1 min-1) on four separate days. On each day, electrical stimulation was done using one of four stimulation frequencies (0, 6, 30 and 1200 stimuli per minute [s.p.m.]). Stimulation at 6 and 30 s.p.m. increased the fasting and fed PP frequency, whereas 1200 s.p.m. stimulation did not. Feeding decreased the maximum driven frequency, and pentagastrin increased it. Neither the motility index nor the gastric emptying rate were consistently changed by stimulation at any frequency. In conclusion, canine proximal antral stimulation at 6 and 30 s.p.m. sped PP frequency during fasting and after feeding, but stimulation over a wide range of frequencies had little effect on gastric contractions and emptying. abstract_id: PUBMED:21253385 Gastric electrical stimulation for gastroparesis: a goal greatly pursued, but not yet attained. The lack of an effective medical treatment for gastroparesis has pushed the research of new techniques of gastric electrical stimulation (GES) for nearly half a century of experimentation with a large variety of electrical stimuli delivered to the gastric wall of animals and patients with gastroparesis. Three principal methods are currently available: gastric low-frequency/high-energy GES with long pulse stimulation, high-frequency/low-energy GES with short pulse stimulation and neural sequential GES. The first method aims to reset a regular slow wave rhythm, but has variable effects on contractions and requires devices with large and heavy batteries unsuitable for implantation. High-frequency/low-energy GES, although inadequate to restore a normal gastric electro-mechanical activity, improves dyspeptic symptoms, such as nausea and vomiting, giving patients a better quality of life together with a more satisfactory nutritional status and is suitable for implantation. Unfortunately, the numerous clinical studies using this type of GES, with the exception of two, were not controlled and there is a need for definitive verification of the effectiveness of this technique to justify the cost and the risks of this procedure. The last method, which is neural sequential GES, consists of a microprocessor-controlled sequential activation of a series of annular electrodes along the distal two thirds of the stomach and is able to induce propagated contractions causing forceful emptying of the gastric content. The latter method is the most promising, but has been used only in animals and needs to be tested in patients with gastroparesis before it is regarded as a solution for this disease. 
abstract_id: PUBMED:16984493 Systematic review: applications and future of gastric electrical stimulation. Background: Over the past 20 years, gastric electrical stimulation has received increasing attention among researchers and clinicians. Aim: To give a systematic review on the effects, mechanisms and applications of gastric electrical stimulation. Methods: Medline was used to identify the articles to be included in this review. Key words used for the search included gastric electrical stimulation, gastric pacing, electrical stimulation, stomach, gastrointestinal motility, central nervous system, gastroparesis, nausea and vomiting; obesity and weight loss. Combinations of these keywords were used to identify relevant articles. Most of the articles included in this review ranged from 1985 to 2006. Results: Based on the general search, the review was structured as follows: (i) peripheral and central effects and mechanisms of gastric electrical stimulation; (ii) clinical applications of gastric electrical stimulation for gastroparesis and obesity and (iii) future development of gastric electrical stimulation. Conclusions: Great progress has been made during the past decades. Gastric electrical stimulation has been shown to be effective in normalizing gastric dysrhythmia, accelerating gastric emptying and improving nausea and vomiting. An implantable device has been made available for treating gastroparesis as well as obesity. However, development of a new device and controlled clinical studies are required to further prove the clinical efficacy of gastric electrical stimulation. Answer: The provided abstracts do not directly address the concept of cutaneous gastric electrical stimulation or its effects on gastric motility in dogs. The abstracts focus on gastric electrical stimulation (GES) involving electrodes attached to the smooth muscle of the stomach or implanted devices. For example, one study systematically evaluated the effects of synchronized dual-pulse gastric electrical stimulation (SDPS) and conventional gastric electrical stimulation (CGES) on antral contractions, gastric tone, and autonomic functions in dogs, finding that SDPS improved antral contractions but did not affect gastric tone, suggesting potential for treating gastric motility disorders (PUBMED:21992252). Another study observed the effects of GES on gastric motility in streptozotocin (STZ)-induced diabetic rats, showing that GES normalized delayed gastric emptying in diabetic rats (PUBMED:15479976). However, none of the abstracts mention cutaneous or transcutaneous approaches to GES, which would involve stimulating the stomach from the skin surface rather than through implanted electrodes. Therefore, based on the provided abstracts, it cannot be concluded that cutaneous gastric electrical stimulation is a new option for GES or that it alters gastric motility in dogs. Further research would be needed to explore this specific method of GES and its potential effects on gastric motility.
Instruction: Do neural factors underlie age differences in rapid ankle torque development? Abstracts: abstract_id: PUBMED:8675928 Do neural factors underlie age differences in rapid ankle torque development? Objectives: Rapid torque development is substantially slower in healthy old adults compared with young adults, but the underlying cause of this age-related loss remains unclear. Measurements of myoelectric signals in ankle dorsi- and plantarflexor muscles during rapid exertions were used to explore the extent to which the loss might be attributed to neural factors. Methods: Myoelectric signals were measured in a laboratory setting in 24 healthy young and 24 healthy old adult volunteers during rapid isometric and isokinetic torque development. Premotor times, muscle activation rates, and myoelectric activity levels of agonistic and antagonistic muscles were quantified. Results: There were few marked age differences in the premotor times or in the onset rates or magnitudes of agonistic muscles activities during maximum isometric and during isokinetic exertions. Premotor times were statistically associated with age but, in the mean, were only approximately 10 to 25 ms longer in the old. Age effects on agonist muscle activity magnitudes were significant only in the lateral gastrocnemius. Small decreases in antagonistic muscle activity levels with age were found. Conclusions: Given the outcomes of this study, the differences observed previously in rapid torque development abilities in healthy older adults, compared with healthy younger adults, seem attributable largely to differences in muscle contraction mechanisms rather than to differences in speeds of stimulus sensing or central processing of motor commands, or to differences in muscle recruitment strategies. abstract_id: PUBMED:33019077 Ankle torque forecasting using time-delayed neural networks. A method for ankle torque prediction ahead of the current time is proposed in this paper. The mean average value of EMG signals from four muscles, alongside the joint angle and angular velocity of the right ankle, were used as input parameters to train a time-delayed artificial neural network. Data collected from five healthy subjects were used to generate the dataset to train and test the model. The model predicted ankle torque for five different future times from zero to 2 seconds. Model predictions were compared to torque calculated from inverse dynamics for each subject. The model predicted ankle torque up to 1 second ahead of time with normalized root mean squared error of less than 15 percent while the coefficient of determination was over 0.85.Clinical Relevance- the potential of the model for predicting joint torque ahead of time is helpful to establish an intuitive interaction between human and assistive robots. This model has application to assist patients with neurological disorders. abstract_id: PUBMED:22189949 Effects of age and localized muscle fatigue on ankle plantar flexor torque development. Background And Purpose: Older adults often experience age-related declines in strength, which contribute to fall risk. Such age-related levels of fall risk may be compounded by further declines in strength caused by acute muscle fatigue. Both age- and fatigue-related strength reductions likely impact the ability to quickly develop joint torques needed to arrest falls. Therefore, the purpose of this study was to investigate the combined effects of age and localized muscle fatigue on lower extremity joint torque development. 
Methods: Young (mean age, 26 (2.5) years) and older (mean age, 71 (2.8) years) healthy male adults performed an isometric ankle plantar flexion force control task before and after an ankle plantar flexor fatiguing exercise. Force control performance was quantified using onset time, settling time, and rate of torque development. Results: Age-related increases and decreases were observed for onset time and rate of torque development, respectively. A fatigue-related decrease in rate of torque development was observed in young, but not older adults. Discussion: The results suggest performance declines that may relate to older adults' reduced ability to prevent falls. A fatigue-related performance decline was observed among young adults, but not older, suggesting the presence of age-related factors such as motor unit remodeling and alterations in perceived exertion. Conclusions: Older adults demonstrated an overall reduction in the ability to quickly produce ankle torque, which may have implications for balance recovery and fall risk among older adults. abstract_id: PUBMED:37725866 Decreased rate of torque development in ankle evertors for individuals with chronic ankle instability. Background: Individuals with chronic ankle instability have decreased peak torque during maximum voluntary contraction in ankle evertors/invertors, and hip abductors. However, it is unclear whether individuals with chronic ankle instability and/or copers demonstrate decreased rate of torque development in ankle evertors/invertors, and hip abductors. Methods: 54 university-aged participants (18 chronic ankle instability, 18 copers, and 18 controls) performed three maximal isometric contractions for ankle evertors and invertors, and hip abductors. Rate of torque development was defined as the linear slope of the torque-time curve during the first 200 ms of each contraction and compared between the three groups using a one-way analysis of variance (α = 0.05). Findings: The chronic ankle instability group showed 38.1% less rate of torque development than the coper (P = 0.03 and d = 0.84) and 37.1% less than the control group (P = 0.03 and d = 1.03) in the ankle evertors. For the hip abductors, there were moderate effects between the chronic ankle instability group and the copers (P = 0.06 and d = 0.70), and control groups (P = 0.06 and d = 0.75). Interpretations: The observed between-groups differences in rate of torque development indicate that restoring rate of torque development after lateral ankle sprain may be important to reduce risk of reinjury and development of chronic ankle instability. Clinicians should consider the rate of torque development in the ankle evertors and hip abductors during rehabilitation of chronic ankle instability patients. abstract_id: PUBMED:37765769 Using Deep Learning Models to Predict Prosthetic Ankle Torque. Inverse dynamics from motion capture is the most common technique for acquiring biomechanical kinetic data. However, this method is time-intensive, limited to a gait laboratory setting, and requires a large array of reflective markers to be attached to the body. A practical alternative must be developed to provide biomechanical information to high-bandwidth prosthesis control systems to enable predictive controllers. In this study, we applied deep learning to build dynamical system models capable of accurately estimating and predicting prosthetic ankle torque from inverse dynamics using only six input signals.
We performed a hyperparameter optimization protocol that automatically selected the model architectures and learning parameters that resulted in the most accurate predictions. We show that the trained deep neural networks predict ankle torques one sample into the future with an average RMSE of 0.04 ± 0.02 Nm/kg, corresponding to 2.9 ± 1.6% of the ankle torque's dynamic range. Comparatively, a manually derived analytical regression model predicted ankle torques with a RMSE of 0.35 ± 0.53 Nm/kg, corresponding to 26.6 ± 40.9% of the ankle torque's dynamic range. In addition, the deep neural networks predicted ankle torque values half a gait cycle into the future with an average decrease in performance of 1.7% of the ankle torque's dynamic range when compared to the one-sample-ahead prediction. This application of deep learning provides an avenue towards the development of predictive control systems for powered limbs aimed at optimizing prosthetic ankle torque. abstract_id: PUBMED:8808994 Effects of age on rapid ankle torque development. Background: When balance is disturbed, often only fractions of a second are available in which to make the initial responses needed for its restoration. Abilities to develop joint torques rapidly may be critical to such responses. We undertook this study to quantify age effects among healthy adults in abilities to develop ankle joint torques rapidly. Methods: Ankle dorsiflexion (DF) and plantarflexion (PF) torque development during rapid isometric and during isokinetic (30, 60, 120, 180, and 240 deg/sec) exertions was assessed in 24 healthy young (mean age 23 years) and 24 healthy old adults (mean age 72 years). The effects of age, gender, and torque direction on the times needed to reach given torque magnitudes, maximum rates of isometric torque development (MRTD), and maximum isokinetic torques were examined. Results: The old adults required substantially more time to reach given torque magnitudes than the young adults. For example, the young and old females needed approximately 236 and 337 msec to develop 15 Nm of DF torque, of which 141 and 164 msec were reaction times. Isometric MRTD were 25 to 36% lower in the old than in the young adults. The age declines in isometric torque development time were associated with losses in maximum isometric strength. Maximum isokinetic torques developed by the old were 20 to 40% lower than those of young adults. The percent losses in isokinetic torques with age were independent of joint angular velocity for PF, but increased with velocity for DF. Conclusions: We found substantial age declines in abilities of healthy old adults to rapidly develop ankle joint torques. The capacities of even healthy old adults to recover balance or to carry out other time-critical actions that require moderate-to-substantial strengths may be considerably degraded by these declines. abstract_id: PUBMED:16793286 Effects of joint angle and age on ankle dorsi- and plantar-flexor strength. This study aimed at examining the effects of joint angle and age on the maximal voluntary contraction (MVC) torque, for the agonist and antagonist muscle groups around the ankle, i.e., the dorsi- and plantar-flexors. To this aim, neural and muscular factors were investigated in two groups of healthy men: 11 young (mean age, 24 years) and 18 older (mean age, 78 years). 
Plantar-flexion (PF) and dorsiflexion (DF) isometric MVC torques were measured in three different ankle joint angles and surface electromyographic activities of the triceps surae and of the tibialis anterior muscles were recorded. The main findings were that the DF-to-PF MVC torque ratio varied with joint angle and age, indicating that aging affected at different rates the two muscle groups: this ratio was always higher in older adults because of the PF strength decline with aging. Furthermore, the DF MVC torque-angle relationship appeared to be especially explained by neural factors, whereas the relationship in PF seemed to be mainly due to muscular parameters. These relationships would not be a discriminating factor between the two age groups. As a consequence, measurements at one ankle joint angle, whatever the angle, are thus enough to examine the differences within age groups and to perform a rapid assessment of the imbalance at the ankle joint. abstract_id: PUBMED:33481823 Individual differences in intrinsic ankle stiffness and their relationship to body sway and ankle torque. When standing, intrinsic ankle stiffness is smaller when measured using large perturbations, when sway size is large, and when background torque is low. However, there is a large variation in individual intrinsic ankle stiffness. Here we determine if individual variation has consequences for postural control. We examined the relationship between ankle stiffness, ankle torque and body sway across different individuals. Ankle stiffness was estimated in 19 standing participants by measuring torque responses to small, brief perturbations. Perturbation sizes of 0.2 & 0.9 degrees (both lasting 140 ms) measured short- and long-range stiffness respectively, while participants either stood quietly on a fixed platform or were imperceptibly tilted to reduce stability (0.1 Hz sinusoid; 0.2 & 0.4 deg). The spontaneous body sway component (natural random relatively rapid postural adjustments) and background ankle torque were averaged from sections immediately before perturbations. The results show that, first, intrinsic ankle stiffness is positively associated with ankle torque, and that this relationship is stronger for long-range stiffness. Second, intrinsic ankle stiffness is negatively associated with body sway, but, in contrast to the relationship with torque, this relationship is stronger for short-range stiffness. We conclude that high short-range intrinsic ankle stiffness is associated with reduced spontaneous sway, although the causal relationship between these two parameters is unknown. These results suggest that, in normal quiet standing where sway is very small, the most important determinant of intrinsic ankle stiffness may be stillness. In less stable conditions, intrinsic ankle stiffness may be more dependent on ankle torque.
Thus, the purpose of this study was to examine the magnitude and rate of torque production in younger and older adults under reactive balance conditions. Methods: Older (OA) and younger (YA) adults received forward and backward support surface translations of varying amplitudes and velocities. Maximum ankle muscle torque (maxMa) and rate of change of ankle muscle torque (Ma) following a perturbation were calculated. Results: Two balance responses emerged: a no-step and a step response. With increasing perturbation difficulty, YA and OA used different responses. The no-step and step responses were examined for age-group differences in the force characteristics. No significant age-group differences were found for maxMa or rate of change of Ma within either no-step or step responses. Conclusion: The results of this study suggest that neither the magnitude nor rate of ankle muscle torque production, as produced during the initial balance response in this set of reactive balance control tasks, determines the different balance responses seen in younger versus older adults. abstract_id: PUBMED:15734242 Ankle eversion torque response to sudden ankle inversion: torque response in unbraced, braced, and pre-activated situations. In 13 young ankle stable subjects, ankle eversion torque and peroneal EMG were simultaneously recorded in response to sudden ankle inversion. The eversion torque response was bi-phasic. The initial development of torque, which was responsible for 30% of the maximal eversion torque response, was observed 135 ms after the start of platform rotation and correlated well with the onset of the automatic postural peroneal EMG response. The remaining eversion torque response commenced after 305 ms, strongly correlating with the onset of the peroneal long latency voluntary EMG activity. With the ankle unbraced, 66% of the maximal torque level was reached in 326 ms. While braced, the same torque magnitude was reached using 230 ms (p<0.02), and pre-activation of the peroneal muscles allowed the subjects to reach the same level of torque in 89 ms (p<0.0005). Prior to the study, a common reaction pattern to sudden inversion was expected in an ankle stable population, but review of the eversion torque and EMG data from the 13 subjects revealed three different voluntary reaction patterns: 10 subjects showed an efficient activation of evertor muscles; two subjects stiffened their ankles with activation of both in- and evertor muscles; and one subject showed a marginal voluntary activation of the ankle evertors. The results of the study indicate that the reaction to sudden ankle inversion is not solely automatic. The main part of the torque response is voluntarily mediated and inter-individual differences in strategy seem to exist in healthy subjects. Answer: Neural factors do not appear to be the primary underlying cause of age differences in rapid ankle torque development. The study by PUBMED:8675928 found that the differences in rapid torque development abilities between healthy older adults and younger adults are largely attributable to differences in muscle contraction mechanisms rather than to differences in speeds of stimulus sensing, central processing of motor commands, or muscle recruitment strategies. This conclusion is based on the observation that there were few marked age differences in premotor times or in the onset rates or magnitudes of agonistic muscle activities during maximum isometric and isokinetic exertions.
The study also noted that premotor times were only slightly longer in older adults, and age effects on agonist muscle activity magnitudes were significant only in the lateral gastrocnemius, with small decreases in antagonistic muscle activity levels with age. Other studies have also explored the effects of age on ankle torque development. PUBMED:8808994 found substantial age declines in the ability of healthy older adults to rapidly develop ankle joint torques, which may degrade their capacity to recover balance or carry out time-critical actions. PUBMED:22189949 reported that older adults demonstrated a reduction in the ability to quickly produce ankle torque, which could have implications for balance recovery and fall risk. PUBMED:16793286 found that the decline in plantarflexion strength with aging affected the maximal voluntary contraction torque-angle relationship, suggesting that aging affects muscle groups at different rates. In summary, while neural factors such as premotor times and muscle activation rates may be associated with age, the predominant factors contributing to age differences in rapid ankle torque development seem to be related to changes in muscle contraction mechanisms rather than neural processing or recruitment strategies.
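Two of the quantities that recur in this record can be made concrete with a short calculation: the rate of torque development, defined in PUBMED:37725866 as the linear slope of the torque-time curve over the first 200 ms of a contraction, and the time needed to reach a target torque, analogous to the times to reach 15 Nm of dorsiflexion torque reported in PUBMED:8808994. The sketch below applies both definitions to a synthetic torque trace; the sampling rate, the exponential rise, and the 40 Nm plateau are assumptions made for the illustration and are not data from the cited studies (whose reported times also include reaction time, which is not modelled here).

    import numpy as np

    # Synthetic stand-in for a recorded isometric torque trace (assumed values).
    fs = 1000                                 # sampling rate in Hz
    t = np.arange(0, 0.6, 1 / fs)             # 600 ms of signal
    torque = 40 * (1 - np.exp(-t / 0.15))     # Nm, hypothetical rise toward 40 Nm

    # Rate of torque development: linear slope of the torque-time curve
    # over the first 200 ms (definition used in PUBMED:37725866).
    window = t <= 0.200
    rtd = np.polyfit(t[window], torque[window], 1)[0]   # Nm/s

    # Time to reach a target torque level, here 15 Nm.
    target = 15.0
    time_to_target = t[np.argmax(torque >= target)]

    print(f"RTD over the first 200 ms: {rtd:.0f} Nm/s")
    print(f"Time to reach {target:.0f} Nm: {time_to_target * 1000:.0f} ms")

On this synthetic trace the two numbers come out in the low hundreds of Nm/s and below 100 ms respectively; with real dynamometer data the same code would be applied to the measured torque channel after contraction onset has been identified.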
Instruction: Are there differences in information given to private and public prenatal patients? Abstracts: abstract_id: PUBMED:8333444 Are there differences in information given to private and public prenatal patients? Objectives: In 1989 the United States Public Health Service Expert Panel on the Content of Prenatal Care reported that health education should become a more integral part of prenatal care. Key questions about providing this education have not been examined. Our study compared the type of information provided to women who sought prenatal care in a public clinic and to those who were seen in a private practice and the degree to which the patients were satisfied with the information they received. Study Design: One hundred fifty-nine pregnant women (80 seen in a public clinic, 79 seen in a private practice) completed two questionnaires about 38 topics commonly cited as important during pregnancy. At the first prenatal visit, the women reported their level of interest in each of the topics. At 36 to 40 weeks' gestation the women completed a second questionnaire to assess whether information was provided for each topic and whether they had learned as much as desired. Results: Overall, the women in the public sector received more information than did the women who were cared for privately. This was statistically significant at the p < 0.05 level for 25 of the 38 topics. Satisfaction with information learned was highly correlated with information received during prenatal care, but, surprisingly, it was not shown to be associated with the patient's interest level at the first visit. Fewer than 50% of private patients reported having received information about such important topics as acquired immunodeficiency syndrome, sexually transmitted diseases, preterm birth prevention, family planning, and family violence. Conclusions: The one-on-one approach to health education in pregnancy usually used in the private setting may not facilitate addressing many topics believed to be important components of contemporary prenatal care. Providers of private prenatal care should initiate discussion of prenatal health education topics rather than relying on patient interest in requesting information. Just as public prenatal care programs have devoted significant resources to more comprehensive prenatal education, the providers in the private sector must assure that pregnant women receive the same comprehensive information. In so doing, these providers can help promote an optimal outcome for their patients, their patients' unborn children, and the family unit. abstract_id: PUBMED:24337056 Quality of prenatal care in public and private services. Purpose: To analyze prenatal care in public and private services. Methods: A cross-sectional, retrospective and analytic study was conducted based on the audit of files of pregnant women who had given birth at a reference hospital for low risk cases in the area of Campos Gerais - Paraná State, in the first semester of 2011. The Yates chi-squared test or Fisher's exact test was used to determine the association between the lack of registration files for pregnant women regarding prenatal assistance in the public and private services, with the level of significance set at p ≤ 0.05. The quality of prenatal care was determined based on the percentage of non-registrations. Results: A total of 500 prenatal files were analyzed. There was a significant attendance of six or more prenatal visits, with a larger proportion in the private service (91.9%).
The laboratory and obstetric exams most frequently not registered in the public and in the private services were, respectively: hepatitis B (79.3 and 48.4%), hemoglobin and hematocrit values (35.6 and 21.8%), anti-HIV serology (29.3 and 12.9%), fetal movement (84.3 and 58.9%) and length (60.4 and 88.7%), edema verification (60.9 and 54.8%), and fetal presentation (52.4 and 61.3%). The audit of the files of pregnant women made it possible to determine the quality of the prenatal care provided and confirmed differences in assistance according to the place, showing excellent and good quality of private care, and regular public care for ultrasonography and blood type/Rh factor; regular quality of private care and poor quality of public care for urine tests and weight. For the other types of laboratory and obstetric exams and vaccines, the quality was poor or very poor in both types of services. Conclusion: The differences between the services showed that there is a need for actions aiming at the improvement of the prenatal care provided by public services. abstract_id: PUBMED:16675474 Public and private prenatal care providers in urban Mexico: how does their quality compare? Objective: To evaluate variations in prenatal care quality by public and private clinical settings and by household wealth. Design: The study uses 2003 data detailing retrospective reports of 12 prenatal care procedures received that correspond to clinical guidelines. The 12 procedures are summed up, and prenatal care quality is described as the average procedures received by clinical setting, provider qualifications, and household wealth. Setting: Low-income communities in 17 states in urban Mexico. Participants: A total of 1253 women of reproductive age who received prenatal care within 1 year of the survey. Main Outcome Measure: The mean of the 12 prenatal care procedures received, reported as unadjusted and adjusted for individual, household, and community characteristics. Results: Women received significantly more procedures in public clinical settings [80.7, 95% confidence interval (CI) = 79.3-82.1; P ≤ 0.05] compared with private (60.2, 95% CI = 57.8-62.7; P ≤ 0.05). Within private clinical settings, an increase in household wealth is associated with an increase in procedures received. Care from medical doctors is associated with significantly more procedures (78.8, 95% CI = 77.5-80.1; P ≤ 0.05) compared with non-medical doctors (50.3, 95% CI = 46.7-53.9; P ≤ 0.05). These differences are independent of individual, household, and community characteristics that affect health-seeking behavior. Conclusions: Significant differences in prenatal care quality exist across clinical settings, provider qualifications, and household wealth in urban Mexico. Strategies to improve quality include quality reporting, training, accreditation, regulation, and franchising. abstract_id: PUBMED:15544642 Private care and public health: do vaccination and prenatal care rates differ between users of private versus public sector care in India? Objective: To determine whether patients who use private sector providers for curative services have lower vaccination rates and are less likely to receive prenatal care. Data Sources/study Setting: This study uses data from the 52nd round of the National Sample Survey, a nationally representative socioeconomic and health survey of 120,942 rural and urban Indian households conducted in 1995-1996.
Study Design: Using logistic regression, we estimate the relationship between receipt of preventive care at any time (vaccinations for children, prenatal care for pregnant women) and use of public or private care for outpatient curative services, controlling for demographics, household socioeconomic status, and state of residence. Data Collection/extraction Methods: We analyzed samples of children ages 0 to 4 and pregnant women who used medical care within a 15-day window prior to the survey. Principal Findings: With the exception of measles vaccination, predicted probabilities of the receipt of vaccinations and prenatal care do not differ based on the type of provider at which children and women sought curative care. Children and pregnant women in households who use private care are almost twice as likely to receive preventive care from private sources, but the majority still obtains preventive care from public providers. Conclusions: We do not find support for the hypothesis that children and pregnant women who use private care are less likely to receive public health services. Results are consistent with the notion that Indian households are able to successfully navigate the coexisting public and private systems, and obtain services selectively from each. However, because the study employed an observational, cross-sectional study design, findings should be interpreted cautiously. abstract_id: PUBMED:12567924 Prenatal care services in the public and private arena. Purpose: This exploratory study described the prenatal care experience in the public and private arena from the perceptions of childbearing women using interpretive interactionism. Data Sources: A face-to-face interview comprised of eight open-ended questions was used to obtain pregnant women's perceptions of their prenatal care experience and prenatal care needs. The purposive sample consisted of six women who received private prenatal care and 14 women who received public prenatal care. Conclusions: Five essential elements of the prenatal care experience were identified. Prenatal care was viewed as a cooperative effort between informal self-care and formal care by health professionals. Issues related to individuality and normality were important considerations in the delivery of prenatal care. Implications For Practice: Controversy exists over the effectiveness of prenatal care in preventing poor outcomes, as the definition of what constitutes adequate prenatal care remains unclear. Advanced practice nurses (APNs) continue to play a pivotal role in the provision of prenatal care services. The expanded knowledge and skills possessed by APNs place them in a pivotal position to develop and implement individualized, developmentally appropriate prenatal care that the women in this study so desperately wanted. In addition, they can assist women in continuing the health-promoting behaviors initiated prenatally throughout their lifespan. abstract_id: PUBMED:35293633 Providing a service or seeking a favor? The role of private prenatal care on the continuity of care in Serbian public maternity hospitals. Background: For pregnant and birthing women, maintaining a relationship with the same health care practitioner is critical to establishing a sense of safety. In postsocialist Serbia, where care is fragmented and depersonalized, this continuity is often missing. Patients are frequently handed over to new practitioners, sometimes without even knowing the name of the person overseeing their care.
In response, many women turn to the private sector to keep the same practitioner across their prenatal period and their birth by establishing a personal "connection" within the hospital that they believe will shield them from mistreatment during childbirth. Methods: Data presented in this paper were collected through participant observation in one public maternity hospital and one public primary care center in Serbia; semistructured interviews with 14 physicians; and semistructured interviews with 80 women who had given birth in one public maternity hospital in Serbia. Results: Public maternity hospital physicians who supplement their income working in the private sector have the power to blur the distinctions between favors and services. They offer continuity of personalized care to their private clients/patients once they enter the public system, and they themselves obtain social and economic security unavailable to those working in only one sector. At the same time, there is evidence that personalized continuity of care does not actually shield women from mistreatment in hospitals. Conclusions: The private sector is reshaping existing informal strategies and blurring the lines between formal and informal payments. In the postsocialist context, consumer practices are not separate from, but entangled with, informality. Neoliberalism has not brought about empowerment and eliminated informality, but has instead further exacerbated existing inequalities in maternity care in Eastern Europe. abstract_id: PUBMED:30047079 Implementing Group Prenatal Care in Southwest Georgia Through Public-Private Partnerships. Introduction CenteringPregnancy® is well-regarded as an innovative group model of prenatal care. In 2009, Georgia's Southwest Public Health District partnered with local obstetricians and medical centers to expand prenatal care access and improve perinatal outcomes for low-income women by implementing Georgia's first public health administered CenteringPregnancy program. This paper describes the successful implementation of CenteringPregnancy in a public health setting with no prior prenatal services; assesses the program's first 5-year perinatal outcomes; and discusses several key lessons learned. Methods Prenatal and hospital medical records of patients were reviewed for the time period from October 2009 through October 2014. Descriptive analyses were conducted to examine demographic and clinical characteristics of women initiating prenatal care and to assess perinatal outcomes among patients with singleton live births who attended at least three CenteringPregnancy sessions or delivered prior to attending the third session. Results Six hundred and six low-income women initiated prenatal care; 55.4 and 36.4% self-identified as non-Hispanic black and Hispanic, respectively. The median age was 23 years (IQR 20, 28). Nearly 69% initiated prenatal care in the first trimester. Perinatal outcomes were examined among 338 singleton live births. The 2010-2014 preterm birth rate (% of births < 37 weeks gestation at delivery) and low birth weight rate (% of births < 2500 g) were 9.1 and 8.9%, respectively. Nearly 77% of women initiated breastfeeding. Discussion CenteringPregnancy administered via public-private partnership may improve access to prenatal care and perinatal outcomes for medically underserved women in low-resource settings. abstract_id: PUBMED:36349928 Exploring differences between public and private providers in primary care: findings from a large Swedish region.
This study contributes to the sparse literature on differences between public and private primary care practices (PCCs). The purpose was to explore if differences in performance and characteristics between public and private PCCs persist over time in a welfare market with patient choice and provider competition, where public and private providers operate under similar conditions. The analysis is based on data from a national patient survey and administrative registries in a large Swedish region, covering PCC observations in 2010 and 2019, i.e., the year after and 10 years after introducing choice and competition in the region. The findings suggest that differences across owner types tend to decrease over time in welfare markets. Differences in patients' experiences, PCC size, patient mix and the division of labour have decreased or disappeared between 2010 and 2019. There were small but significant differences in process measures of quality in 2019; public PCCs complied better with prescription guidelines. While the results demonstrate a convergence between public and private PCCs with regard to their characteristics and performance, differences in patients' experiences with regard to socioeconomic conditions persisted. Such unwarranted variation calls for continued attention from policy makers and further research about causes. abstract_id: PUBMED:33230826 National data on the early clinical use of non-invasive prenatal testing in public and private healthcare in Denmark 2013-2017. Introduction: In Denmark, non-invasive prenatal testing (NIPT) has been used since 2013. We aimed to evaluate the early clinical use of NIPT in Danish public and private healthcare settings before NIPT became an integrated part of the national guidelines on prenatal screening and diagnosis in 2017. Material And Methods: NIPT data were collected between March 2013 and June 2017 from national public registries and private providers. Results from follow-up samples (chorionic villi, amniotic fluid, postnatal blood or fetal tissue) were included from The Danish Cytogenetics Central Registry and indications and outcome from The Danish Fetal Medicine Database. Results: A total of 3936 NIPT results were included in the study from public hospitals (n = 3463, 88.0%) and private clinics (n = 473, 12.0%). The total number of prenatal tests was 19 713 during the study period: 20% were NIPT analyses (n = 3936) and 80% invasive procedures (n = 15 777). Twenty-five percent of NIPTs in the private clinics were performed before gestational week 11+0, whereas NIPT in public settings was used only after combined first trimester screening (P < .001). Regardless of indication, the national public sensitivity was 96.9% (95% CI 82.0%-99.8%) for trisomy 21, 100% (95% CI 46.3%-100%) for trisomy 18, 100% (95% CI 5.5%-100%) for trisomy 13, and 87.0% (95% CI 74.5%-92.4%) for any fetal chromosomal aberration. Forty-seven true-positive NIPT results included cases of common aneuploidies (trisomy 21, n = 31; trisomy 18, n = 5; and trisomy 13, n = 1), sex chromosomal aberrations (n = 7) and atypical chromosomal aberrations (n = 3). One false-negative NIPT result occurred (trisomy 21). Of 47 cases, 21 (45%) cases with a true-positive NIPT result resulted in live births by choice; 11 of these children had Down and 4 had Edwards syndrome. Conclusions: The total number of NIPT analyses was low compared with the number of invasive procedures in the implementation period.
In contrast to the generally high termination rate after a positive result following invasive testing in Denmark, a high proportion of true-positive NIPT results from the public setting resulted in live births. NIPT may be an important risk-free alternative to invasive testing for a minority of women in the public setting who wish to use prenatal genetic testing for information only and not for reproductive decision-making. abstract_id: PUBMED:25636645 Assessing the effect on outcomes of public or private provision of prenatal care in Portugal. To evaluate whether public and private prenatal care users experience similar outcomes, taking into consideration maternal pre-pregnancy social and clinical risk. We studied 7,325 women who delivered single newborns at five public maternity units in Porto, Portugal. Health behaviors and prenatal care were self-reported; pregnancy complications and delivery data were retrieved from medical files. The odds of inadequate weight gain, continuing to smoke, gestational hypertension, gestational diabetes, caesarean section, preterm birth, low birthweight, and small- and large-for-gestational-age were estimated for public and private providers using logistic regression, stratified by pre-pregnancy risk profile, adjusted for maternal characteristics. 38 % of women used private prenatal care. Among low-risk women, public care users were more likely to gain excessive weight (OR 1.26; 95 % CI 1.06-1.57) and be diagnosed with gestational diabetes (OR 1.37; 95 % CI 1.01-1.86). They were less likely to have a caesarean (OR 0.63; 95 % CI 0.51-0.78) and more likely to deliver small-for-gestational-age babies (OR 1.48; 95 % CI 1.19-1.83). Outcomes were similar in high-risk women although preterm and pre-labor caesarean were less frequent in public care users (OR 0.64 95 % CI 0.45-0.91; OR 0.69 95 % CI 0.49-0.97). The amount of care was not significantly related to risk profile in either case. Public care users experienced similar outcomes to those using private care, despite higher pre-pregnancy disadvantage. Low-risk women need further attention if narrowing inequalities in birth outcomes remains a priority. Answer: Yes, there are differences in the information given to private and public prenatal patients. A study comparing the type of information provided to women seeking prenatal care in a public clinic versus those seen in private practice found that women in the public sector received more information than those who were cared for privately. This difference was statistically significant for 25 of the 38 topics examined. Surprisingly, satisfaction with the information learned was highly correlated with the information received during prenatal care, but not with the patient's interest level at the first visit. Fewer than 50% of private patients reported having received information about important topics such as acquired immunodeficiency syndrome, sexually transmitted diseases, preterm birth prevention, family planning, and family violence (PUBMED:8333444). Another study analyzing prenatal care in public and private services found significant differences in the quality of prenatal care provided by the two sectors. The quality of private care was rated as excellent and good, while public care was rated as regular for certain aspects such as ultrasonography and blood type/Rh factor, and poor for others like urine tests and weight. The study concluded that there is a need for improvement in the prenatal care provided by public services (PUBMED:24337056). 
In urban Mexico, women received significantly more prenatal care procedures in public clinical settings compared to private ones. The quality of care from medical doctors was also associated with significantly more procedures compared to non-medical doctors, independent of individual, household, and community characteristics (PUBMED:16675474). In India, a study found that with the exception of measles vaccination, predicted probabilities of the receipt of vaccinations and prenatal care did not differ based on the type of provider at which children and women sought curative care. The study suggested that Indian households are able to navigate the coexisting public and private systems and obtain services selectively from each (PUBMED:15544642). In Serbia, the private sector has been found to offer continuity of personalized care to private clients/patients once they enter the public system, although there is evidence that personalized continuity of care does not actually shield women from mistreatment in hospitals (PUBMED:35293633). Overall, these studies indicate that there are indeed differences in the information and quality of prenatal care provided to patients in public versus private settings.
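The prenatal-care abstracts above summarize group differences mainly as odds ratios with 95% confidence intervals (e.g., OR 1.26; 95% CI 1.06-1.57 for excessive weight gain among public care users). As a reading aid, the sketch below shows how an unadjusted odds ratio and its Woolf (log-based) confidence interval follow from a 2x2 table. The counts are hypothetical, and the published estimates were additionally adjusted for maternal characteristics, which this sketch does not reproduce.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI from a 2x2 table.

    a: exposed with outcome      b: exposed without outcome
    c: unexposed with outcome    d: unexposed without outcome
    (hypothetical counts; real studies adjust for covariates)
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustration only: excessive weight gain among public vs. private prenatal care users
print(odds_ratio_ci(a=180, b=420, c=120, d=360))
```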
Instruction: Do quantitative magnetic resonance imaging parameters correlate with the clinical presentation and functional outcomes after surgery in cervical spondylotic myelopathy? Abstracts: abstract_id: PUBMED:24859570 Do quantitative magnetic resonance imaging parameters correlate with the clinical presentation and functional outcomes after surgery in cervical spondylotic myelopathy? A prospective multicenter study. Study Design: A prospective multicenter cohort study. Objective: To establish the relationship between preoperative quantitative magnetic resonance imaging (MRI) parameters and clinical presentation and postoperative outcomes in patients with cervical spondylotic myelopathy. Summary Of Background Data: Correlation of magnetic resonance imaging with clinical presentation and outcomes in cervical spondylotic myelopathy is poorly understood. Methods: A total of 134 magnetic resonance imaging scans were reviewed from 12 sites across North America. The transverse area (TA) of the spinal cord at the site of maximal compression was computed, and spinal cord signal intensity (SI) changes on T1-/T2-weighted imaging (WI) were evaluated. Detailed clinical assessments--neurological signs, symptoms, Nurick grade, modified Japanese Orthopaedic Association, segmental-tract score, and long-tract score of modified Japanese Orthopaedic Association, 30-m walk test, Short-Form 36 questionnaire, and neck disability index were performed at admission, 6 months, and 12 months postoperatively. Results: The total number of neurological signs in a patient correlated with TA (P = 0.01) and SI changes on T1-/T2WI (P = 0.05). Pre- and postoperative Nurick grade (P = 0.03, P = 0.02), modified Japanese Orthopaedic Association score (P = 0.005, P = 0.001), segmental-tract score (P = 0.05, P = 0.006), and long-tract score (P = 0.006, P = 0.002), 30-m walk test (P = 0.002, P = 0.01) correlated with TA. There was no significant difference in pre- and postoperative clinical scores in patients with/without SI changes. Patients with severe cord compression showed SI changes on T1-/T2WI more frequently (r =-0.27, r =-0.38). Pyramidal signs--plantar response, Hoffmann reflex and hyper-reflexia correlated with TA (P = 0.003, P = 0.0004, P = 0.024, respectively) and SI changes on T1/T2WI (P = 0.02). Conclusion: TA closely mirrors the clinical presentation of cervical spondylotic myelopathy and may be used in predicting surgical outcomes. Pyramidal signs correlated with TA and/or SI changes on T1-/T2WI. The total number of neurological signs in a patient correlated with TA. There was no significant relationship between TA, age and duration of symptoms. Level Of Evidence: 3. abstract_id: PUBMED:25349665 Application of magnetic resonance imaging in cervical spondylotic myelopathy. Cervical spondylotic myelopathy (CSM) is the most common cause of spinal cord dysfunction and is caused by static or dynamic repeated compression of the spinal cord resulting from degenerative arthritis of the cervical spine and some biological injuries to the cervical spine. The T2 signal change on conventional magnetic resonance imaging (MRI) is most commonly associated with neurological deficits. Diffusion tensor imaging and MR spectroscopy show altered microstructure and biochemistry that reflect patient-specific pathogenesis and can be used to predict neurological outcome and response to intervention. Functional MRI can help to assess the neurological functional recovery after decompression surgery for CSM. 
abstract_id: PUBMED:34919463 Comparison of Cervical Sagittal Parameters Between Radiographs and Magnetic Resonance Images in Patients With Cervical Spondylotic Myelopathy. Study Design: Observational study. Objective: As an important consideration of surgery, cervical sagittal balance is believed to be better assessed using standing radiograph than supine magnetic resonance imaging (MRI). However, few studies have researched this. Our study aimed to observe the correlations and differences in cervical sagittal parameters between radiograph and MRI in patients with cervical spondylotic myelopathy (CSM), and evaluate whether the change of position affects them. Methods: We analyzed 84 patients, measuring Cobb angle (CA), T1 slope (T1S), neck tilt (NT), and thoracic inlet angle (TIA). Inter- and intra-parameter analyses were performed to identify any difference between standing radiograph and supine MRI. Statistical correlations and differences between the parameters were compared. Results: There were excellent inter-observer agreement for each parameter (interclass correlation coefficient &gt;.75), and significant differences were observed in each parameter between radiograph and magnetic resonance imaging (P &lt; .05). Strong correlations were noted between the same parameters in radiograph and MRI. Cobb angle, T1S, and neck tilt were significantly correlated with thoracic inlet angle on both radiograph and MRI, and CA was significantly correlated with T1S on both radiograph and MRI (r: -1.0 to -.5 or .5 to 1.0). Conclusion: Supine MRI obviously underestimated the value of CA, T1S, and TIA. Therefore, standing cervical radiographs should be obtained in CSM patients to assess and determine surgical strategy, not only supine MRI. Moreover, we observed that NT and TIA were not constant morphological parameters. abstract_id: PUBMED:10084183 Can intramedullary signal change on magnetic resonance imaging predict surgical outcome in cervical spondylotic myelopathy? Study Design: A retrospective study evaluating magnetic resonance imaging, computed tomographic myelography, and clinical parameters in patients with cervical spondylotic myelopathy. Objectives: To investigate whether magnetic resonance imaging can predict the surgical outcome in patients with cervical spondylotic myelopathy. Summary Of Background Data: No previous studies have established whether areas of high signal intensity in T2-weighted magnetic resonance images can be a predictor of surgical outcomes. Methods: Fifty patients with cervical spondylotic myelopathy were examined by magnetic resonance imaging and computed tomographic myelography before surgery and by delayed computed tomographic myelography after surgery. The correlation between the recovery rate and the clinical and imaging parameters was analyzed. Results: The best prognostic factor was the transverse area of the spinal cord at maximum compression (correlation coefficient, R = 0.58). The presence of high signal intensity areas on T2-weighted magnetic resonance images correlated poorly with the recovery rate (R = -0.29). However, patients with multisegmental areas of high signal intensity on T2-weighted magnetic resonance images tended to have poor surgical results associated with muscle atrophy in the upper extremities. Postoperative delayed computed tomographic myelography showed that multisegmental areas of high signal intensity on T2-weighted magnetic resonance images probably represent cavitation in the central spinal cord. 
Conclusions: Patients with multisegmental areas of high signal intensity on T2-weighted magnetic resonance images tended to have poorer surgical results. However, the transverse area of the spinal cord at the level of maximum compression was a better prognostic indicator. abstract_id: PUBMED:24952255 Association of postoperative outcomes with preoperative magnetic resonance imaging for patients with concurrent multiple sclerosis and cervical stenosis. Background Context: Differentiating between multiple sclerosis (MS) and cervical stenosis (CS) can be difficult because of their overlapping symptoms. Although studies have shown preoperative imaging criteria that are predictive of outcomes in either MS or CS individually, no studies have investigated these factors in patients that have concurrent MS and CS. Purpose: To investigate the associations between preoperative magnetic resonance imaging (MRI) findings and postoperative outcomes in patients with concurrent MS and CS with myelopathy. Study Design: A retrospective review. Patient Sample: All patients presenting with myelopathy who underwent cervical decompression surgery at a single tertiary-care institution between January 1996 and July 2011, diagnosed with concurrent MS and CS. Outcome Measures: Pre- and postoperative severity of myelopathy was assessed using the modified Japanese Orthopaedic Association (mJOA) scale. Methods: Preoperative imaging was assessed for stenosis, lesions, signal intensity (graded low, intermediate, or high), extent of lesion (focal or diffuse), and cord atrophy. Imaging was then correlated with postoperative myelopathy outcomes. Results: Forty-eight patients with MS and CS were reviewed for an average follow-up of 53 months. In the short term after surgery, there were 24 patients (50%) who showed improvement in the mJOA myelopathy score and 24 (50%) who did not improve. Significantly greater percentage of patients in the improvement group had high-intensity lesions on preoperative MRI as compared with the no-improvement group (p=.03). At long-term follow-up, there were 18 patients (37.5%) who showed postoperative improvement and 30 patients (62.5%) with no improvement. No significant differences were identified on preoperative imaging between those who improved postoperatively and those who did not. Conclusions: Although certain characteristic preoperative MRI findings are associated with postoperative outcomes in cohorts of either MS or CS patients, we did not find this to be the case in patients with concurrent MS and CS. Accordingly, the treatment of the MS/CS patient population should be unique as their outcomes may not be as good as those with CS but no MS. abstract_id: PUBMED:29566440 Transcranial magnetic motor evoked potentials and magnetic resonance imaging findings in paraplegic dogs with recovery of motor function. Background: Transcranial magnetic motor evoked potentials (TMMEP) are associated with severity of clinical signs and magnetic resonance imaging (MRI) findings in dogs with spinal cord disease. Hypothesis: That in initially paraplegic dogs with thoracolumbar intervertebral disc herniation (IVDH), MRI findings before surgery and TMMEPs obtained after decompressive surgery are associated with long-term neurological status and correlate with each other. Animals: Seventeen client-owned paraplegic dogs with acute thoracolumbar IVDH. Methods: Prospective observational study. TMMEPs were obtained from pelvic limbs and MRI (3T) of the spinal cord was performed at initial clinical presentation. 
Follow-up studies were performed ≤ 2 days after reappearance of motor function and 3 months later. Ratios of compression length, intramedullary hyperintensities' length (T2-weighted hyperintensity length ratio [T2WLR]), and lesion extension (T2-weighted-lesion extension ratio) in relation to the length of the 2nd lumbar vertebral body were calculated. Results: TMMEPs could be elicited in 10/17 (59%) dogs at 1st and in 16/17 (94%) dogs at 2nd follow-up. Comparison of TMMEPs of 1st and 2nd follow-up showed significantly increased amplitudes (median from 0.19 to 0.45 mV) and decreased latencies (from 69.38 to 40.26 ms; P = .01 and .001, respectively). At 2nd follow-up latencies were significantly associated with ambulatory status (P = .024). T2WLR obtained before surgery correlated with latencies at 2nd follow-up (P = .04). Conclusions: TMMEP reflect motor function recovery after severe spinal cord injury. abstract_id: PUBMED:30241007 Clinical and magnetic resonance imaging predictors of the surgical outcomes of patients with cervical spondylotic myelopathy. Objective: To determine whether clinical characteristics and signal and morphologic changes on magnetic resonance (MR) images of the spinal cord (SC) are associated with surgical outcomes for cervical spondylotic myelopathy (CSM). Patients And Methods: A total of 113 consecutive patients with cervical myelopathy underwent cervical decompression surgery in our hospital from January 2015 to January 2018. All patients with preoperative MR images available for review were recruited for this study. Research data included patient sex, age, duration of symptoms, surgical approach, compression level, preoperative mJOA (modified Japanese Orthopaedic Association) score, postoperative mJOA recovery rate, and complications. Imaging data included signal changes on T2-weighted MRI images (grade and extension on sagittal images, four types of signal changes on axial images according to the Ax-CCM system), SC compression, transverse area of the SC, and compression ratio. The t-test, Mann-Whitney U-test, Kruskal-Wallis H - test, analysis of variance, and regression analysis were used to evaluate the effects of individual predictors on surgical outcomes. Results: The study cohort included 85 males and 27 females with a mean age of 60.92 ± 8.93 years. The mean mJOA score improved from 10.24 ± 1.69 preoperatively to 15.11 ± 2.05 at the final follow-up (p &lt; 0.001). Patients in the poor outcome group were more likely to present with a longer duration of symptoms (p &lt; 0.001) and smaller transverse area of the SC (p &lt; 0.001). Bright T2-weighted high signal changes (T2HSCs), multisegmental high signal changes on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were associated with a poor outcome (p &lt; 0.001, p = 0.005, p &lt; 0.001, respectively). The maximum SC compression and compression ratio were not reliable predictors of surgical outcomes (p = 0.375, p = 0.055, respectively). The result of multivariate stepwise logistic regression showed that a longer duration of symptoms, multisegmental T2HSCs on sagittal MR images and fuzzy focal T2HSCs on axial MR images were significant risk factors of poor outcomes (p &lt; 0.001, p = 0.049, p = 0.016, respectively). Conclusion: A longer duration of symptom, multisegmental T2HSCs on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were highly predictive of a poor surgical outcome for CSM. 
Smaller transverse area of the SC and bright T2HSCs were also associated with the prognosis of CSM. abstract_id: PUBMED:9076887 Epidural spinal infection. Variability of clinical and magnetic resonance imaging findings. Study Design: This study evaluates the magnetic resonance characteristics of spinal epidural abscesses and their associated disc space infections. Objectives: The results were correlated with history, clinical, and laboratory findings to provide guidelines for early and appropriate diagnosis of epidural spinal infections. Summary Of Background Data: Imaging signs of spinal infections have been reported before, but not with special attention to early clinical and imaging findings. Methods: Thirteen patients (10 men, 3 women; age range, 32-64 years) with progressive sensorimotor deficit were studied. All patients had a neurologic examination after admission and a magnetic resonance imaging scan done within the first 48 hours. In all cases, T1-weighted images before and after administration of gadolinium were obtained. T2-weighted images were acquired in eight cases as well. Ten patients subsequently underwent open surgery; in three cases, a percutaneous biopsy and drainage was performed. Results: Cervical discitis was found in five patients, and thoracic discitis was seen in another five cases. Three patients had an epidural infection without a concomitant discitis. Neurologic and clinical findings varied considerably. Despite clinical signs of spinal cord involvement, a spinal cord lesion was demonstrated only once. Signal change in T2-weighted images may be the first sign of disc space infection. Because a neurologic deficit may occur before any change is visible, follow-up examinations may be required if epidural infection is suspected on clinical grounds. Conclusions: Magnetic resonance imaging is the appropriate method for diagnostic work-up of progressive neurologic deficit resulting from epidural infection. abstract_id: PUBMED:9682310 Magnetic resonance imaging of experimental subacute spinal cord compression. Study Design: Subacute compression of the spinal cord was applied to rats. The animals were chronologically observed using magnetic resonance imaging for more than 8 weeks after surgery and were killed for histopathologic examination. Objectives: To investigate the correlation of changes in signal intensity on magnetic resonance images with those observed in histopathologic study and with the degree of spinal cord compression and paralysis. Summary Of Background Data: No consensus has been reached concerning the correlation of magnetic resonance images to clinical symptoms of compressive myelopathy. Few reports are available in which magnetic resonance imaging findings are compared with histopathologic features in chronic or subacute experiments. Methods: In rats under general anesthesia, the T11 lamina was thinned and a slow increase in volume was applied. Hind limb paralysis appeared 1 week after the procedure and spontaneously subsided thereafter. The degree of spinal compression and signal intensity was observed chronologically using magnetic resonance imaging. The signal intensity on the final MR images was rated on a four-point scale and compared with histopathologic findings. Results: As spinal compression increased, the incidence of high signal intensity on long spin-echo images became higher. Low signal intensities on short spin-echo images were visible in animals in which compression and paralysis were the most severe. 
In these animals, cavitation and a dilated central canal were visible. High signal intensities on long spin-echo images reflected various pathologic changes. Conclusions: Changes in signal intensity on MR images are visible after the induction of myelopathy by high-pressure compression. These signal intensities may be useful in predicting the outcome of compressive myelopathy. abstract_id: PUBMED:7993244 A review of magnetic resonance imaging in spinal trauma. One hundred and ninety-five magnetic resonance (MR) images (1.5 Tesla) of 167 patients with neurological impairment following spinal trauma were reviewed. Acute cord injury produces central haemorrhagic necrosis that extends transversely and longitudinally with time and increased injury severity. Oedoma is more homogeneous, extensive and dominant in minimal lesions. Magnetic resonance appearances correlate with neurological status and outcome. Patients with MR evidence of cord blood had severe clinical lesions and failed to show useful clinical improvement. Patients with homogeneous 'oedema' improved to useful function. Lesion signal inhomogeneity relates to a worse prognosis. The clinical level correlates closely with cord blood or signal in homogeneity but imprecisely with homogeneous oedema. Disc herniations require differentiation from epidural blood and venous engorgement, which are prominent with bone displacement. Magnetic resonance is recommended in incomplete cord syndromes and in cord injuries with no apparent fracture, particularly if clinically deteriorating. Chronic injury consists of cavitation, extensive gliosis, cord atrophy and leptomeningeal fibrosis. Progressive myelopathy may result from cystic or non-cystic intramedullary lesions. Cord cysts are common and cyst fluid signal should closely follow cerebrospinal fluid. Turbulent cyst fluid motion is commoner in larger cysts and may predict those cysts more prone to propagate. Progressive syrinxes show typical appearances, usually with transverse septa. Atrophy and propagating syrinxes usually take years to develop. Leptomeningeal cysts and spinal stenosis caused by bone displacement and accelerated adjacent disc disease may cause late deterioration. In progressive myelopathy following injury, surgically drainable cysts are clearly differentiable from cord gliosis and atrophy. Answer: Yes, quantitative magnetic resonance imaging (MRI) parameters do correlate with the clinical presentation and functional outcomes after surgery in cervical spondylotic myelopathy (CSM). A prospective multicenter study found that the transverse area (TA) of the spinal cord at the site of maximal compression closely mirrors the clinical presentation of CSM and may be used in predicting surgical outcomes. The total number of neurological signs in a patient correlated with TA and spinal cord signal intensity (SI) changes on T1-/T2-weighted imaging (WI). Pre- and postoperative clinical scores such as the Nurick grade, modified Japanese Orthopaedic Association score, segmental-tract score, long-tract score, and the 30-m walk test correlated with TA. Pyramidal signs also correlated with TA and/or SI changes on T1-/T2WI (PUBMED:24859570). Additionally, the presence of high signal intensity areas on T2-weighted MRI images correlated poorly with the recovery rate after surgery. However, patients with multisegmental areas of high signal intensity on T2-weighted MRI images tended to have poorer surgical results, which may represent cavitation in the central spinal cord (PUBMED:10084183). 
Furthermore, a study on patients with concurrent multiple sclerosis and cervical stenosis found that certain characteristic preoperative MRI findings associated with postoperative outcomes in cohorts of either condition individually did not apply to patients with concurrent conditions, indicating that treatment and outcomes may differ in this patient population (PUBMED:24952255). In another study, bright T2-weighted high signal changes (T2HSCs), multisegmental high signal changes on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were associated with a poor outcome after surgery for CSM. The study concluded that a longer duration of symptoms, multisegmental T2HSCs on sagittal MR images, and fuzzy focal T2HSCs on axial MR images were highly predictive of a poor surgical outcome for CSM (PUBMED:30241007). Overall, these studies indicate that quantitative MRI parameters, particularly the transverse area of the spinal cord and signal intensity changes, have a significant correlation with clinical presentation and can be predictive of functional outcomes following surgery for CSM.
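Several of the abstracts in this block express the link between the spinal-cord transverse area (TA) at maximum compression and clinical scores as a correlation coefficient (e.g., R = 0.58 between TA and recovery rate). The following sketch shows one conventional way such a Pearson coefficient is computed; the TA and recovery values are invented for illustration and are not taken from the cited studies.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: transverse area (mm^2) and postoperative recovery rate (%)
transverse_area = [32, 45, 51, 60, 68, 75, 80]
recovery_rate = [20, 35, 30, 55, 60, 70, 75]
print(round(pearson_r(transverse_area, recovery_rate), 2))
```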
Instruction: Is the national nutrition survey in Japan representative of the entire Japanese population? Abstracts: abstract_id: PUBMED:16039832 Is the national nutrition survey in Japan representative of the entire Japanese population? Objective: We examined the representativeness of the sample used in the National Nutrition Survey in Japan (J-NNS). Methods: The distribution of the J-NNS sample from 1995 to 2000 was compared with that of the national census with respect to gender, age group, place of residence, and number of household members. Results: In the survey sample, as compared with the national census distributions, the proportion of females was 1% to 2% larger, the proportion of individuals who were 20 to 39 y of age was a few percentages smaller, and a slight bias toward rural areas was seen. Among individuals who were 20 to 59 y of age, the proportion of individuals who lived in single-person households was 50% to 66% of that found in the national census. Observed disparities between the sample and the population were minimal except in the case of the proportion of individuals who lived alone. Conclusions: The sample used in the J-NNS is generally representative of the entire Japanese population with the exception of individuals in single-person households. abstract_id: PUBMED:31478392 Nutrition intake among the Japanese elderly: an intergenerational comparison based on national health and nutrition survey scores. Context: The average Japanese lifespan became the longest in the world in 1986. What factors give the Japanese this longevity? Washoku, or the traditional Japanese diet, is respected globally for its nutritionally-balanced and healthy eating habits. This uniquely Japanese way of eating may be one factor that helps extend the Japanese lifespan. Objective: To explain the nutrition intake characteristics of today's Japanese elderly compared with their children's generation and to discuss the association between nutrition intake and various diseases and health issues in the general adult population. Methods: This study compared the characteristics of nutrition status and nutrition intake among today's elderly and their children's generation by using National Health and Nutrition Survey scores. Results: Japanese elderly had high adequacy in all nutrients as well as a high intake of potatoes, pulses, vegetables, fruits, algae, and fish and shellfish compared with their children's generation. Conclusion: Nutrition intake among the Japanese elderly had the characteristics of washoku, but these characteristics were not passed on to the next generation. Extension of the average lifespan and improved health could be achieved by modifying nutrition intake, even after reaching the age of onset of lifestyle- and age-related diseases, typically the 50s. abstract_id: PUBMED:31736456 Diet quality of Japanese adults with respect to age, sex, and income level in the National Health and Nutrition Survey, Japan. Objective: Although several studies in Western countries show that higher socioeconomic status is associated with higher diet quality, no study has observed this association in Japan. In the current study, we examined the association between diet quality and the combinations of age, sex, and household income, and also compared the dietary intake between diet quality levels according to household income. Design: Cross-sectional study. Setting: National Health and Nutrition Survey, Japan in 2014. Participants: 2785 men and 3215 women. 
Results: Higher Japanese Food Guide Spinning Top scores (better diet quality) were observed in older women, especially those with higher household income, whereas lower scores were observed in younger men with lower household income. Those having low quality diet, especially in low income households, had higher odds of not meeting the recommended amounts of the Japanese dietary guidelines, than those having high quality diet. Conclusions: Diet quality in Japanese adults differed by age and sex as well as by household income level. A different approach to diet quality improvement is needed according to population characteristics including not only age and sex but also social economic status. abstract_id: PUBMED:32513347 Trends in BMI among elderly Japanese population: findings from 1973 to 2016 Japan National Health and Nutrition Survey. Objective: To examine 44-year trends in height, weight and BMI, and the prevalence of overweight/obesity and underweight in the elderly Japanese population because Japan, the country with the longest life expectancy, has the highest ageing population. Design: Using the nationally representative cumulative data from the Japan National Health and Nutrition Survey 1973-2016, sex-specific and age-adjusted mean height, weight and BMI, and the prevalence of overweight/obesity and underweight were calculated for each year. Trend analyses were performed using the Joinpoint Regression Program. Next, changes in height, weight and BMI, and the prevalence of overweight/obesity and underweight were estimated for each age group (65-69, 70-79 and &gt;80 years) according to the birth year groups. Setting: Japan. Participants: Individuals aged ≥65 years with complete data on height and body weight measurements (N 94 508). Results: Trends in average height and weight increased over the four decades in both men and women. The prevalence of overweight/obesity dramatically increased and nearly tripled in men, accompanied with significantly decreased underweight prevalence. Furthermore, the increasing trend in average BMI and overweight/obesity prevalence in women reached a peak in 2002 and showed a decreasing trend thereafter, while a downward trend in underweight prevalence showed a gradual increase in women since 2003. Conclusions: Trends in height, weight and BMI among the elderly population should continue to be closely monitored in parallel with the effects of dietary changes, energy intake and physical activity (step counts). abstract_id: PUBMED:8800293 National Nutrition Survey in Japan. The National Nutrition Survey in Japan has played an important role for nutrition policy in this country, by monitoring nutrition and health status of the Japanese since 1946. During those 50 years, diet of the Japanese has remarkably changed, with the proportion of fat intake in total energy being more than 25% and salt intake gradually decreased. The aim of this survey also has shifted from the policy making for food supplies to monitoring the over intake for prevention of diet-related chronic diseases and health promotion. abstract_id: PUBMED:32162042 Association between free sugars intake and nutrient dilution among Japanese adults: the 2016 National Health and Nutrition Survey, Japan. Purpose: The association between free sugars and nutrient intake is unclear in Japanese adults with relatively low free sugars intake. 
This cross-sectional study aimed to elucidate this relationship using data from the 2016 National Health and Nutrition Survey, Japan, and according to the current World Health Organization (WHO) guidelines for the prevention of nutrient dilution [&lt; 5% or &lt; 10% of energy (%E)]. Methods: Dietary intake of Japanese adults (aged ≥ 20 years; n = 16,652) was assessed using a 1-day weighed dietary record. Mean intakes for selected nutrients and food groups were compared among categories of free sugars intake (i.e., &lt; 2.5%E, 2.5 to &lt; 5%E, 5 to &lt; 10%E, and ≥ 10%E) by adjusting for sex, age, weight status, smoking status, habitual drinking, and occupation. Results: Free sugars intake was inversely associated with the intake of 16 of 24 micronutrients investigated. Compared to its lower categories of free sugars intake, significant reductions in intake were identified for almost all micronutrients at ≥ 10%E, whereas ten micronutrients were reduced at 5 to &lt; 10%E. The intake of dietary fibre, sodium, potassium, calcium, and iron was lower at &lt; 2.5%E than at 2.5 to &lt; 5%E and/or 5 to &lt; 10%E. Free sugars intake was also positively associated with sugars and jams, confectionaries, fruit and vegetable juices, and soft drinks and inversely with pulses and nuts and vegetables. Conclusions: This study identified nutrient dilution among Japanese adults with higher free sugars intake and confirmed the significance of the WHO guidelines for preventing nutrient dilution in Japanese. abstract_id: PUBMED:30837027 Dietary patterns and abnormal glucose tolerance among Japanese: findings from the National Health and Nutrition Survey, 2012. Objective: Previous studies have associated dietary patterns with diabetes risk in Western countries, but such studies among the Japanese population are scarce. The present study aimed to investigate dietary patterns associated with abnormal glucose tolerance determined by elevated glycated Hb (HbA1c) levels. Design: The 2012 National Health and Nutrition Survey (NHNS) database was used for the cross-sectional study. Dietary patterns were analysed by factor analysis of twenty-five food items from the dietary intake survey and household-based semi-weighed dietary records. OR and 95 % CI for elevated HbA1c levels (≥6·5 %) according to dietary patterns were estimated using logistic regression models. Setting: Japan. Participants: The study population comprised 9550 Japanese aged ≥40 years registered in the nationwide NHNS. Results: Three dietary patterns were identified: (i) high-bread and low-rice; (ii) high-meat and low-fish; and (iii) vegetable. The high-bread and low-rice pattern, characterised by high frequent consumption of bread, milk and dairy products, and fruits, and low rice intake, was associated with marginally decreased prevalence of elevated HbA1c levels (Ptrend=0·047). The vegetable pattern, characterised by vegetables, mushrooms, soyabeans and soybean products, was significantly inversely associated with elevated HbA1c levels (4th v. 1st quartile: multivariable OR=0·68; 95 % CI 0·49, 0·95; Ptrend=0·007). Conclusions: Our findings suggest that the vegetable pattern is associated with decreased prevalence of elevated HbA1c levels among Japanese. abstract_id: PUBMED:34776478 A Validation Study of the National Database of Health Insurance Claims and Specific Health Checkups of Japan Regarding the Annual Number of Tooth Extractions Performed Across the Entire Japanese Population. 
This report describes a validation study of data in the National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB) obtained by nationwide surveys on tooth extractions. The following 3 data sources on tooth extractions in Japan were compared: (1) the Nationwide Survey of the Reasons for Permanent Tooth Extractions in Japan (a previous survey conducted by the present authors); (2) the Statistics on Medical Care Activities in Public Health Insurance; and (3) the 4th NDB Open Data Japan. Source 1 was a nationwide survey conducted in 2018; sources 2 and 3 comprised data that are freely available for use by the public. In Source 1, 2,345 of 5,250 dentists approached responded to the questionnaire (recovery rate: 44.8%). The number of extracted teeth among younger age groups (aged 25-50 years) reported in Source 1 was lower than that in the other two sources. In contrast, the number of extracted teeth among older age groups reported in Source 1 was higher than that reported in the other two sources. However, when stratified by age group, all differences across the 3 sources regarding the mean annual number of tooth extractions were &lt;0.05 teeth/year. The present results suggest that the NDB is a reliable resource for data on tooth extractions performed across the entire Japanese population. abstract_id: PUBMED:29356253 Current protein and amino acid intakes among Japanese people: Analysis of the 2012 National Health and Nutrition Survey. Aim: To evaluate protein and amino acid intakes and distribution among Japanese people in terms of prevention of sarcopenia or frailty. Methods: We carried out a cross-sectional study using data from Japan's 2012 National Health and Nutrition Survey. From the original dataset, we selected individuals aged ≥30 years, who were not pregnant or lactating, and for whom dietary intake, bodyweight and body height data were available. Protein intake was calculated using exclusive software developed for the survey. Amino acid intake was calculated using the database created by the National Institute for Longevity Sciences. Results: More than 95% of the participants met recommended levels of protein suggested by the Dietary Reference Intake in Japan. However, nearly half of the participants did not meet recommended levels in terms of prevention of sarcopenia. Leucine intake was not sufficient for each sex and age group, probably because of decreased intake of animal protein, especially protein from milk and dairy products. Distribution of protein and leucine in each meal tended toward dinner. Conclusions: The Japanese population might have insufficient quality, quantity and distribution of protein intake required to prevent sarcopenia or frailty. Geriatr Gerontol Int 2018; 18: 723-731. abstract_id: PUBMED:33255814 Association of Free Sugars Intake with Cardiometabolic Risk Factors among Japanese Adults: The 2016 National Health and Nutrition Survey, Japan. The relationship between free sugars intake and cardiometabolic risk factors is unclear in Japanese adults. This cross-sectional study aimed to investigate this association using data from the 2016 National Health and Nutrition Survey, Japan. The percentage of energy intake from free sugars was estimated based on the 1-day weighed dietary record data of Japanese men (n = 4071) and women (n = 5794) aged ≥ 20 years. 
Associations between free sugars intake and cardiometabolic risk factors, including body mass index (BMI), waist circumference (WC), systolic and diastolic blood pressures, glycated haemoglobin (HbA1c) level and levels of serum total, low-density lipoprotein (LDL), and high-density lipoprotein (HDL) cholesterol, were investigated using linear regression and Dunnett's test, with the lowest category of quartiles as a reference. After adjustment for potential confounding factors, free sugars intake was inversely associated with blood pressures (men only) and HDL-cholesterol level (both sexes) and positively associated with total-cholesterol level (women only) and LDL-cholesterol level (both sexes), whereas no association was observed for BMI, WC, and HbA1c level. This study identified both positive and inverse associations of free sugars intake with cardiometabolic risk factors in Japanese adults. Answer: The National Nutrition Survey in Japan (J-NNS) is generally representative of the entire Japanese population, with some exceptions. The distribution of the J-NNS sample from 1995 to 2000 was compared with that of the national census with respect to gender, age group, place of residence, and number of household members. The results showed that the proportion of females in the survey sample was 1% to 2% larger, the proportion of individuals aged 20 to 39 years was a few percentages smaller, and there was a slight bias toward rural areas. Among individuals aged 20 to 59 years, the proportion of individuals who lived in single-person households was 50% to 66% of that found in the national census. The observed disparities between the sample and the population were minimal except in the case of the proportion of individuals who lived alone. Therefore, the sample used in the J-NNS is generally representative of the entire Japanese population with the exception of individuals in single-person households (PUBMED:16039832).
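The representativeness check described in PUBMED:16039832 is, at its core, a comparison of the survey sample's demographic distribution with the national census distribution. A minimal sketch of that comparison is below; the percentages are placeholders chosen only to mirror the direction of the reported gaps, not the survey's actual figures.

```python
# Hypothetical shares (%) of each demographic group in the survey sample vs. the census
sample = {"female": 53.0, "age 20-39": 28.0, "single-person household": 6.0}
census = {"female": 51.5, "age 20-39": 31.0, "single-person household": 11.0}

for group in sample:
    diff = sample[group] - census[group]   # gap in percentage points
    ratio = sample[group] / census[group]  # coverage relative to the census share
    print(f"{group}: {diff:+.1f} pp, {ratio:.2f} of census share")
```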
Instruction: Can brain natriuretic peptide predict outcome after cardiac arrest? Abstracts: abstract_id: PUBMED:17451863 Can brain natriuretic peptide predict outcome after cardiac arrest? An observational study. Background: No accurate, independent biomarker has been identified that could reliably predict neurological outcome early after cardiac arrest. We speculated that brain natriuretic peptide (BNP) measured at hospital admission may predict patient outcome. Methods: BNP-levels were measured in 155 comatose cardiac arrest survivors (108 male, 58 years [IQR 49-68]) (median time to ROSC 11min; IQR 20-30) during a 6-year study period. Cardiovascular co-morbidities and resuscitation history were assessed according to the Utstein-style and patients were followed for 6-month neurological outcome measured by cerebral performance category (CPC) and survival. Results: Seventy patients (45%) suffered from unfavourable neurological outcome and 79 deaths (51%) occurred during the first 6 months. BNP was significantly associated with an adverse neurological outcome and mortality, independent of the prearrest health condition and cardiac arrest characteristics (median 60 pg/ml; IQR 10-230). Adjusted odds ratios for poor neurological outcome at 6 months were 1.14 (95% CI 0.51-2.53), 1.76 (95% CI 0.80-3.88) and 2.25 (95% CI 1.05-4.81), for increasing quartiles of BNP as compared to the lowest quartile. Adjusted odds ratios for mortality until 6 months were 1.09 (95% CI 0.35-3.40), 2.81 (0.80-9.90) and 4.7 (1.27-17.35) compared to the lowest quartile, respectively. Conclusion: Brain natriuretic peptide levels on admission predict neurological outcome at 6 months and survival after cardiac arrest. abstract_id: PUBMED:20362714 Ischemia-modified albumin levels predict long-term outcome in patients with acute myocardial infarction. The French Nationwide OPERA study. Background: Little is known about the capacity of ischemia-modified albumin (IMA) plasma concentration to predict long-term cardiac outcome in patients with established acute myocardial infarction (AMI). Because IMA is a marker of ischemia rather than myocardial cell damage, we hypothesized that IMA plasma levels could provide additional prognostic value to classic clinical and biological risk markers in patients with AMI. Therefore, we investigated the predictive value of plasma IMA in patients with AMI enrolled in the French Nationwide OPERA study. Methods: Plasma concentrations of IMA and other cardiac biomarkers (troponin, C-reactive protein, B-type natriuretic peptide) were measured within 24 hours of hospital admission in 471 patients hospitalized with an AMI (defined using European Society of Cardiology/American College of Cardiology criteria). Patients' characteristics, cardiovascular risk factors and treatments, and clinical outcomes were recorded. Univariate and multivariable predictors of cardiac outcome in-hospital and at 1 year were identified. Results: The primary composite end point (death, resuscitated cardiac arrest, recurrent myocardial infarction or ischemia, heart failure, stroke) occurred in 75 (15.6%) patients in-hospital and in 144 (30.6%) at 1 year: 40% of patients in the highest IMA quartile (&gt;104 IU/mL) reached the end point compared with 20% in the lowest (&lt;83 IU/mL) by 1 year. Multivariable logistic regression analysis identified 4 independent predictors of composite end point at 1 year: plasma concentrations of IMA (P = .01), brain natriuretic peptide (P = .001), heart failure (P = .005), and age (P = .003). 
Conclusions: In patients with AMI, IMA measured within 24 hours is a strong and independent predictor of cardiac outcome at 1 year and may help identify those requiring more aggressive medical management. abstract_id: PUBMED:21314784 Relationship between B-type natriuretic peptide and adverse outcome in patients with clinical evidence of sepsis presenting to the emergency department. Objectives: Myocardial dysfunction is an important aspect of sepsis pathophysiology. B-type natriuretic peptide (BNP) is a neurohormone released from the ventricles in response to myocardial stretch and volume overload. The authors hypothesized that an elevated BNP in patients presenting to the emergency department (ED) with suspected sepsis are at increased risk for development of adverse events. Methods: This was a prospective, observational, multicenter cohort study in 10 EDs. Patients were eligible if they were older than 18 years, had two or more systemic inflammatory response syndrome (SIRS) criteria, and had suspected infection or a serum lactate level &gt; 2.5 mmol/L. Patients were excluded if they were pregnant, had do-not-attempt-resuscitation status, sustained a cardiac arrest prior to hospital arrival, had known chronic renal insufficiency, or were on dialysis. BNP levels were obtained at arrival. The primary outcome was a composite of severe sepsis, septic shock within 72 hours, or in-hospital mortality. Results: There were 825 patients enrolled (mean ± standard deviation [SD] age = 53.5 ± 19.6 years; 51% were female and 37% were African American). The area under the curve (AUC) for BNP to predict the triple composite outcome was 0.69, and the optimal cut-point of BNP was 49 pg/mL. Patients with a BNP &gt; 49 pg/mL had a greater mortality rate (11.6% vs. 2.1%; p = 0.0001), a greater risk of development of severe sepsis (67.7% vs. 36.8%; p = 0.0001) and septic shock (51.7% vs. 26.4%; p = 0.0001), and a higher rate of the triple composite outcome (69% vs. 37%; unadjusted odds ratio [OR] = 1.9, 95% confidence interval [CI] = 1.6 to 2.1; p &lt; 0.001). The sensitivity was 63% (95% CI = 58% to 67%), specificity was 69% (95% CI = 65% to 73%), negative predictive value (NPV) was 63% (95% CI = 58% to 67%), and positive predictive value (PPV) was 69% (95% CI = 65% to 74%). In multivariate modeling, after adjusting for age, sex, heart rate, white blood cell count, and creatinine, an elevated BNP was associated with increased odds of having the composite outcome. The outcome was similar in the subset of patients who did not have severe sepsis or septic shock upon arrival. Conclusions: In patients who present to the ED with SIRS criteria and suspected infection, an elevated BNP is associated with a worse prognosis but has limited diagnostic utility. abstract_id: PUBMED:34620086 A machine learning model to predict critical care outcomes in patient with chest pain visiting the emergency department. Background: Currently, the risk stratification of critically ill patient with chest pain is a challenge. We aimed to use machine learning approach to predict the critical care outcomes in patients with chest pain, and simultaneously compare its performance with HEART, GRACE, and TIMI scores. Methods: This was a retrospective, case-control study in patients with acute non-traumatic chest pain who presented to the emergency department (ED) between January 2017 and December 2019. The outcomes included cardiac arrest, transfer to ICU, and death during treatment in ED. 
In the randomly sampled training set (70%), a LASSO regression model was developed, and presented with nomogram. The performance was measured in both training set (70% participants) and testing set (30% participants), and findings were compared with the three widely used scores. Results: We proposed a LASSO regression model incorporating mode of arrival, reperfusion therapy, Killip class, systolic BP, serum creatinine, creatine kinase-MB, and brain natriuretic peptide as independent predictors of critical care outcomes in patients with chest pain. Our model significantly outperformed the HEART, GRACE, TIMI score with AUC of 0.953 (95%CI: 0.922-0.984), 0.754 (95%CI: 0.675-0.832), 0.747 (95%CI: 0.664-0.829), 0.735 (95%CI: 0.655-0.815), respectively. Consistently, our model demonstrated better outcomes regarding the metrics of accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score. Similarly, the decision curve analysis elucidated a greater net benefit of our model over the full ranges of clinical thresholds. Conclusion: We present an accurate model for predicting the critical care outcomes in patients with chest pain, and provide substantial support to its application as a decision-making tool in ED. abstract_id: PUBMED:37583382 Emergency Department Point-of-Care Tests during Cardiopulmonary Resuscitation to Predict Cardiac Arrest Outcomes. Introduction: This study evaluated the role of point-of-care tests (POCT) such as blood lactate, anion gap (AG), base deficit, pH, N-terminal pro B-type natriuretic peptide (NT-proBNP), and troponin as the predictors of cardiac arrest outcomes in the emergency department (ED). Methods: We conducted a prospective, observational study in the ED of a tertiary care hospital in India. All the adult patients who received cardiopulmonary resuscitation (CPR) in the ED were included in the study. Blood samples were collected within 10 min of initiation of CPR for assay of POCTs. Outcomes assessed were the return of spontaneous circulation (ROSC), 24-h survival, survival to hospital discharge (STHD), survival at 7 days, and favorable neurological outcome (FNO) at day 7 of admission. Results: One hundred and fifty-one patients were included in the study (median age: 50 years, 65% males). Out of 151 cases, ROSC, survival at 7 days, STHD, and FNO was observed in 86 patients, six patients, five patients, and two patients, respectively. "No-ROSC" could be significantly predicted by raised lactate (odds ratio [OR]: 1.14, 95% confidence interval: 1.07-1.22) and NT-proBNP (OR: 1.05, 1.01-1.09) values at the time of cardiac arrest. "24-h mortality" could be significantly predicted by the raised lactate (OR: 1.14, 1.01-1.28), low arterial pH (OR: 0.05, 0.01-0.52), raised AG (OR: 1.08, 1.01-1.15), and lower base deficit (&lt;-15) (OR: 1.07, 1.01-1.14). None of the other POCTs was found to be a predictor of other cardiac arrest outcomes. Conclusion: Among various POCTs, raised lactate assayed within 10 min of cardiac arrest can predict poor outcomes like "no-ROSC" and 24-h mortality. abstract_id: PUBMED:25308138 Predictive value of the heart-type fatty acid-binding protein and the Pulmonary Embolism Severity Index in patients with acute pulmonary embolism in the emergency department. 
Objectives: Heart-type fatty acid-binding protein (h-FABP), sensitive troponins, natriuretic peptides, and clinical scores such as the Pulmonary Embolism Severity Index (PESI) are candidates for risk stratification of patients with acute pulmonary embolism (PE). The aim was to compare their respective prognostic values to predict an adverse outcome at 1 month. Methods: The authors prospectively included 132 consecutive patients with confirmed acute PE. On admission to the emergency department (ED), plasma concentrations of h-FABP, sensitive cardiac troponin I-Ultra (cTnI-Ultra), and brain natriuretic peptide (BNP) were measured and the PESI calculated in all patients. The combined 30-day outcomes of interest were death, cardiac arrest, mechanical ventilation, use of catecholamines, and recurrence of acute PE. Results: During the first 30 days, 14 (10.6%) patients suffered complications. Among the biomarkers, h-FABP above 6 μg/L was a stronger predictor of an unfavorable outcome (odds ratio [OR] = 17.5, 95% confidence interval [CI] = 4.2 to 73.3) than BNP &gt; 100 pg/mL (OR = 5.7, 95% CI = 1.6 to 20.4) or cTnI-Ultra &gt; 0.05 μg/L (OR = 3.4, 95% CI = 1.1 to 10.9). The PESI classified 83 of 118 patients (70.3%) with favorable outcomes and only one of 14 (7%) with adverse outcomes in low class I or II (OR = 30.8, 95% CI = 3.2 to 299.7). The areas under the receiver operating characteristic (ROC) curves (AUCs) were 0.90 (95% CI = 0.81 to 0.98) for h-FABP, 0.89 (95% CI = 0.82 to 0.96) for PESI, 0.79 (95% CI = 0.67 to 0.90) for BNP, and 0.76 (95% CI = 0.64 to 0.87) for cTnI-Ultra. The combination of h-FABP with PESI was a particularly useful prognostic indicator because none of the 79 patients (59.8%) with h-FABP &lt; 6 ng/mL and PESI class &lt; III had an adverse outcome. Conclusions: h-FABP and the PESI are superior to BNP and cTnI-Ultra as markers for risk stratification of patients with acute PE. The high sensitivity of their combination identified a large number of low-risk patients in the ED. abstract_id: PUBMED:19766754 Relation of high concentrations of plasma carboxy-terminal telopeptide of collagen type I with outcome in acute myocardial infarction. Acute myocardial infarction (AMI) is associated with matrix metalloproteinase activation. The plasma concentrations of carboxy-terminal telopeptide of collagen type I (CITP) reflect collagen type I degradation due to matrix metalloproteinase activation. We assessed the role of CITP as an early marker of outcome in AMI. Plasma CITP was measured 72 hours after hospital admission in 432 patients presenting with AMI. The 2 composite end points of the study (death, resuscitated cardiac arrest, recurrent AMI or ischemia, and heart failure or stroke; and death, resuscitated cardiac arrest, or heart failure) and mortality were assessed at 1 year in 4 patient groups stratified by the CITP quartiles. Patients with ST-segment elevation MI represented 75.7% of the population. In-hospital percutaneous coronary intervention was performed in 70.4% of the patients. The mean left ventricular ejection fraction was 53.9 +/- 12.5%. At 1 year of follow-up, high levels of CITP were associated with the occurrence of both composite end points and mortality (p &lt;0.01 for all). 
Stepwise logistic regression analysis identified CITP as an independent predictor of both composite end points (odds ratio 2.14, 95% confidence interval 1.34 to 3.42, p = 0.001; and odds ratio 3.19, 95% confidence interval 1.50 to 6.81, p = 0.003), along with the Killip class and brain natriuretic peptide levels. In conclusion, high hospital levels of CITP, a marker of collagen degradation and ventricular remodeling, are associated with late mortality and other serious clinical events after AMI. abstract_id: PUBMED:17322638 Resuscitative value of B-type natriuretic peptide in comatose survivors treated with hypothermia after out-of-hospital cardiac arrest due to cardiac causes. Background: Two randomized studies have shown a neurological benefit of therapeutic hypothermia in comatose survivors after out-of-hospital cardiac arrest, but there are no studies of the cardiac neurohormone of B-type natriuretic peptide (BNP) in patients treated with hypothermia. Methods And Results: A prospective study was conducted of 109 comatose patients who were treated with mild hypothermia after out-of-hospital sudden cardiac arrest due to cardiac causes and whose BNP level was measured on arrival at the emergency room. The primary endpoint was a favorable neurological outcome at the time of hospital discharge. A total of 45 of the 109 patients had a favorable neurological outcome. The unadjusted rate of a favorable neurological outcome decreased in a stepwise fashion among patients in increasing quartiles of BNP level (p&lt;0.001) and this association remained significant in subgroups of patients. The BNP cutoff value of 80 pg/ml for a favorable neurological outcome had an accuracy of 87.2%. In the multiple logistic-regression analysis, a BNP level of 80 pg/ml or less was an independent predictor of favorable neurological outcome. Conclusions: The measurement of BNP was found to provide valuable information regarding the neurological outcome of comatose survivors treated with mild hypothermia after out-of-hospital cardiac arrest due to cardiac causes. abstract_id: PUBMED:28597791 Value of latent outflow obstruction to predict clinical course of patients with hypertrophic cardiomyopathy. Objective The frequency, significance and prognostic value of left ventricle obstruction (LVO) induced with provocation (latent LVO) is controversial for hypertrophic cardiomyopathy (HC) patients. This study was designed to assess the value of latent LVO in predicting the clinical course in 101 patients with HC. Methods and results Patients were followed for a mean of 82 ± 48 months (range 2 to 148 months) for clinical end points defined as a composite of cardiovascular death resuscitated cardiac arrest, appropriate defibrillator shock or hospitalization due to worsening of heart failure symptoms. Presence of LVO (hazard ratio 3.63; 95% confidence interval, 1.85 to 7.12; P = 0.0001) and log NT-proBNP levels (hazard ratio, 1.40; 95% confidence interval, 1.14 to 1.72; P = 0.001) were the independent variables associated with an increased risk of experiencing clinical end points. HC patients with latent LVO have a trend toward decreased survival when compared with HC patients without LVO (log rank P = 0.027), but better survival than patients with resting LVO (log rank P = 0.007). HC patients with NT-proBNP levels &lt;1,000 pg/ml had also better survival. LVO and NT-proBNP levels are the major determinants of clinical end points in patients with HC. 
Conclusions Evaluation of patients without resting LVO to demonstrate latent obstruction is of critical importance in respect of outcome and selection of patients for septal reduction therapies, so routine provocative testing with physiological exercise and measurement of NT-proBNP is recommended in this patient population for risk stratification. abstract_id: PUBMED:15474693 Comparison of echocardiography and plasma B-type natriuretic peptide for monitoring the response to treatment in acute heart failure. Aims: Comparison of the value of echocardiography and B-type natriuretic peptide (BNP) in monitoring response to treatment in patients admitted for acute heart failure (HF). Methods And Results: Ninety-five consecutive patients admitted with acute HF underwent bedside Doppler echocardiography and BNP measurements on admission, after 24 h of intravenous treatment, and at day 7. We then studied the association between the clinical status, the Doppler echocardiographic findings, the BNP measurements and subsequent 60-day adverse outcome (death, resuscitated cardiac arrest, urgent heart transplantation, readmission). On admission and during hospitalisation, relationships were found between plasma BNP and Doppler echocardiographic findings, and between their changes. During a 60 day follow-up, 37 events occurred. Multivariable analysis taking into account clinical factors, Doppler echocardiography and BNP showed that the two best models to predict outcome were (1) early evaluation at day 2 (previous CHF treatment, dobutamine use, relative BNP change during first 24 h) and (2) late evaluation at day 7 (previous CHF treatment, dobutamine use, BNP at day 7). Patients with a decrease in plasma BNP &gt;10% at day 2, or with plasma BNP &lt;300 pg/ml at day 7 had a better outcome than the others (19% versus 65% and 16% versus 72% events, respectively, p&lt;0.0001). Conclusions: Serial BNP measurements during the treatment of acute HF provide incremental prognostic information over clinical presentation and repetitive echocardiographic examination. Answer: Yes, brain natriuretic peptide (BNP) can predict outcome after cardiac arrest. An observational study found that BNP levels measured at hospital admission were significantly associated with adverse neurological outcome and mortality after cardiac arrest, independent of prearrest health condition and cardiac arrest characteristics. The study reported that higher quartiles of BNP were associated with increased adjusted odds ratios for poor neurological outcome at 6 months and for mortality until 6 months compared to the lowest quartile (PUBMED:17451863). Additionally, in comatose survivors treated with hypothermia after out-of-hospital cardiac arrest due to cardiac causes, a BNP level of 80 pg/ml or less was an independent predictor of favorable neurological outcome (PUBMED:17322638). These findings suggest that BNP is a valuable biomarker for predicting outcomes in patients following cardiac arrest.
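Two of the abstracts above treat BNP as a dichotomized test (a 49 pg/mL cut-point in suspected sepsis; an 80 pg/ml cut-off for favourable neurological outcome after cardiac arrest) and report sensitivity, specificity, predictive values and accuracy. The sketch below shows how those quantities follow from a 2x2 classification of patients at a chosen threshold; the BNP values and outcome flags are fabricated solely to make the arithmetic concrete and do not come from the cited cohorts.

```python
def threshold_metrics(values, poor_outcome, cutoff):
    """Call a patient test-positive when BNP > cutoff, then tabulate
    sensitivity, specificity, PPV, NPV and accuracy against the outcome."""
    tp = sum(1 for v, o in zip(values, poor_outcome) if v > cutoff and o)
    fp = sum(1 for v, o in zip(values, poor_outcome) if v > cutoff and not o)
    fn = sum(1 for v, o in zip(values, poor_outcome) if v <= cutoff and o)
    tn = sum(1 for v, o in zip(values, poor_outcome) if v <= cutoff and not o)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical admission BNP values (pg/ml) and poor-outcome flags
bnp = [20, 45, 60, 95, 150, 300, 75, 10, 220, 40]
poor = [False, False, True, True, True, True, False, False, True, False]
print(threshold_metrics(bnp, poor, cutoff=80))
```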
Instruction: Are biochemical values predictive of adenoma's weight in primary hyperparathyroidism? Abstracts: abstract_id: PUBMED:17007616 Are biochemical values predictive of adenoma's weight in primary hyperparathyroidism? Background: Preoperative laboratory findings may carry some predictive value about the size of the abnormal parathyroid tissue that needs to be removed in primary hyperparathyroidism. Methods: In a retrospective study from 1988 to 2003, records of 71 patients with parathyroid adenoma were reviewed. The correlation between preoperative serum calcium, phosphate and parathyroid hormone (PTH) with adenoma's weight was analysed separately. Results: There was a significant correlation between preoperative serum PTH and calcium with adenoma's weight (P < 0.001 and P = 0.03, respectively). The correlation between preoperative serum phosphate and adenoma's weight was not significant (P = 0.1). Conclusion: Preoperative PTH level cannot be used as a definite guide to the parathyroid adenoma's weight. Large parathyroid adenomas seem to secrete less PTH per unit weight than small adenomas. Calcium and phosphate do not seem to be of much value in predicting adenoma's weight in primary hyperparathyroidism. abstract_id: PUBMED:24426402 Correlation of biochemical markers of primary hyperparathyroidism with single adenoma weight and volume. Preoperative laboratory markers of primary hyperparathyroidism including serum parathormone (PTH), calcium and phosphate level may have some predictive value about the size and volume of the abnormal parathyroid gland tissue which needs to be resected in primary hyperparathyroidism. In a prospective study from 2003 to 2010, 69 patients with parathyroid adenoma were enrolled. The correlation between preoperative serum PTH, calcium and phosphate level with adenoma's weight and volume was analyzed separately. Adenoma volume was calculated via an equation for the volume of a spheroid object. The data were analyzed via a multiple analysis of variance, and a correlation coefficient was calculated. The level of significance was set at p ≤ .05. There was a significant correlation between adenoma weight and serum calcium and parathormone levels (p = .0001 and p = .0001, respectively). There was no significant correlation between adenoma weight and serum phosphate. With respect to adenoma weight, there was a significant relationship with parathormone levels and serum calcium (p = .0001 and p = .0001, respectively). There was no significant relationship between serum phosphate and adenoma weight. Preoperative serum PTH and calcium levels may be valuable in predicting parathyroid adenoma volume and weight in primary hyperparathyroidism for a single adenoma. abstract_id: PUBMED:34353709 Association of biochemical and clinical parameters with parathyroid adenoma weight. Turkish-Bulgarian endocrine and breast surgery study group, hyperparathyroidism registry study. Background: Primary hyperparathyroidism (pHPT) caused by a single benign parathyroid adenoma is a common endocrine disorder that is affected by regional differences. Living in different geographical regions reveals differences in the laboratory results and pathological findings, but studies on this subject are not sufficient. The article focuses on biochemical and pathological effects of geographical differences in parathyroid adenoma. In addition, the present study seeks to elaborate on treatment methods and effectiveness of screening in geographical area of Bulgaria and Turkey.
Method: In this prospective study, 159 patients were included from 16 centres. Demographic characteristics, symptoms, biochemical markers and pathologic characteristics were analysed and compared between 8 different regions. Results: Patients from Turkish Black Sea had the highest median serum calcium (Ca) level, whereas patients from Eastern Turkey had the lowest median serum phosphorus (P) level. On the other hand, there was no significant difference between Ca, parathormone (PTH) and P levels according to regions. Patients from Eastern Turkey had the highest adenoma weight, while patients from Bulgaria had the lowest adenoma weight. The weight of adenoma showed statistically significant differences between regions (p < 0.001). There was a correlation between adenoma weight and serum PTH level (p = 0.05) and Ca level (p = 0.035). Conclusion: This study has provided a deeper insight into the effect of the regional differences upon clinicopathological changes and biochemical values of pHPT patients with adenoma. Awareness of regional differences will assist in biochemical screening and treatment of this patient group. abstract_id: PUBMED:17550505 Primary hyperparathyroidism: do perioperative biochemical variables correlate with parathyroid adenoma weight or volume? Objectives: To investigate the relationship between clinical variables and biochemical markers of calcium homeostasis and parathyroid adenoma size, in patients undergoing surgery for primary hyperparathyroidism. To determine the potential clinical utility of the findings in preoperative planning and prediction of postoperative hypocalcaemia. Design: Retrospective data analysis. Participants: Ninety-two patients treated surgically for primary hyperparathyroidism caused by a solitary parathyroid adenoma between 1996 and 2006. Main Outcome Measures: Complete data was obtained for 77 participants including patient demographics, perioperative calcium, parathyroid hormone and phosphate levels, adenoma weight and histological dimensions. Multiple and binary logistic regression analyses were used to investigate the relationship between clinical and biochemical variables and adenoma weight and volume. Similar analysis was used to identify predictors of postoperative hypocalcaemia. Setting: University Teaching Hospital Otolaryngology Department. Results: The mean age at presentation was 59 years. The mean preoperative calcium and PTH levels were 2.86 +/- 0.2 mmol/L and 17.2 +/- 12.3 ng/L, respectively, falling to 2.3 +/- 0.01 and 4.1 +/- 2.7 postoperatively. The mean adenoma weight was 1.71 +/- 1.41 g. No correlation existed between clinical variables and preoperative biochemical markers of calcium homeostasis and adenoma weight or volume. There was a weak correlation between the preoperative to postoperative change in calcium and PTH levels ([ΔCa2+] vs. [ΔPTH]) (r = 0.24; P < 0.05) but no predictors of postoperative hypocalcaemia could be identified. Conclusions: Biochemical parameters will not accurately predict the size of a parathyroid adenoma. The size of a parathyroid adenoma does not correlate with postoperative calcium levels and is therefore not useful as a predictor of postoperative hypocalcaemia. abstract_id: PUBMED:31699571 Can we predict expected adenoma weight preoperatively with reference to the correlation of preoperative biochemical tests with parathyroid adenoma weight?
Background: Primary hyperparathyroidism is a prevalent disease with proven benefits for appropriately selected patients who undergo parathyroidectomy. The ability to accurately predict expected single adenoma gland weight as the cause based on preoperative biochemical tests could improve cure rates in a minimally invasive approach. Objective: To assess the correlation between parathyroid weight and preoperative parathyroid hormone and calcium levels in patients with primary hyperparathyroidism with a solitary adenoma and determine if these could be used to predict expected parathyroid weight. Methods: Patients with primary hyperparathyroidism who underwent curative parathyroidectomy from 2013 to 2018 were retrospectively analysed. Results: There is a strong positive correlation r = 0.602 between preoperative PTH levels and parathyroid weight (p < 0.001). There was a moderate correlation r = 0.474 between preoperative adjusted calcium and parathyroid weight (p < 0.001). An algorithm was developed to calculate predicted weight of a single adenoma but when tested against cases with hyperplasia and double adenomas during the period, the variability of predicted weight meant it was impossible to differentiate between the causes. When predicted parathyroid weight exceeded 1200 mg, however, hyperplasia was excluded and 95% of double adenomas were excluded. Conclusion: There is a strong correlation between preoperative PTH levels and calcium levels with parathyroid weight. The large variability of predicted parathyroid weight however, precludes the use of biochemical tests alone preoperatively in being able to differentiate between a single adenoma, double adenoma or hyperplasia as the cause in primary hyperparathyroidism. At predicted parathyroid weights above 1200 mg, however, all cases of hyperplasia and 95% of double adenomas are excluded. abstract_id: PUBMED:33256695 Correlation of perioperative biochemical variables with single adenoma weight in patients with primary hyperparathyroidism. Background: Single parathyroid adenoma is the main cause of primary hyperparathyroidism (PHPT), with surgery remaining the gold standard for its treatment. The ability to preoperatively predict the parathyroid adenoma size could facilitate the decision about the extent of surgical exploration. It is reasonable to hypothesize that the perioperative levels of PHPT-related variables (i.e. calcium, parathormone, phosphate) may predict the adenoma weight or/and demonstrate whether the adenoma is successfully removed or not. Aim of this study is to explore the relationship between perioperative biochemical values and adenoma weight. Secondarily, we investigated the relationship between adenoma weight and uni-/bilateral neck exploration. Methods: Retrospective study of all patients who underwent surgery for primary hyperparathyroidism due to single adenoma in a tertiary university hospital in Germany during a 6-year period. Following variables were analyzed: preoperative serum calcium, phosphorus and parathormone, intraoperative parathormone before and after adenoma excision, intraoperative PTH decrease, postoperative serum calcium and parathormone (PTHpostop-pg/ml), calcium and PTH decrease. Bivariate correlations were calculated by the Spearman's correlation test at the 95% significance level. Results: A total of 339 patients were included in the study. The median age of the patients was 60 years (range 21-90) and 77% were females. The median adenoma weight was 1 g (range 0.1-11).
Adenoma weight correlated strongly with maximum adenoma diameter (r = 0.72, p < 0.05), moderately with preoperative parathormone (r = 0.44) and parathormone decrease (r = 0.27), whereas there was no correlation with the intraoperative PTH decrease (r = 0.02). There was also a borderline (moderate to weak) correlation with pre- and postoperative calcium levels (r = 0.21 and r = 0.23 respectively) and a negative borderline correlation with phosphorus (r = - 0.21). Patients who required bilateral neck exploration had significantly lighter adenomas (median weight 0.8 g vs 1.1 g, p = 0.005). Conclusions: We conclude that preoperative PTH levels may only serve as an approximate guide to adenoma weight, as direct preoperative prediction is not possible. Serum calcium levels, PTH and calcium decrease correlate only weakly with adenoma weight. Patients who require bilateral neck exploration have significantly (20-25%) lighter adenomas. abstract_id: PUBMED:37021921 Adenoma weight: the only predictive factor for multiple gland disease in primary hyperparathyroidism. Background: Predicting a multiple gland disease (MGD) in primary hyperparathyroidism (pHPT) remains challenging. This study aimed to evaluate predictive factors for MGD. Methods: A retrospective chart review was performed of 1211 patients with histologically confirmed parathyroid adenoma or hyperplasia between 2007-2016. Localization diagnostics, laboratory parameters, and the weight of the resected parathyroid glands were evaluated concerning their predictive value of a multiple-gland disease. Results: A number of 1111 (91.7%) had a single-gland disease (SGD), and 100 (8.3%) a multiple-gland disease (MGD). US and MIBI scans were comparable for either negative or positive adenoma localization and suspected MGD. While the PTH level was similar, the calcium level was higher in SGD (2.8 mmol/L versus 2.76 mmol/L, P=0.034). MGD had a significantly lower gland weight (0.78 g versus 0.31 g; P<0.001). A gland weight of 0.418 grams was a predictive factor for MGD with a sensitivity of 72% and a specificity of 66%. Conclusions: Only the weight of the resected parathyroid adenoma was meaningful in predicting MGD. A cut-off value of 0.418 g can differentiate SGD from MGD. abstract_id: PUBMED:11848628 The predictive value of laboratory findings in patients with primary hyperparathyroidism. Background: Despite extensive preoperative imaging and intraoperative "gadgetry" to facilitate localization of abnormal parathyroid glands, the onus of identification and resection remains with the surgeon in the operating room. We pondered the relative usefulness of routine laboratory studies to the surgeon as predictive guides to the intraoperative findings in patients with primary hyperparathyroidism (HPT). Study Design: Pre- and postoperative laboratory data were analyzed in 166 consecutive patients (1998 to 1999) undergoing successful cervical exploration for sporadic primary HPT and were correlated with the pathologic findings. Patients with secondary HPT, multiple endocrine neoplasia, familial HPT, and parathyroid carcinoma were not included in this study. Results: One hundred eighteen women and 48 men (mean age = 63 years) with parathyroid adenoma (n = 155) and sporadic hyperplasia (n = 11) were evaluated. Mean parathormone (PTH) levels varied little with the pathology: adenoma = 9.6 pmol/L and hyperplasia = 10.2 pmol/L (p > 0.05).
In patients with parathyroid adenoma, analysis of preoperative measures showed a positive correlation (r = 0.48, p < 0.0001) with PTH and gland weight. The correlation appeared to be the strongest in the tails of the distribution; in 22 of 23 (96%) cases with PTH levels lower than 6 pmol/L, the offending lesion or lesions were less than 400 mg; in all six cases with PTH levels higher than 18.0 pmol/L, the abnormal gland or glands weighed more than 800 mg. PTH levels between 6 and 18 pmol/L revealed mean adenoma weight of 757 mg (median = 420 mg; range = 50 to 5,500 mg). Conclusions: Extreme values of PTH in patients with single-gland parathyroid disease alert the surgeon to the likelihood of small or large parathyroid adenomas. Laboratory studies do not differentiate adenoma from hyperplasia, nor do they pinpoint the size of abnormal glands with moderate-range PTH values. abstract_id: PUBMED:35963762 Positive predictive values of ultrasound-guided fine-needle aspiration with parathyroid hormone assay and Tc-99m sestamibi scintigraphy in sporadic primary hyperparathyroidism. Objectives: Ultrasonography (US) and Tc-99m sestamibi scintigraphy (SS) are usually performed as preoperative imaging modalities for patients with sporadic primary hyperparathyroidism (SPHP). Fine-needle aspiration (FNA) under US guidance for parathyroid hormone (PH) assay can enhance the diagnostic accuracy of US. The main aim of the present study was to compare positive predictive values between US-FNA and SS. The secondary aim was to draw up a decision-tree for FNA and SS. Methods: The single-center retrospective study included patients with previously non-operated SPHP. They underwent US-guided FNA, PET-CT SS, and 18F-choline scintigraphy if the first two methods were inconclusive. PH washout level was considered pathological when greater than the serum PH level. Postoperative histology data were correlated to imaging data. Results: In total, 117 patients were included, with a mean age of 64 years (range, 26-89 years). A single adenoma was identified on pathology in 101 patients (89%). FNA findings were pathologic for 64 patients (55%), with a mean PH level of 2,604 ng/L [range, 585-9,074 ng/L], higher than the serum level of 179 ng/L [range, 60-1,000 ng/L]. US-guided FNA showed sensitivity, specificity, and positive and negative predictive values (PPV, NPV) of respectively 72%, 92%, 99% and 28%, compared to 89%, 42%, 93% and 32% for SS. The rate of error in locating the parathyroid gland was 8% for US-guided FNA versus 12% for SS. Comparison of the two methods showed better sensitivity for SS (P=0.0052) and better specificity for FNA (P=0.0143), with no significant difference in PPV or NPV. 18F-choline scintigraphy detected the hyperfunctioning parathyroid in 11 out of the 15 patients. Conclusion: PPV did not significantly differ between FNA and SS. US-guided FNA performed by an experienced operator could be a simple first-line method for more than 50% of patients with non-operated SPHP, with 99% PPV. In case of negative or doubtful FNA findings, SS can be performed in second line. abstract_id: PUBMED:21814399 Delayed serum calcium biochemical response to successful parathyroidectomy in primary hyperparathyroidism. Background: Parathyroidectomy is considered the standard treatment for primary hyperparathyroidism; however, the onset of biochemical response is variable and differs from one patient to another.
Aim: To evaluate the onset of systemic response and the biochemical normalization of serum calcium levels to a successful surgery. Patients And Methods: In a retrospective fashion, we collected clinical data from 303 patients admitted to our hospital between 2005 and 2008, with a diagnosis of primary hyperparathyroidism after sestamibi localization. The pathology reports, parathyroidectomy results, the preoperative and postoperative parathormone (PTH) and serum calcium levels were reviewed. Response of each patient to the surgery was studied and all the data were analyzed to determine how fast the serum calcium levels drop. Results: The majority of patients (72.9%, 221/303) showed a decrease in their serum calcium levels to normal values within 48 h. In 40 patients it took 72 hours, and 42 patients (13.8%) had a delayed normalization for more than 72 h. The pathology in the PH group was predominantly a single adenoma (80.9% vs. 19.1% with hyperplasia, P = 0.03). Preoperative parathyroid hormone (PTH) elevation was not significantly different between the two groups (PH and EN), with a mean of 7.9±5.36 vs. 7.41±14.5 pmol/L respectively, with a P of 0.43. Conclusion: The majority of patients with primary hyperparathyroidism (PHP) respond to parathyroidectomy in the form of normalization of their serum calcium levels and PTH within 48 h; however, a certain group of patients will need more than 3 days. Answer: Biochemical values have been studied to determine if they can predict the weight of an adenoma in primary hyperparathyroidism, with varying results. Several studies have found a significant correlation between preoperative serum parathyroid hormone (PTH) and calcium levels with adenoma's weight (PUBMED:17007616, PUBMED:24426402, PUBMED:34353709, PUBMED:31699571, PUBMED:33256695, PUBMED:37021921). For instance, one study reported a strong positive correlation between preoperative PTH levels and parathyroid weight (PUBMED:31699571), while another found a significant correlation between adenoma weight and serum calcium and PTH levels (PUBMED:24426402). Additionally, a study indicated that the weight of adenoma showed statistically significant differences between regions and correlated with serum PTH level and calcium level (PUBMED:34353709). However, other studies have shown that biochemical parameters may not accurately predict the size of a parathyroid adenoma (PUBMED:17550505). One study concluded that extreme values of PTH in patients with single-gland parathyroid disease alert the surgeon to the likelihood of small or large parathyroid adenomas, but laboratory studies do not pinpoint the size of abnormal glands with moderate-range PTH values (PUBMED:11848628). Another study found that only the weight of the resected parathyroid adenoma was meaningful in predicting multiple gland disease, with a cut-off value of 0.418 g differentiating single-gland disease from multiple-gland disease (PUBMED:37021921). In summary, while there is evidence that preoperative serum PTH and calcium levels may have some predictive value regarding the weight of a parathyroid adenoma in primary hyperparathyroidism, the relationship is not definitive, and the predictive accuracy can vary. Some studies suggest that biochemical tests alone may not be sufficient to differentiate between single adenoma, double adenoma, or hyperplasia as the cause in primary hyperparathyroidism (PUBMED:31699571).
Therefore, while biochemical values can provide some guidance, they should not be solely relied upon for predicting adenoma weight.
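Several of the abstracts above (e.g., PUBMED:31699571, PUBMED:33256695) summarize the PTH-adenoma weight relationship with Pearson or Spearman correlation coefficients. As a minimal sketch of how both coefficients are computed, the Python snippet below uses invented PTH values and adenoma weights; it does not reproduce any study's data, and the simple rank routine ignores ties.

```python
import numpy as np

# Hypothetical preoperative PTH values (pmol/L) and excised adenoma weights (mg);
# illustrative numbers only, not data from the cited studies.
pth = np.array([5.2, 7.8, 9.1, 12.4, 15.0, 18.3, 22.7, 30.5])
weight_mg = np.array([180, 350, 420, 600, 850, 900, 1300, 2100])

# Pearson correlation (linear association).
pearson_r = np.corrcoef(pth, weight_mg)[0, 1]

# Spearman correlation (monotonic association) via ranks.
# Note: this helper does not average tied ranks; fine for tie-free toy data.
def ranks(x):
    order = x.argsort()
    r = np.empty_like(order, dtype=float)
    r[order] = np.arange(1, len(x) + 1)
    return r

spearman_rho = np.corrcoef(ranks(pth), ranks(weight_mg))[0, 1]
print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_rho:.2f}")
```

A monotonic but non-linear relationship (large adenomas secreting less PTH per unit weight, as PUBMED:17007616 suggests) is one reason a rank-based coefficient can differ from the Pearson value.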
Instruction: Measuring disparities in sanitation access: does the measure matter? Abstracts: abstract_id: PUBMED:24851256 Measuring disparities in sanitation access: does the measure matter? Objective: Initiatives to monitor progress in health interventions like sanitation are increasingly focused on disparities in access. We explored three methodological challenges to monitoring changes in sanitation coverage across socio-economic and demographic determinants: (i) confounding by wealth indices including water and sanitation assets, (ii) use of individual urban and rural settings versus national wealth indices and (iii) child-level versus household-level analyses. Methods: Sanitation coverage by wealth for children and households across settings was estimated from recent Demographic and Health Surveys in six low-income countries. Household assignment to wealth quintiles was based on principal components analyses of assets. Concordance in household quintile assignment and estimated distribution of improved sanitation was assessed using two wealth indices differing by inclusion or exclusion of water and sanitation assets and independently derived for each setting. Improved sanitation was estimated using under-five children and households. Results: Wealth indices estimated with water and sanitation assets are highly correlated with indices excluding them but can overstate disparities in sanitation access. Independently derived setting wealth indices highly correlate with setting estimates of coverage using a single national index. Sanitation coverage and quintile disparities were consistently lower in household-level estimates. Conclusions: Standard asset indices provide a reasonably robust measure of disparities in improved sanitation, although overestimation is possible. Separate setting wealth quintiles reveal important disparities in urban areas, but analysis of setting quintiles using a national index is sufficient. Estimates and disparities in household-level coverage of improved sanitation can underestimate coverage for children under five. abstract_id: PUBMED:29470121 Disparities in Distribution of Particulate Matter Emission Sources by Race and Poverty Status. Objectives: To quantify nationwide disparities in the location of particulate matter (PM)-emitting facilities by the characteristics of the surrounding residential population and to illustrate various spatial scales at which to consider such disparities. Methods: We assigned facilities emitting PM in the 2011 National Emissions Inventory to nearby block groups across the 2009 to 2013 American Community Survey population. We calculated the burden from these emissions for racial/ethnic groups and by poverty status. We quantified disparities nationally and for each state and county in the country. Results: For PM of 2.5 micrometers in diameter or less, those in poverty had 1.35 times higher burden than did the overall population, and non-Whites had 1.28 times higher burden. Blacks, specifically, had 1.54 times higher burden than did the overall population. These patterns were relatively unaffected by sensitivity analyses, and disparities held not only nationally but within most states and counties as well. Conclusions: Disparities in burden from PM-emitting facilities exist at multiple geographic scales. Disparities for Blacks are more pronounced than are disparities on the basis of poverty status. Strictly socioeconomic considerations may be insufficient to reduce PM burdens equitably across populations.
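The first abstract above (PUBMED:24851256) assigns households to wealth quintiles via principal components analysis of asset indicators, in the style of Demographic and Health Survey wealth indices. The Python sketch below shows one minimal way such an index can be built; the asset matrix and sanitation indicator are randomly simulated, the asset list is arbitrary, and the sign of the first component is conventionally oriented so that higher scores mean wealthier households.

```python
import numpy as np

# Hypothetical household asset indicators (rows = households, columns = binary
# assets such as electricity, radio, bicycle, improved floor); illustrative only.
rng = np.random.default_rng(0)
assets = rng.integers(0, 2, size=(200, 6)).astype(float)

# Wealth index: first principal component of the standardized asset matrix.
standardized = (assets - assets.mean(axis=0)) / assets.std(axis=0)
# SVD of the standardized data; the first right-singular vector holds the
# asset loadings of the first principal component (its sign is arbitrary).
_, _, vt = np.linalg.svd(standardized, full_matrices=False)
wealth_score = standardized @ vt[0]

# Assign households to wealth quintiles by their score.
quintile_edges = np.quantile(wealth_score, [0.2, 0.4, 0.6, 0.8])
wealth_quintile = np.digitize(wealth_score, quintile_edges) + 1  # 1..5

# Example disparity summary: improved-sanitation coverage by quintile,
# using a hypothetical coverage indicator.
improved_sanitation = rng.integers(0, 2, size=200)
for q in range(1, 6):
    mask = wealth_quintile == q
    print(f"quintile {q}: coverage {improved_sanitation[mask].mean():.2f}")
```

Dropping the water and sanitation columns from `assets` before the decomposition is the kind of index variant the abstract compares when it asks whether including those assets overstates disparities.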
abstract_id: PUBMED:18836391 Measuring disparities in the incidence of sexually transmitted diseases. The Centers for Disease Control and Prevention (CDC) defines a health disparity as a "[health] difference that occurs by gender, race or ethnicity, education or income, disability, geographic location, or sexual orientation." Health equity is achieved by eliminating health disparities or inequalities. Measuring health disparities is a critical first step toward reducing differences in health outcomes. To determine the methods to be used in measuring a health disparity, several decisions must be made, which include: (1) selecting a reference group for the comparison of 2 or more groups; (2) determining whether a disparity should be measured in absolute or in relative terms; (3) opting to measure health outcomes or health indicators expressed as adverse or favorable events; (4) selecting a method to monitor a disparity over time; and (5) choosing to measure a disparity as a pair-wise comparison between 2 groups or in terms of a summary measure of disparity among all groups for a particular characteristic. Different choices may lead to different conclusions about the size and direction of health disparities at a point in time and changes in disparities over time.The objective of this article is to review the methods for measuring health disparities, provide examples of their use, and make specific recommendations for measuring disparities in the incidence of sexually transmitted diseases (STDs). abstract_id: PUBMED:28539190 Water, sanitation, and hygiene in schools: Status and implications of low coverage in Ethiopia, Kenya, Mozambique, Rwanda, Uganda, and Zambia. Adequate access to water, sanitation, and hygiene (WaSH) in schools impacts health, educational outcomes, and gender disparities. Little multi-country research has been published on WaSH in rural schools in Sub-Saharan Africa. In this multi-national cross-sectional WaSH study, we document WaSH access, continuity, quality, quantity, and reliability in 2270 schools that were randomly sampled in rural regions of six Sub-Saharan African countries: Ethiopia, Kenya, Mozambique, Rwanda, Uganda, and Zambia. Data collection included: school WaSH surveys containing internationally established WaSH indicators, direct observation, and field- and laboratory-based microbiological water quality testing. We found 1% of rural schools in Ethiopia and Mozambique to 23% of rural schools in Rwanda had improved water sources on premises, improved sanitation, and water and soap for handwashing. Fewer than 23% of rural schools in the six countries studied met the World Health Organization's recommended student-to-latrine ratios for boys and for girls. Fewer than 20% were observed to have at least four of five recommended menstrual hygiene services (separate-sex latrines with doors and locks, water for use, waste bin). The low access to safe and adequate WaSH services in rural schools suggest opportunities for WaSH interventions that could have substantive impact on health, education, and gender disparities. abstract_id: PUBMED:22059384 Measuring healthcare disparities and racial segregation in Missouri nursing homes. Measuring and, ultimately, addressing disparities in long-term care quality continue to be a challenge. 
Although literature suggests that disparities in healthcare quality exist and nursing homes remain relatively segregated, healthcare professionals and policymakers stand to benefit from improvements in measuring both racial segregation and healthcare disparities. This paper quantifies the relationships between healthcare disparities and racial segregation using the disparities quality index and dissimilarity index. Results suggested that the more segregated the nursing homes, the greater the observed disparities. Multivariate regression analysis indicated that the proportion of Black residents in nursing homes is the variable that best predicts disparities. abstract_id: PUBMED:31244633 Disparities in Diffuse Cortical White Matter Integrity Between Socioeconomic Groups. There is a growing literature demonstrating a link between lower socioeconomic status (SES) and poorer neuroanatomical health, such as smaller total and regional gray and white matter volumes, as well as greater white matter lesion volumes. Little is known, however, about the relation between SES and white matter integrity. Here we examined the relation between SES and white matter integrity of the brain's primary cortical regions, and evaluated potential moderating influences of age and self-identified race. Participants were 192 neurologically intact, community-dwelling African American and White adults (mean age = 52 years; 44% male, 60% White, low SES = 52%) from the Healthy Aging in Neighborhoods of Diversity across the Life Span (HANDLS) SCAN study. Participants underwent 3.0-T cranial magnetic resonance imaging. Diffusion tensor imaging was used to estimate regional fractional anisotropy (FA) to quantify the brain's white matter integrity and trace to capture diffusivity. Multiple regression analyses examined independent and interactive associations of SES, age, and race with FA of the frontal, temporal, parietal, and occipital lobes bilaterally. Sensitivity analyses assessed the influence of several biopsychosocial risk factors on these associations. Exploratory analyses examined these relations with trace and using additional SES indicators. Results indicated there were no significant interactions of SES, age, and race for any region. Individuals with low SES had lower FA in all regions, and higher trace in the right and left frontal, right and left temporal, and left occipital lobes. Findings remained largely unchanged after inclusion of sensitivity variables. Older age was associated with lower FA and greater trace for all regions, except for the right temporal lobe with FA. No main effects were found for race in FA, and Whites had higher trace values in the parietal lobes. Novel findings of this study indicate that relative to the high SES group, low SES was associated with poorer white matter integrity and greater diffusivity. These results may, in part, reflect exposures to various biopsychosocial risk factors experienced by those of lower SES across the lifespan, and may help explain the preponderance of cognitive and functional disparities between socioeconomic groups. abstract_id: PUBMED:32816545 Water and Sanitation in Urban America, 2017-2019. Objectives. To estimate the population lacking at least basic water and sanitation access in the urban United States. Methods.
We compared national estimates of water and sanitation access from the World Health Organization/United Nations Children's Fund Joint Monitoring Program with estimates from the US Department of Housing and Urban Development on homelessness and the American Community Survey on household water and sanitation facilities. Results. We estimated that at least 930 000 persons in US cities lacked sustained access to at least basic sanitation and 610 000 to at least basic water access, as defined by the United Nations. Conclusions. After accounting for those experiencing homelessness and substandard housing, our estimate of people lacking at least basic water equaled current estimates (n = 610 000)-without considering water quality-and greatly exceeded estimates of sanitation access (n = 28 000). Public Health Implications. Methods to estimate water and sanitation access in the United States should include people experiencing homelessness and other low-income groups, and specific policies are needed to reduce disparities in urban sanitation. We recommend similar estimation efforts for other high-income countries currently reported as having near universal sanitation access. abstract_id: PUBMED:18605547 Water and nonwater-related challenges of achieving global sanitation coverage. Improved sanitation is considered equally important for public health as is access to improved drinking water. However, the world has been slower to meet the challenge of sanitation provision for the world's poor. We analyze previously cited barriers to sanitation coverage including inadequate investment, poor or nonexistent policies, governance, too few resources, gender disparities, and water availability. Analysis includes investigation of correlation between indicators of the mentioned barriers and sanitation coverage, correlations among the indicators themselves, and a geospatial assessment of the potential impacts of sanitation technology on global water resources under six scenarios of sanitation technology choice. The challenges studied were found to be significant barriers to sanitation coverage, but water availability was not a primary obstacle at a global scale. Analysis at a 0.5 degrees grid scale shows, however, that water availability is an important barrier to as many as 46 million people, depending on the sanitation technology selected. The majority of these people are urban dwellers in countries where water quality is already poor and may be further degraded by sewering vast populations. Water quality is especially important because this vulnerable population primarily resides in locations that depend on environmental income associated with fish consumption. abstract_id: PUBMED:26837519 Health inequalities by gradients of access to water and sanitation between countries in the Americas, 1990 and 2010. Objective: To explore distributional inequality of key health outcomes as determined by access coverage to water and sanitation (WS) between countries in the Region of the Americas. Methods: An ecological study was designed to explore the magnitude and change-over-time of standard gap and gradient metrics of environmental inequalities in health at the country level in 1990 and 2010 among the 35 countries of the Americas. Access to drinking water and access to improved sanitation facilities were selected as equity stratifiers. Five dependent variables were: total and healthy life expectancies at birth, and infant, under-5, and maternal mortality.
Results: Access to WS correlated with survival and mortality, and strong gradients were seen in both 1990 and 2010. Higher WS access corresponded to higher life expectancy and healthy life expectancy and lower infant, under-5, and maternal mortality risks. Burden of life lost was unequally distributed, steadily concentrated among the most environmentally disadvantaged, who carried up to twice the burden they would if WS were fairly distributed. Population averages in life expectancy and specific mortality improved, but whereas absolute inequalities decreased, relative inequalities remained mostly invariant. Conclusions: Even with the Region on track to meet MDG 7 on water and sanitation, large environmental gradients and health inequities among countries remain hidden by Regional averages. As the post-2015 development agenda unfolds, policies and actions focused on health equity-mainly on the most socially and environmentally deprived-will be needed in order to secure the right for universal access to water and sanitation. abstract_id: PUBMED:34653204 Unequal geographic distribution of water and sanitation at the household and school level in Sudan. The Sudanese Government launched the National SDG-6 Plan and commenced its implementation to achieve and sustain universal and equitable access to basic WASH services by 2030. It is critical to understand the geographical heterogeneity of Sudan and patterns in the inequality of access to safe drinking water and sanitation. Through such research, the disease control strategy can be optimized, and resource allocation can be prioritized. We explored spatial heterogeneity and inequality in access to improved water and sanitation across Sudan by mapping the coverage at both the state and district levels. We decomposed the inequality across Sudan into within-state, between-state, within-district, and between-district inequalities using the Theil L and Theil T indices. We calculated the Gini coefficient to assess the inequality of access to improved water and sanitation, based on the deviation of the Lorenz curve from the line of perfect equality. The study population was 105,167 students aged 8-13 at 1,776 primary schools across the country. Geographical heterogeneity was prominent in the Central Darfur, South Darfur, East Darfur, Kassala, West Kordofan, and Blue Nile States, all of which showed severe inequality in access to an improved latrine at the household level in terms of the Theil T or Theil L index. The overall inequality in the coverage of improved sanitation went beyond the warning limit of 0.4 for the Gini coefficient. The inequality in terms of the Theil L and Theil T indices, as well as the Gini coefficient, was always higher for improved sanitation than for improved water at the household level. Within-state inequality accounted for 66% or more of national inequalities in the distribution of improved sanitation and drinking water for both the Theil L and Theil T indices. This is the first study to measure geographical heterogeneity and inequalities in improved water and sanitation coverage across Sudan. The study may help to prioritize resource allocation to areas with the greatest water and sanitation needs. Answer: Yes, the measure used to assess disparities in sanitation access does matter. Different methodological approaches can yield varying results and interpretations of disparities in sanitation coverage across socio-economic and demographic determinants.
For instance, wealth indices that include water and sanitation assets can overstate disparities in sanitation access compared to indices that exclude them (PUBMED:24851256). Additionally, the use of individual urban and rural settings versus national wealth indices can reveal important disparities, particularly in urban areas. However, analysis of setting quintiles using a national index is generally sufficient (PUBMED:24851256). Moreover, the level of analysis, whether it is child-level or household-level, can affect the estimated coverage and perceived disparities. Sanitation coverage and quintile disparities are consistently lower in household-level estimates, which can underestimate coverage for children under five (PUBMED:24851256). Therefore, the choice of measure can lead to different conclusions about the extent and nature of disparities in sanitation access. It is crucial to select appropriate measures that accurately reflect the disparities in order to inform effective policy and intervention strategies aimed at reducing these disparities and achieving health equity.
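The Sudan abstract above (PUBMED:34653204) measures inequality with the Gini coefficient and decomposes the Theil index into within- and between-area components. The Python sketch below shows one common way to compute these quantities; the district coverage figures and state groupings are invented for illustration, and the exact formulas and weighting used by the study may differ.

```python
import numpy as np

# Hypothetical district-level coverage of improved sanitation (fraction of
# households covered), grouped by state; values are illustrative only.
coverage = {
    "State A": np.array([0.62, 0.55, 0.70, 0.48]),
    "State B": np.array([0.35, 0.20, 0.42]),
    "State C": np.array([0.80, 0.75, 0.78, 0.83, 0.77]),
}

def gini(x):
    """Gini coefficient of a positive array (area-based Lorenz-curve definition)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n

def theil_t(x):
    """Theil T index: mean of (x/mu) * ln(x/mu)."""
    x = np.asarray(x, dtype=float)
    ratio = x / x.mean()
    return np.mean(ratio * np.log(ratio))

values = np.concatenate(list(coverage.values()))
mu, n = values.mean(), len(values)

# Theil T is additively decomposable into between- and within-group terms.
between = sum(len(g) / n * (g.mean() / mu) * np.log(g.mean() / mu) for g in coverage.values())
within = sum(len(g) / n * (g.mean() / mu) * theil_t(g) for g in coverage.values())

print(f"Gini={gini(values):.3f}  Theil T={theil_t(values):.3f}")
print(f"between-state={between:.3f}  within-state={within:.3f}  sum={between + within:.3f}")
```

The within-plus-between sum reproducing the overall Theil T is what lets a study report, as above, that within-state inequality accounts for a given share of national inequality.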
Instruction: Does location of the second internal thoracic artery graft influence outcome of coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:32305200 Long-term outcome of the in situ versus free internal thoracic artery as the second arterial graft. Objectives: The study objective was to determine the impact on outcome associated with using the second internal thoracic artery as a free compared with an in situ graft among patients who received the first internal thoracic artery as an in situ internal thoracic artery to the left anterior descending artery. Methods: Among 2600 patients who underwent bilateral internal thoracic artery with an in situ internal thoracic artery to the left anterior descending artery, the second internal thoracic artery was used as a free graft bilateral internal thoracic artery in 136 patients and as an in situ graft (in situ bilateral internal thoracic artery) in 2464 patients. One-to-many propensity score matching was performed to produce a cohort of 134 patients with a second free graft internal thoracic artery matched to 2359 patients with a second in situ internal thoracic artery. Early and long-term outcomes including survival, hospital readmission, and repeat revascularization up to a maximum of 25.8 years were compared. Results: There were no differences between the 2 matched groups' preoperative baseline characteristics and early adverse events. Long-term survival at 5, 10, and 15 years was significantly higher among patients with an in situ bilateral internal thoracic artery compared with patients with a free graft bilateral internal thoracic artery (hazard ratio free graft bilateral internal thoracic artery vs in situ bilateral internal thoracic artery, 1.53; 95% confidence interval, 1.14-2.10; P = .004). However, the long-term risk of readmission to the hospital for cardiovascular reasons and need for repeat revascularization were not significantly different between the 2 matched groups. Conclusions: In a small, propensity-matched cohort of patients undergoing coronary artery bypass grafting, the use of a second in situ internal thoracic artery was associated with an increase in late survival compared with the use of a second internal thoracic artery as a free graft. However, the risk of late hospital readmission and the need for repeat revascularization were similar. abstract_id: PUBMED:18824756 Does location of the second internal thoracic artery graft influence outcome of coronary artery bypass grafting? Background: We sought to determine whether location of the second internal thoracic artery (ITA) graft used for bilateral ITA grafting affects mortality and morbidity of patients with 3-system coronary artery disease and to identify factors associated with second ITA location. Methods And Results: From January 1972 to June 2006, 3611 patients with 3-system coronary artery disease underwent bilateral ITA grafting with one graft anastomosed to the left anterior descending system and the second to either the circumflex (n=2926) or right coronary artery (n=685) system. Follow-up was 9.2+/-7.2 years. Propensity score methodology was used to obtain risk-adjusted outcome comparisons between patients with the second ITA to circumflex versus right coronary artery. 
Hospital mortality (0.34% versus 0.58%; P=0.4), stroke (0.96% versus 0.88%; P=0.8), myocardial infarction (1.3% versus 0.73%; P=0.2), renal failure (0.44% versus 0.29%; P=0.6), respiratory insufficiency (3.5% versus 3.8%; P=0.7), and reoperation for bleeding (3.4% versus 3.2%; P=0.8) were similar in patients who received the second ITA to circumflex or right coronary artery and remained similar after propensity score adjustment. Late survival (86% versus 87% at 10 years) was also similar. Despite this, there was a gradual decline in ITA to right coronary artery grafting. Conclusions: Contrary to prevailing wisdom that the second ITA graft should be anastomosed to the next most important left-sided coronary artery in 3-system coronary artery disease, it may be placed to either the circumflex or right coronary artery system with similar early and late outcomes. abstract_id: PUBMED:31638700 Bilateral internal thoracic artery grafting: propensity analysis of the left internal thoracic artery versus the right internal thoracic artery as a bypass graft to the left anterior descending artery. Objectives: To compare different configurations of the bilateral internal thoracic arteries for the left coronary system and examine early and late outcomes, including mid-term graft patency. Methods: We reviewed 877 patients who underwent primary isolated coronary artery bypass grafting using in situ bilateral internal thoracic arteries [in situ right internal thoracic artery (RITA)-to-left anterior descending artery (LAD) grafting, n = 683; in situ left internal thoracic artery (LITA)-to-LAD grafting, n = 194]. We compared mid-term patency between the grafts. Propensity score matching was performed to investigate early and long-term outcomes. Results: The 2-year patency rates for RITA-to-LAD and LITA-to-LAD grafts were similar. Multivariate analysis revealed that RITA-to-non-LAD anastomosis (P = 0.029), postoperative length of stay (P = 0.003) and chronic obstructive pulmonary disease (P = 0.005) were associated with graft failure. After statistical adjustment, 176 propensity-matched pairs were available for comparison. RITA-to-LAD grafting enabled a more distal anastomosis. Kaplan-Meier analysis revealed that the incidences of death, repeat revascularization and myocardial infarction were significantly higher in the LITA-to-LAD group among both the unmatched and matched samples (P = 0.045 and 0.029, respectively). Conclusions: The mid-term patency and outcomes of RITA-to-LAD grafting are good and reduce future cardiac events, in contrast to LITA-to-LAD grafting. abstract_id: PUBMED:26907619 Management of a Left Internal Thoracic Artery Graft Injury during Left Thoracotomy for Thoracic Surgery. There have been some recent reports on the surgical treatment of lung cancer in patients following previous coronary artery bypass graft surgery. Use of internal thoracic artery graft is a gold standard in cardiac surgery with superior long-term patency. Left internal thoracic artery graft is usually patent during left lung resection in patients who present to the surgeon with an operable lung cancer. We have presented our institutional experience with left-sided thoracic surgery in patients who have had previous coronary artery surgery with a patent internal thoracic artery graft.
Objective: The aim of our study was to evaluate the long-term outcome of patients exclusively undergoing total arterial revascularization off-pump coronary artery bypass grafting and to compare the performance of the radial artery and the right internal thoracic artery as a second conduit. Methods: We studied a consecutive series of 1700 patients undergoing off-pump coronary artery bypass grafting, receiving a radial artery or right internal thoracic artery as a second graft for total arterial revascularization, between 2003 and 2010. A total of 1447 patients (85.11%) received bilateral internal thoracic artery grafting, and 253 patients (14.89%) received left internal thoracic artery and radial artery grafting. A propensity score-matched analysis was performed to compare the 2 groups, bilateral internal thoracic artery and left internal thoracic artery and radial artery, relative to overall survival, morbidity, and combined end points event-free survival. Hazard ratios (HRs) and their 95% confidence intervals (CIs) were estimated by Cox regression. Results: In the full unmatched patient population, the postoperative survival (HR, 0.59; 95% CI, 0.38-0.92; P = .021), incidence of reintervention/readmission (HR, 0.42; 95% CI, 0.28-0.61; P < .001), and combined end points (HR, 0.47; 95% CI, 0.35-0.63; P < .001) were significantly better in the bilateral internal thoracic artery group. In the propensity score-matched patient population, the incidence of reintervention/readmission (HR, 0.40; 95% CI, 0.18-0.88; P = .02) and combined end points (HR, 0.54; 95% CI, 0.32-0.92; P = .02) were significantly better in the bilateral internal thoracic artery group compared with the left internal thoracic artery-radial artery group. Conclusions: The results of our study provide evidence for the superiority of the right internal thoracic artery graft compared with the radial artery as a second conduit in total arterial revascularization off-pump coronary artery bypass grafting. abstract_id: PUBMED:25762396 Equipoise between radial artery and right internal thoracic artery as the second arterial conduit in left internal thoracic artery-based coronary artery bypass graft surgery: a multi-institutional study. Objectives: Multiple arterial coronary artery grafting (MABG) improves long-term survival compared with single arterial CABG (SABG), yet the best second arterial conduit to be used with the left internal thoracic artery (LITA) remains undefined. Outcomes in patients grafted with radial artery (RA-MABG) versus right internal thoracic artery (RITA-MABG) as the second arterial graft were compared with SABG. Methods: Multi-institutional, retrospective analysis of non-emergent isolated LITA to left anterior descending coronary artery CABG patients was performed using institutional Society of Thoracic Surgeons National Adult Cardiac Surgery Databases. 4484 (54.5%) SABG [LITA ± saphenous vein grafts (SVG)], 3095 (37.6%) RA-MABG (RA ± SVG) and 641 (7.9%) RITA-MABG (RITA ± SVG) patients were included. The RITA was used as a free (68%) or in situ (32%) graft. RA grafts were principally anastomosed to the ascending aorta. Long-term survival was ascertained from US Social Security Death Index and institutional follow-up. Triplet propensity matching and covariate-adjusted multivariate logistic regression were used to adjust for baseline differences between study cohorts.
Results: Compared with the SABG cohort, the RITA-MABG cohort was younger (58.6 ± 10.2 vs 65.9 ± 10.4, P < 0.001), had a higher prevalence of males (87% vs 65%, P < 0.001) and was generally healthier (MI: 36.7% vs 56.7%, P < 0.001, smoking: 56.8% vs 61.1%, IDDM: 3.0% vs 14.4%, CVA: 2.6% vs 10.0%). The RA-MABG cohort was generally characterized by a risk profile intermediate to that of SABG and RITA-MABG. Unadjusted 5-, 10- and 15-year survival rates were best in RITA-MABG (95.2%, 89% and 82%), intermediate in RA-MABG (89%, 74%, 57%) and worst in SABG (82%, 61% and 44%) cohorts (all P < 0.001). Propensity matching yielded 551 RA-MABG, RITA-MABG and SABG triplets, which showed similar 30-day mortality. Late survival (16 years) was equivalent in the RA-MABG and RITA-MABG cohorts [68.2% vs 66.7%, P = 0.127, hazard ratio (HR) = 1.28 (0.96-1.71)] and both significantly better than SABG (61.1%). The corresponding SABG versus RITA-MABG and SABG versus RA-MABG HRs (95% confidence interval) were 1.52 (1.18-1.96) and 1.31 (1.01-1.69) with P < 0.002 and P = 0.038, respectively. Conclusions: RA-MABG or RITA-MABG equally improve long-term survival compared with SABG and thus should be embraced by the Heart Team as the therapy of choice in LITA-based coronary artery bypass surgery. abstract_id: PUBMED:17992305 Immediate results of right internal thoracic artery and radial artery as the second arterial graft in myocardial revascularization. Objective: We sought to compare early clinical outcomes in patients receiving a right internal thoracic artery or a radial artery as the second arterial graft in myocardial revascularization. Methods: We retrospectively studied 58 consecutive patients who underwent coronary artery bypass surgery and received both a left internal thoracic artery graft and either a right internal thoracic artery (n=20) or a radial artery graft (n=38), between January 2004 and March 2006. Hospital mortality, pleural drainage, operative time and postoperative complications were analyzed. Results: There were no significant preoperative differences between groups. There was only one (1.7%) in-hospital death which occurred in the Radial Group. Operative times were significantly higher in the Right Internal Thoracic Group (p-value = 0.0018), but were not associated with increased Intensive Care Unit stays, mechanical ventilation or other postoperative complications. We were able to perform significantly more distal anastomoses using the radial artery than the right internal thoracic artery (1.57 versus 1.05; p-value = 0.003). Conclusion: In our group of patients, the use of a right internal thoracic artery as a second arterial graft was associated with a prolonged operative time, but had no interference with the immediate clinical outcomes. abstract_id: PUBMED:36328116 Long-term outcomes of patients receiving right internal thoracic artery or radial artery as a second arterial conduit. A propensity score matching study. Objective: This study aimed to compare the long-term outcomes in propensity matched patients receiving right internal thoracic artery (RITA) or radial artery (RA) as second arterial conduit during coronary artery bypass grafting (CABG) with internal thoracic artery to the left anterior descending artery (LAD). Methods: In this retrospective study, propensity score matching was performed including 1198 patients from 3 centers resulting in 389 pairs who received either RITA or RA.
Results: In the matched cohort, median follow-up time was 7.53 years (interquartile range, 4.35-11.81). Survival probabilities at 5, 10 and 15 years were 93.8% versus 94.5%, 81.2% versus 76.2% and 63.2% vs 62.5% in the RITA and RA groups, respectively (HR: 1.11; 95% CI: 0.80-1.53; P = 0.533). Freedom from MACCE in the matched cohort at 5, 10 and 15 years were 92.0% versus 93.7%, 75.0% versus 73.8%, 72.2% and 46.9% vs 47.2% in the RITA and RA groups, respectively (HR: 0.96; 95% CI: 0.74-1.26; P = 0.774). Subgroup analyses of the matched cohort showed comparable long-term outcomes in terms of MACCE at follow-up in patients older than 65 years, obese patients, diabetics, female patients and patients with impaired EF. As for target vessel revascularization, RITA and RA had comparable outcomes in terms of MACCE when the conduit was used to graft either the left coronary system or the right coronary system. Conclusions: The use of RITA or RA as second arterial conduit during CABG with internal thoracic artery to the LAD is safe and associated with comparable long-term clinical outcomes. The choice of optimal second arterial conduit should be guided mainly by patients' characteristics and surgeons' preferences. abstract_id: PUBMED:38126056 Flow characteristics of in-situ internal thoracic artery graft with competitive flow. We report flow characteristics of an in-situ internal thoracic artery (LITA) graft with angiographically competitive flow to the left anterior descending artery (LAD), based upon intraoperative transit-time flow measurement (TTFM) during coronary artery bypass grafting with aortic valve replacement (AVR) and during re-AVR seven years later. Although intraoperative TTFM of the graft showed lower mean flow and higher pulsatility index, suggesting inadequate anastomosis, fast Fourier transform (FFT) analysis of TTFM waveforms presented gradual waning of the amplitude, as shown in patent grafts. FFT analysis of the TTFM waveforms is helpful to judge the patency of LITA to LAD, even with competitive flow. Learning Objective: The internal thoracic artery (LITA) graft to left anterior descending artery (LAD) with angiographically competitive flow shows gradual waning of the amplitude on fast Fourier transform (FFT) analysis of the transit-time flow measurement (TTFM) waveforms, although lower mean graft flow, higher pulsatility index, and higher systolic reversal flow may suggest inadequate anastomosis. FFT analysis of the TTFM waveforms is useful to judge the patency of LITA to LAD, even with competitive flow. abstract_id: PUBMED:25700706 Retroaortic right internal thoracic artery grafting of circumflex artery targets. Background: Retroaortic right internal thoracic artery grafting has failed to gain popularity. We aimed to delineate patient correlates of eligibility, distribution of targets, and effects of target remoteness on outcome and risk of retroaortic bleeding. Methods: Data of 861 patients undergoing skeletonized bilateral internal thoracic artery grafting (2007-2012) were analyzed according to retroaortic (n = 300) or T-graft configuration. Retroaortic graft subgroups were categorized according to proximal (first obtuse marginal, ramus) or distal (2nd, 3rd, or sequential 1st-2nd obtuse marginal) circumflex artery targets. Results: LOESS curve analyses revealed that taller patients had a higher likelihood of retroaortic grafting.
The distribution of 337 retroaortic graft targets (300 patients) was first obtuse marginal in 74.5%, 2nd marginal in 12.4%, ramus in 11.2%, 3rd marginal in 1.9% and sequential 1st-2nd marginal in 12.3%. The success rate in reaching proximal and distal circumflex artery targets was 97% and 30%, respectively; 5-year survival (92.2%) and freedom from major adverse cardiac and cerebrovascular events (85.8%) were comparable between proximal and distal retroaortic graft subgroups. Distal circumflex artery targets had no effect on the occurrence of major adverse cardiac and cerebrovascular events. The incidence of retroaortic bleeding from skeletonized retroaortic grafts was 0.6%. Conclusions: Taller patients have a greater likelihood of qualifying for retroaortic grafting. This technique is highly reproducible for proximal but not distal circumflex artery targets. Distal circumflex artery targets are not independent correlates of early or late adverse outcomes. The risk of retroaortic bleeding is low despite retroaortic right internal thoracic artery skeletonization. Answer: The location of the second internal thoracic artery (ITA) graft in coronary artery bypass grafting (CABG) does influence outcomes to some extent. A study comparing the use of the second ITA as a free graft versus an in situ graft found that long-term survival at 5, 10, and 15 years was significantly higher among patients with an in situ bilateral ITA compared with those with a free graft bilateral ITA. However, the long-term risk of readmission to the hospital for cardiovascular reasons and the need for repeat revascularization were not significantly different between the two groups (PUBMED:32305200). Another study sought to determine whether the location of the second ITA graft, whether to the circumflex or right coronary artery system, affects mortality and morbidity. The results indicated that early and late outcomes were similar regardless of whether the second ITA was anastomosed to the circumflex or right coronary artery system (PUBMED:18824756). A propensity analysis comparing the use of the left versus right ITA as a bypass graft to the left anterior descending artery (LAD) revealed that the mid-term patency and outcomes of right ITA-to-LAD grafting are good and reduce future cardiac events, in contrast to left ITA-to-LAD grafting (PUBMED:31638700). In terms of the choice between the radial artery and the right ITA as the second arterial conduit, one study found that the right ITA graft was superior to the radial artery as a second conduit in total arterial revascularization off-pump CABG (PUBMED:23566513). However, another multi-institutional study found that both radial artery and right ITA as the second arterial conduit improved long-term survival compared with single arterial CABG, and the outcomes were similar between the two (PUBMED:25762396). In summary, while the location of the second ITA graft can influence outcomes, particularly in terms of long-term survival, the differences in outcomes may not be significant in terms of hospital readmission and repeat revascularization. The choice between the right ITA and radial artery as the second arterial conduit seems to offer similar benefits in terms of long-term survival, suggesting that either can be used effectively as a second arterial graft in CABG.
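Several of the comparisons above (e.g., PUBMED:32305200, PUBMED:36328116, PUBMED:25762396) rest on propensity score matching to balance baseline differences between grafting strategies. As a rough illustration of the idea only, the Python sketch below fits a propensity model and performs 1:1 nearest-neighbour matching on a simulated cohort; the covariates are arbitrary, the data are random, and a real analysis would add calipers, balance diagnostics, and time-to-event models rather than simple event rates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated cohort: covariates (age, diabetes, ejection fraction), a binary
# "treatment" (1 = free-graft second ITA, 0 = in situ second ITA) and a binary
# outcome; all values are invented for illustration, not study data.
rng = np.random.default_rng(1)
n = 400
age = rng.normal(65, 9, n)
diabetes = rng.integers(0, 2, n)
ef = rng.normal(55, 8, n)
X = np.column_stack([age, diabetes, ef])
treated = rng.integers(0, 2, n)
outcome = rng.integers(0, 2, n)

# 1) Estimate the propensity score: P(treated | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated_idx = np.where(treated == 1)[0]
control_pool = list(np.where(treated == 0)[0])
pairs = []
for i in treated_idx:
    if not control_pool:
        break
    j = min(control_pool, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    control_pool.remove(j)

# 3) Compare outcomes within the matched cohort.
t_rate = np.mean([outcome[i] for i, _ in pairs])
c_rate = np.mean([outcome[j] for _, j in pairs])
print(f"matched pairs: {len(pairs)}, event rate treated={t_rate:.2f} vs control={c_rate:.2f}")
```

Because both groups here are generated at random, the matched event rates should be similar; the point is only to show where the propensity score enters the comparison, not to reproduce any of the cited analyses.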
Instruction: Do the reticulocyte maturation fractions and bone marrow reticulocyte count further help the classification of anemias? Abstracts: abstract_id: PUBMED:8039039 Do the reticulocyte maturation fractions and bone marrow reticulocyte count further help the classification of anemias? Background: Reticulocyte count plays a major role in anemic evaluation. The conventional method done by the manual supravital staining cannot subclassify the group of less than 2% of corrected reticulocyte count. The newly-developed flow cytometer provides different maturation fractions by measuring its fluorescent intensity. The reticulocytes are believed to shift to the circulation from the bone marrow earlier in more severe anemia. Therefore, the purpose of this study is to evaluate the role of reticulocyte maturation fractions and bone marrow reticulocyte in anemia classification. Methods: By using a fully automated counter, the roles of the reticulocyte with maturation and their shifting from bone marrow were evaluated in anemias. Different groups of subjects (243 in total) including aplastic, nutritional, and infiltrative anemias and anemia due to excess destruction and blood loss were studied. Each subject had bone marrow examination for morphologic diagnosis and reticulocyte evaluation. Results: Both the absolute count and the maturation fractions of reticulocytes showed significant difference among marrow infiltration, aplastic anemia, and hemolytic anemia. Both the absolute reticulocyte count and less mature fractions were lowest in aplastic group. The marrow reticulocyte counts and shift ratio to circulating blood added little benefit in the classification of anemias. Conclusions: The automated reticulocyte count with maturation fractions helps classify anemias, particularly for those with low reticulocyte count by the manual method. abstract_id: PUBMED:5773082 Characteristics of marrow production and reticulocyte maturation in normal man in response to anemia. Erythropoiesis in normal man was studied during periods of phlebotomy-induced anemia of varying severity. This study permitted a comparison of marrow production measurements over a wide range of marrow production levels. As long as the serum iron remained above 50 mug/100 ml, measurements of plasma iron turnover provided an excellent index of marrow production at all levels of red cell production. In contrast, the absolute reticulocyte count demonstrated a poor correlation with the other measurements. This was shown to be the result of a prolongation of the time required for circulating reticulocytes to lose their reticulum, which correlated with the severity of the anemia. For the clinical application of the reticulocyte count as a measurement of marrow production, an adjustment must be made for this alteration in the circulating reticulocyte maturation time. abstract_id: PUBMED:7306023 Separation of haemopoietic cells for biochemical investigation. Preparation of erythroid and myeloid cells from human and laboratory-animal bone marrow and the separation of erythroblasts according to their state of maturation. The separation of haemopoietic bone-marrow cells by centrifugation through discontinuous density gradients of Percoll is described. This method was used to prepare fractions enriched in erythroblasts, myeloid blast cells or reticulocytes from bone marrow of anaemic and non-anaemic rabbits, from the marrow of other anaemic laboratory animals and from human samples. 
It is a simple, rapid, reproducible and inexpensive technique that can be readily adapted to suit individual requirements. Secondly, a convenient method is presented for the separation of large quantities of bone-marrow cells into fractions enriched in erythroblasts at different stages of maturation, by velocity sedimentation through a linear gradient of 1-2% sucrose at unit gravity. In vitro, erythroblasts adhere together strongly via a mechanism almost certainly involving a beta-galactoside-specific surface lectin termed erythroid developmental agglutinin. Since the efficiency of cell-separation techniques depends heavily on the maintenance of a single cell suspension in which each unit can move independently, the presence of an adhesive molecule at the cell surface is of considerable significance. The effect of washing the marrow with a lactose-containing medium, which has been shown to remove the agglutinin, was therefore investigated in relation to both methods. The separation on Percoll gradients is considerably enhanced by this treatment. In addition, the unit-gravity sedimentation gradient can be loaded with 5-10 times more cells after lactose extraction in comparison with intact marrow. Although enrichment is less, a useful fractionation according to maturation is still obtained. abstract_id: PUBMED:14578739 Bone marrow failure following severe injury in humans. Background: Hematopoietic failure has been observed in experimental animals following shock and injury. In humans, bone marrow dysfunction has been observed in the red cell component and characterized by a persistent anemia, low reticulocyte counts, and the need for repeated transfusions despite adequate iron stores. While a quantitative defect in white blood cell count has not been noted, an alteration in white blood cell function manifesting as an increased susceptibility to infection is well established. Since the etiology of this anemia remains unknown and the bone marrow has been rarely studied following injury, we measured various parameters of hematopoiesis directly using bone marrow from trauma patients and tested the hypothesis that trauma results in profound bone marrow dysfunction, which could explain both the persistent anemia and the alteration in white blood cell function. Methods: Bone marrow aspirates and peripheral blood were obtained between day 1 and 7 following injury from 45 multiple trauma patients. Normal volunteers served as controls. Peripheral blood was assayed for hemoglobin concentration, reticulocyte count, erythropoietin levels, white blood cell count, and differential. Peripheral blood and bone marrow were cultured for hematopoietic progenitors (CFU-GM, BFU-E, and CFU-E colonies). Results: Bone marrow CFU-GM, BFU-E, and CFU-E colony formation was significantly reduced while peripheral blood CFU-GM, BFU-E, and CFU-E was increased in the trauma patients compared with normal volunteers. Bone marrow stroma failed to grow to confluence by day 14 in &gt;90% of trauma patients. In contrast, bone marrow stroma from volunteers always reached confluence between days 10 and 14 in culture. The mean hemoglobin concentration and reticulocyte counts of the trauma patients were 8.6 +/- 1.0 g/dL and 2.75 +/- 0.7% respectively, while their plasma erythropoietin levels were 2 to 10 times greater than control values. Conclusions: Release of immature white blood cells into the circulation may also contribute to a failure to clear infection and an increased propensity to organ failure. 
Concomitantly, profound changes occur within the bone marrow, which include the increased release of erythroid and myeloid progenitors into the circulation, a decrease in progenitor cell growth within the bone marrow, and an impaired growth of the bone marrow stroma. Erythropoietin levels are preserved following trauma, implying that the persistent anemia of injury is related to the failure of the bone marrow to respond to erythropoietin. abstract_id: PUBMED:35381671 Characteristics of bone marrow compensatory erythropoiesis in hereditary spherocytosis Objective: To reveal the compensatory features of bone marrow (BM) erythropoiesis in hereditary spherocytosis (HS) and to explore the effect of different hemoglobin levels on this compensation. Methods: Clinical and laboratory data of patients with HS were collected, and the peripheral blood absolute reticulocyte count was taken as the surrogate parameter to evaluate the ability of erythropoiesis compensation. BM erythropoiesis compensation in HS with different degrees of anemia was evaluated. Results: ①Three hundred and two patients were enrolled, including 115 with compensated hemolytic disease, 74 with mild anemia, 90 with moderate anemia, and 23 with severe anemia. ②Hemoglobin (HGB) was negatively correlated with serum erythropoietin (EPO) in the decompensated hemolytic anemia group (rs=-0.585, P<0.001). ③The median absolute reticulocyte count (ARC) of HS patients was 0.34 (0.27, 0.44) ×10^12/L, up to 4.25 times that of normal people. The maximum ARC was 0.81×10^12/L, about 10 times that of normal people. The median ARC of patients with compensated hemolytic disease was 0.29 (0.22, 0.38) ×10^12/L, up to 3.63 times that of normal people. The median ARC of patients with hemolytic anemia was 0.38 (0.30, 0.46) ×10^12/L, which was significantly higher than that of patients with compensated hemolytic disease, up to 4.75 times that of normal people (z=4.999, P=0.003). ④ARC was negatively correlated with HGB in the compensated hemolytic disease group (rs=-0.177, P=0.002) and positively correlated with HGB in the decompensated hemolytic anemia group (rs=0.191, P=0.009). There was no significant difference in the ARC among patients with mild, moderate, and severe anemia (χ²=4.588, P=0.101). ⑤The median immature reticulocyte production index of the mild, moderate, and severe anemia groups was 13.1% (9.1%, 18.4%), 17.0% (13.4%, 20.8%), and 17.8% (14.6%, 21.8%), respectively; the mild anemia group had lower index values than the moderate and severe anemia groups (P(adj) values were both <0.05), but there was no significant difference between the latter groups (P(adj)=1.000). The median immature reticulocyte count of patients in the mild, moderate, and severe groups was 5.09 (2.60, 7.74) ×10^10/L, 6.24 (4.34, 8.83) ×10^10/L, and 7.00 (3.07, 8.22) ×10^10/L, respectively; there was no significant difference among the groups (χ²=3.081, P=0.214). Conclusion: HGB can be maintained at a normal level through bone marrow erythropoiesis, while red blood cells are reduced in HS. However, once anemia develops, the bone marrow exerts its maximum erythropoiesis capacity and does not increase, regardless of anemia aggravation or serum EPO increase. abstract_id: PUBMED:4073571 Equine bone marrow: a quantitative analysis of erythroid maturation. The equine bone marrow responds to blood loss by increased erythropoiesis, only releasing reticulocytes into the peripheral circulation in severe chronic anemia.
We have used morphometric analysis based on electron microscopy of the equine marrow to examine the maturation and release of reticulocytes. Developing red cells in the bone marrows of normal and chronically anemic horses were divided into four stages: early, intermediate, late-stage erythroblasts, and reticulocytes. Morphometric analysis of each stage included volume density of mitochondria per micron3 of cytoplasm, surface area of the outer mitochondrial membrane per unit volume of mitochondria, and the number of ribosomes per unit volume of cytoplasm (total, clustered, single). Matched t tests between normal and anemic animals showed significant differences (P less than or equal to .001) for volume density of mitochondria and numbers of ribosomes only at the reticulocyte stage. The large reticulocyte produced and released in chronic anemia may be best explained by a skipped mitotic division. abstract_id: PUBMED:3986679 An appraisal of bone marrow biopsy in assessment of sick dogs. Dogs were classified into a number of disease categories according to hematological, cytological and serochemical changes. Aspiration and core bone marrow biopsies were examined in 128 dogs in the various disease categories and compared to marrow samples in 36 dogs which appeared clinically normal. Differential cell counts on bone marrow smears were examined in relation to the blood variables in all animals. Blood and bone marrow data (group means) were compared among the normal and disease groups. Anemia, responsive and poorly responsive was the most frequent blood abnormality. Most dogs in the thrombocytopenia group had increased numbers of megakaryocytes in the marrow but two dogs had a marked decrease. The frequency of serious alteration of marrow production of the erythroid, myeloid and megakaryocytic series was less than anticipated. Marrow hemopoiesis was not significantly compromised in dogs with lymphoma or in dogs with other types of cancer. Bone marrow examination was necessary for the diagnosis of myelofibrosis and pancytopenia and was very helpful in the groups with insufficient change in the blood to permit a definitive diagnosis to be made. The myeloid-erythroid ratio was a useful indicator of marrow response while the erythroid maturation index and the myeloid maturation index were useful for identification of altered patterns of maturation (ineffective hemopoiesis). The reticulocyte response in absolute numbers is the most efficient and clinically relevant measure of erythroid response. abstract_id: PUBMED:1348380 A light and electron microscopic study of changes in blood and bone marrow in acute hemorrhagic Trypanosoma vivax infection in calves. Eleven 6-month-old calves were tsetse fly challenged with a stock of Trypanosoma vivax (IL 2337) that causes hemorrhagic infection. The calves were randomly euthanatized every 4 to 6 days; two other calves served as controls. Peripheral blood changes included anemia, thrombocytopenia, and an initial leukopenia. Later in the course of infection, leukocytosis associated with lymphocytosis and neutropenia developed. Moderate reticulocytosis (highest mean count 3.6 +/- 3.7%, maximum count 9.4%) accompanied the first wave of parasitemia, but poor response (highest mean 0.4 +/- 0.0%) occurred during the second wave, despite the persistence of severe anemia. Light microscopic examination of bone marrow samples showed a drop in the myeloid: erythroid ratio with a decrease in granulocytes, particularly metamyelocytes, bands, and segmenters. 
Increase in lymphocyte counts corresponded with the appearance of lymphoid nodules within the marrow. Megakaryocytic volume increased significantly in infected animals, and some megakaryocytes showed emperipolesis of red cells, neutrophils, and lymphocytes. Transmission electron microscopic examination of the bone marrow revealed that trypanosomes had crossed the sinusoidal endothelium into the hematopoietic compartment as early as the second day of parasitemia. Macrophages proliferated in the bone marrow; and from the second day of parasitemia until the end of the experimental infection, on day 46, the macrophages had phagocytosed normoblasts, eosinophil and neutrophil myelocytes, metamyelocytes, bands, and segmenters, as well as reticulocytes, erythrocytes, and thrombocytes. Therefore, dyserythropoiesis and dysgranulocytopoiesis were responsible, in part, for the observed anemia and granulocytopenia, respectively. abstract_id: PUBMED:16227676 Automated analyzer evaluation of reticulocytes in bone marrow and peripheral blood of hematologic disorders. The R-3000 reticulocyte analyzer uses flow cytometry with an argon laser as its light source. This analyzer stains residual RNA with auramine O to provide a reticulocyte maturation differential. Using the R-3000, we analyzed 119 samples of bone marrow (BM) and peripheral blood (PB) from 111 patients with hematologic disorders. Parameters were reticulocytes, immature reticulocyte fraction (IRF) percentage in BM and PB, BM/PB reticulocyte ratio, and BM/PB IRF ratio. Reticulocytes and IRF percentage in BM were significantly higher than in PB (p &lt; 0.01). There was also a good correlation between reticulocyte percentages in BM and in PB (r = 0.81). Patients were classified into a normal group (without anemia) and an anemia group. Furthermore, the anemia group was classified into three groups: group 1: cases with hematopoietic dysfunction; group 2: cases in bone marrow recovery phase after chemotherapy and hematopoietic stem cell transplantation, and hematologic disorders with bone marrow accelerative phase, and group 3: cases with ineffective hematopoiesis (myelodysplastic syndrome). The mean reticulocyte percentage of the normal group was 2.3 +/- 1.1%, which was close to the normal value in BM. The BM/PB reticulocyte ratio of group 3 was statistically higher than that of groups 1 and 2. This indicates that group 3 had ineffective erythropoiesis and that the BM/PB ratio is a useful indicator for the diagnosis of myelodysplastic syndrome. abstract_id: PUBMED:8192858 Transferrin-binding capacity by rat bone marrow populations containing different proportions of erythroid cells. The transferrin-binding capacity of rat bone marrow cells and different erythroid-enriched populations from rat bone marrow has been studied and compared with that previously reported for pure reticulocyte populations. Two components with different transferrin-binding capacities were found. The one with higher affinity was present mainly in erythroid cells and showed an association constant similar to that observed in reticulocyte populations. The component with a lower transferrin affinity was observed mainly in bone marrow fractions containing low proportions of erythroid cells. Answer: Yes, the reticulocyte maturation fractions and bone marrow reticulocyte count do help in the further classification of anemias. 
The study by PUBMED:8039039 found that the automated reticulocyte count with maturation fractions is particularly helpful in classifying anemias, especially for those with low reticulocyte counts by the manual method. The absolute count and the maturation fractions of reticulocytes showed significant differences among various types of anemia, such as marrow infiltration, aplastic anemia, and hemolytic anemia. However, the marrow reticulocyte counts and shift ratio to circulating blood added little benefit in the classification of anemias. Additionally, PUBMED:5773082 demonstrated that the absolute reticulocyte count had a poor correlation with marrow production measurements due to a prolongation of the time required for circulating reticulocytes to lose their reticulum, which correlated with the severity of the anemia. This suggests that for clinical application, an adjustment must be made for the alteration in the circulating reticulocyte maturation time. Furthermore, PUBMED:35381671 highlighted that in hereditary spherocytosis, the bone marrow can maintain hemoglobin at a normal level through erythropoiesis, but once anemia develops, the bone marrow exerts its maximum erythropoiesis capacity and does not increase, regardless of anemia aggravation or serum erythropoietin increase. In summary, reticulocyte maturation fractions and bone marrow reticulocyte counts provide valuable information for the classification of anemias, although their utility may vary depending on the type and severity of the anemia.
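To make the maturation-time adjustment described in PUBMED:5773082 concrete, the sketch below computes a corrected reticulocyte count and a reticulocyte production index (RPI). The reference hematocrit of 45% and the maturation-factor brackets are standard textbook conventions assumed here, not values taken from these abstracts, and the function names are illustrative only; treat this as a minimal sketch of the idea rather than the method used in any of the cited studies.

```python
def corrected_reticulocyte_percent(retic_percent: float, hematocrit: float,
                                   reference_hematocrit: float = 45.0) -> float:
    """Scale the raw reticulocyte percentage for the degree of anemia."""
    return retic_percent * (hematocrit / reference_hematocrit)


def maturation_factor(hematocrit: float) -> float:
    """Approximate peripheral reticulocyte maturation time (days) by hematocrit.

    These cut-offs are conventional textbook values (assumed, not taken from
    the abstracts above); lower hematocrits imply earlier marrow release and
    a longer circulating maturation time.
    """
    if hematocrit >= 40:
        return 1.0
    if hematocrit >= 30:
        return 1.5
    if hematocrit >= 20:
        return 2.0
    return 2.5


def reticulocyte_production_index(retic_percent: float, hematocrit: float) -> float:
    """Corrected reticulocyte count divided by the maturation factor."""
    corrected = corrected_reticulocyte_percent(retic_percent, hematocrit)
    return corrected / maturation_factor(hematocrit)


# Example: a raw reticulocyte count of 6% at a hematocrit of 25% corresponds to
# a corrected count of about 3.3% and an RPI of about 1.7, i.e. a weaker marrow
# response than the raw figure alone would suggest.
if __name__ == "__main__":
    print(round(reticulocyte_production_index(6.0, 25.0), 2))
```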
Instruction: Programmable shunts and headphones: Are they safe together? Abstracts: abstract_id: PUBMED:9454985 Breakage of shunt devices (Sophy programmable pressure valve) following implantation in the hypochondriac region. We report on three cases in which shunt devices (Sophy programmable pressure valve) broke while implanted in the hypochondriac region. None of these patients had an apparent history of trauma related to the shunt valve. The outlet connector was broken at the same site in each case. The dissociation between the movement of the thorax and that of the abdomen may have caused the formation of a crack in the outlet connector. abstract_id: PUBMED:21082054 A comparative result of ventriculoperitoneal shunt, focusing mainly on gravity-assisted valve and programmable valve. Objective: Despite rapid evolution of shunt devices, the complication rates remain high. The most common causes are turning from obstruction, infection, and overdrainage into mainly underdrainage. We investigated the incidence of complications in a consecutive series of hydrocephalic patients. Methods: From January 2002 to December 2009, 111 patients underwent ventriculoperitoneal (VP) shunting at our hospital. We documented shunt failures and complications according to valve type, primary disease, and number of revisions. Results: Overall shunt survival time was 268 weeks. Mean survival time of gravity-assisted valve (GAV) was 222 weeks versus 286 weeks for other shunts. Survival time of programmable valves (264 weeks) was longer than that of pressure-controlled valves (186 weeks). The most common cause for shunt revision was underdrainage (13 valves). The revision rate due to underdrainage in patients with GAV (7 of 10 patients) was higher than that for other valve types. Of 7 patients requiring revision for GAV underdrainage, 6 patients were bedridden. The overall infection rate was 3.6%, which was lower than reported series. Seven patients demonstrating overdrainage had cranial defects when operations were performed (41%), and overdrainage was improved in 5 patients after cranioplasty. Conclusion: Although none of the differences was statistically significant, some of the observations were especially notable. If a candidate for VP shunting is bedridden, GAV may not be indicated because it could lead to underdrainage. Careful procedure and perioperative management can reduce infection rate. Cranioplasty performed prior to VP shunting may be beneficial. abstract_id: PUBMED:27041631 Shunt Devices for the Treatment of Adult Hydrocephalus: Recent Progress and Characteristics. Various types of shunt valves have been developed during the past 50 years, most of which can be classified into the following categories: (1) fixed differential pressure valves; (2) fixed differential pressure (DP) valves with an antisiphon mechanism; (3) programmable DP valves; (4) programmable DP valves with an antisiphon mechanism; and (5) programmable antisiphon valves. When considering the myriad of possible postoperative condition changes, such as the onset of accidental non-related diseases or trauma in adults, and changes in normal physiological development or anticipation of future shunt removal in children, it has become standard to use the programmable valve as a first choice for cerebrospinal fluid shunting. However, it is still unclear what type of shunt valve is suitable for each individual case. 
Based on the results of SINPHONI and more recently SINPHONI 2 trials, the programmable DP valve is recommended as the first line shunt valve. The programmable DP valve with an antisiphon mechanism is thought to be beneficial for tall, slender patients, who have a tendency for easily developing complications of overdrainage, however, this type of valve must be used cautiously in obese patients because of the increased risk of underdrainage. Although the current evidence is still insufficient, the programmable antisiphon valve, which costs the same as the programmable DP valve, is also thought to be the first line shunt valve. The quick reference table is applicable for most shunt valves, and for patients with either the ventriculoperitoneal or the lumboperitoneal shunt. abstract_id: PUBMED:21989777 Cochlear implantation for total deafness after ipsilateral ventriculoperitoneal shunt surgery: technical report. Background: Cochlear implantation (CI) becomes more challenging when a programmable ventriculoperitoneal (VP) shunt is present on the same side of the head. We report a successful ipsilateral CI in a patient who suffered bilateral hearing loss after shunt surgery. Method: We describe the CI case of a 49-year-old woman with an implanted VP shunt device, and perform a retrospective review of the clinical, audiological, and radiological features, surgical techniques, and postoperative functional outcomes. Findings: During CI, the magnet in the core of internal antenna of the CI device was positioned approximately 8 cm from the shunt valve so that it would not be affected by the magnetic field of the programmable valve. Although the CI antenna tip and shunt catheter overlapped, we were very careful not to pull the shunt catheter out of position while the musculoperiosteal flap was elevated. Conclusions: The present result suggests that ipsilateral CI is an option for deaf adult patients who have a programmable shunt. abstract_id: PUBMED:32542676 Ventriculo-peritoneal shunting devices for hydrocephalus. Background: Hydrocephalus is a common neurological disorder, caused by a progressive accumulation of cerebrospinal fluid (CSF) within the intracranial space that can lead to increased intracranial pressure, enlargement of the ventricles (ventriculomegaly) and, consequently, to brain damage. Ventriculo-peritoneal shunt systems are the mainstay therapy for this condition, however there are different types of shunt systems. Objectives: To compare the effectiveness and adverse effects of conventional and complex shunt devices for CSF diversion in people with hydrocephalus. Search Methods: We searched the Cochrane Central Register of Controlled Trials (2020 Issue 2); Ovid MEDLINE (1946 to February 2020); Embase (Elsevier) (1974 to February 2020); Latin American and Caribbean Health Science Information Database (LILACS) (1980 to February 2020); ClinicalTrials.gov; and World Health Organization International Clinical Trials Registry Platform. Selection Criteria: We selected randomised controlled trials or quasi-randomised trials of different types of ventriculo-peritoneal shunting devices for people with hydrocephalus. Primary outcomes included: treatment failure, adverse events and mortality. Data Collection And Analysis: Two review authors screened studies for selection, assessed risk of bias and extracted data. Due to the scarcity of data, we performed a Synthesis Without Meta-analysis (SWiM) incorporating GRADE for the quality of the evidence. 
Main Results: We included six studies with 962 participants assessing the effects of standard valves compared to anti-syphon valves, other types of standard valves, self-adjusting CSF flow-regulating valves and external differential programmable pressure valves. All included studies started in a hospital setting and offered ambulatory follow-up. Most studies were conducted in infants or children with hydrocephalus from diverse causes. The certainty of the evidence for most comparisons was low to very low. 1. Standard valve versus anti-syphon valve Three studies with 296 randomised participants were included under this comparison. We are uncertain about the incidence of treatment failure in participants with standard valve and anti-syphon valves (very low certainty of the evidence). The incidence of adverse events may be similar in those with standard valves (range 0 to 1.9%) and anti-syphon valves (range 0 to 2.9%) (low certainty of the evidence). Mortality may be similar in those with standard valves (0%) and anti-syphon valves (0.9%) (RD 0.01%, 95% CI -0.02% to 0.03%, low certainty of the evidence). Ventricular size and head circumference may be similar in those with standard valves and anti-syphon valves (low certainty of the evidence). None of the included studies reported the quality of life of participants. 2. Comparison between different types of standard valves Two studies with 174 randomised participants were included under this comparison. We are uncertain about the incidence of treatment failure in participants with different types of standard valves (early postoperative period: RR 0.41, 95% CI 0.13 to 1.27; at 12 months follow-up: RR 1.17, 95% CI 0.72 to 1.92, very low certainty of the evidence). None of the included studies reported adverse events beyond those included under "treatment failure". We are uncertain about the effects of different types of standard valves on mortality (range 2% to 17%, very low certainty of the evidence). The included studies did not report the effects of these interventions on quality of life, ventricular size reduction or head circumference. 3. Standard valve versus self-adjusting CSF flow-regulating valve One study with 229 randomised participants addressed this comparison. The incidence of treatment failure may be similar in those with standard valves (42.98%) and self-adjusting CSF flow-regulating valves (39.13%) (low certainty of the evidence). The incidence of adverse events may be similar in those with standard valves (range 0 to 1.9%) and those with self-adjusting CSF flow-regulating valves (range 0 to 7.2%) (low certainty of the evidence). The included study reported no deaths in either group in the postoperative period. Beyond the early postoperative period, the authors stated that nine patients died (no disaggregated data by each type of intervention was available, low certainty of the evidence). The included studies did not report the effects of these interventions on quality of life, ventricular size reduction or head circumference. 4. External differential programmable pressure valve versus non-programmable valve One study with 377 randomised participants addressed this comparison. The incidence of treatment failure may be similar in those with programmable valves (52%) and non-programmable valves (52%) (RR 1.02, 95% CI 0.84 to 1.24, low certainty of the evidence). 
The incidence of adverse events may be similar in those with programmable valves (6.19%) and non-programmable valves (6.01%) (RR 0.97, 95% CI 0.44 to 2.15, low certainty of the evidence). The included study did not report the effect of these interventions on mortality, quality of life or head circumference. Ventricular size reduction may be similar in those with programmable valves and non-programmable valves (low certainty of the evidence). Authors' Conclusions: Standard shunt valves for hydrocephalus compared to anti-syphon or self-adjusting CSF flow-regulating valves may make little to no difference to the main outcomes of this review; however, we are very uncertain due to the low to very low certainty of evidence. Similarly, different types of standard valves and external differential programmable pressure valves versus non-programmable valves may be associated with similar outcomes. Nevertheless, this review did not include valves with the latest technology, for which we need high-quality randomised controlled trials focusing on patient-important outcomes including costs. abstract_id: PUBMED:18402072 Development of shunt technology especially for idiopathic normal pressure hydrocephalus Outcomes of treatment for hydrocephalus, especially idiopathic normal pressure hydrocephalus (iNPH), have improved after shunting operations owing to the evolution of shunt systems, better management of shunt complications, and a better comprehension of the pathophysiology of NPH. The history of shunt systems is helpful for understanding the systems now available. The first-generation systems emerged as the typical standard differential pressure valves. The second-generation valves were developed for the management of overdrainage problems and siphoning control. The Japanese guidelines for iNPH recommend the use of a programmable valve in the shunt system. Various shunt systems have now been developed for the treatment of hydrocephalus. Since the millennium, new third-generation shunt systems, which combine a programmable valve with an anti-siphon device, have become available for the management of iNPH. The renewed recognition of NPH and familiarity with these various shunt devices enable a new stage in the clinical science of the diagnosis and management of iNPH. abstract_id: PUBMED:28243998 Shunt Devices for Neurointensivists: Complications and Management. Cerebrospinal fluid diversion has become the mainstay treatment in hydrocephalus for over 50 years. As the number of patients with ventricular shunt systems increases, neurointensivists are becoming the first-line physicians for many of these patients. When symptoms of a shunt malfunction are suspected and access to a neurosurgeon is limited or delayed, workup and temporizing measures must be initiated. The article highlights the functional nuances, complications, and management of current programmable shunt valves and their MRI sensitivity. abstract_id: PUBMED:6338302 Simplified method of heterotopic cardiac transplantation for left ventricular assist. A method of heterotopic cardiac transplantation has been developed which is performed through a left thoracotomy and requires only two vascular anastomoses. Systemic heparinization and cardiopulmonary bypass are not necessary.
Preliminary experiments demonstrated that this heterotopic transplant technique was effective for left ventricular assist; in some experiments, a multimode atrioventricular programmable pacemaker was used to provide sequential pacing for "counterpulsation." This technique or similar techniques may have potential clinical benefit for patients with intractable left ventricular failure. abstract_id: PUBMED:2595781 The pharmacokinetic advantage of local 6-mercaptopurine infusion in a canine renal transplant model. In light of recent technologic advances, we developed a canine renal allograft model utilizing implantable, programmable infusion pumps and biocompatible catheters to reexplore the concept of local immunosuppression. Thirteen mongrel dogs underwent bilateral nephrectomy and autotransplantation of 1 kidney via end-to-end renal-iliac artery and end-to-side renal-iliac vein anastomoses. The proximal end of an infusion catheter directed into the iliac artery was tunneled to a subcutaneously placed programmable pump. A second, sampling catheter was placed with its tip in the iliac vein just proximal to the venous anastomosis. During a period of i.a. infusion of heparinized saline ranging from 19 to 63 days, serum creatinine remained normal in all but 1 animal, which developed pyelonephritis and catheter-tip perforation of the iliac artery. No cases of arterial thrombosis or catheter migration were observed at necropsy. In 7 additional autotransplanted dogs, simultaneous iliac vein and systemic (jugular vein) concentrations of 6-mercaptopurine (6-MP), the major immunosuppressive metabolite of azathioprine, were determined during a continuous 24-hr i.a. infusion (10 mg/kg/24 hr). Following termination of the infusion, 10 mg/kg 6-MP was administered to the same 7 dogs as an i.v. bolus, and systemic drug concentrations were determined over a 4-hr period. Mean +/- SE total-body clearance and elimination half-life were 887 +/- 159 ml/min and 1.4 +/- 0.2 hr, respectively, in the i.v. bolus study, indicating that 6-MP is rapidly cleared from the systemic circulation. Unexpectedly, the kidney removed as much as 60-95% of locally infused 6-MP, reducing the amount of active drug entering the systemic circulation to 5-40% of that which would be present during an i.v. infusion of the same dose. According to the principles governing the advantages of i.a. infusions, these data demonstrate that 6-MP can be infused intrarenally to produce both a 4-fold increase in drug concentration within the kidney and an 80% decrease in systemic drug delivery when compared to same-dose i.v. administration. The overall result is the presence of a 30-fold gradient between local and systemic drug concentrations during intrarenal 6-MP infusion. We conclude that i.a. infusion of an immunosuppressive agent is technically feasible with preservation of renal function, and that 6-MP can be delivered locally in a canine model with great pharmacokinetic and potential therapeutic advantage. abstract_id: PUBMED:17392667 Right coronary system grafts: alone or together with left system grafts--angiographic results. Purpose: The aim of this study is to compare the long-term graft patency between patients who had sequential or individual right posterior descending artery (RPDA) anastomosis. Materials And Methods: Two hundred and forty-two patients underwent coronary artery bypass grafting (CABG) between June 1994 and December 2003. They were examined retrospectively with respect to coronary angiograghic data. 
[Group 1] Individually right system grafts in RPDA position (n=139). [Group 2] RPDA anastomosis sequentially with left system (n=103). Patency rates for posterior descending arteries in each group were separately calculated for each vessel quality category. Results: The mean interval from operation to angiography was 50.6+/-48.9 months in group 1 vs 57.5+/-39.2 months in group 2 respectively. The overall patency rate was 66.2% (92/139) in group 1 and 78.6% (81/103) in group 2 (p=0.04). When the RPDA has good run-off capacity, the patency rate was 69.1% in group 1 and 85.2% in group 2. Conclusion: When the RPDA has good run-off capacity, snake grafts show excellent results and right and left coronary systems could be anastomosed sequentially. Answer: The safety of using headphones with programmable shunts is not directly addressed in the provided abstracts. However, there are some considerations that can be inferred from the information about programmable shunts and their sensitivity to magnetic fields. Programmable shunts, such as the Sophy programmable pressure valve, are designed to allow adjustments in the pressure settings to accommodate the changing needs of patients with hydrocephalus (PUBMED:9454985, PUBMED:27041631). These devices often contain components that can be affected by strong magnetic fields, which can inadvertently alter their settings (PUBMED:28243998). In one of the abstracts, during cochlear implantation, which involves the use of a device with a magnet, care was taken to position the magnet in the cochlear implant approximately 8 cm away from the programmable shunt valve to avoid interference from the magnetic field (PUBMED:21989777). This suggests that there is a concern about the proximity of magnets to programmable shunt devices. While headphones typically contain magnets, these are generally much weaker than those used in medical devices like cochlear implants. However, without specific information on the magnetic field strength that can affect programmable shunts, it is not possible to definitively say whether using headphones is safe for patients with these devices. It would be prudent for patients with programmable shunts to consult with their healthcare provider or the manufacturer of their shunt device for guidance on the use of headphones or exposure to other magnetic fields.
Instruction: Do differences exist between chronic hepatitis C genotypes 2 and 3? Abstracts: abstract_id: PUBMED:24861286 Do differences exist between chronic hepatitis C genotypes 2 and 3? Introduction: Six genotypes of the hepatitis C virus (HCV) have been identified thus far, and their distribution is well defined. Genotype 1, which is the most prevalent worldwide, is always compared to genotypes 2 and 3, particularly in terms of treatment response. However, little is known about the differences between genotypes 2 and 3 because these genotypes are analyzed together in most studies. Therefore, the aim of this study was to evaluate differences in the clinical, epidemiological, laboratory, and histological parameters between HCV-2 and HCV-3. Methods: Patients with chronic hepatitis C infected with genotypes 2 and 3 were studied retrospectively and compared according to clinical, laboratory, and histological aspects. Hepatitis C virus-ribonucleic acid (HCV-RNA) was analyzed quantitatively by TaqMan® real-time PCR, and the HCV genotype was determined by sequencing the 5'-untranslated region. Results: A total of 306 patients with chronic HCV-2 (n=50) and HCV-3 (n = 256) were studied. Subtype 2b (n=17/50) and subtype 3a (n=244/256) were the most prevalent among patients infected with HCV-2 and HCV-3, respectively. The mean age was 47 ± 10 years, and there was a predominance of men in the group studied (61%). Comparative analysis between HCV-2 and HCV-3 showed a younger age (p=0.002), a lower prevalence of arterial hypertension (p=0.03), higher serum albumin levels (p=0.01), a more advanced stage of liver fibrosis (p=0.03), and a higher frequency of steatosis in patients with HCV-3 (p=0.001). After multivariate regression analysis, all the variables, except serum albumin, remained as variables associated with HCV-3 in the final model. Conclusions: Clinical and histological differences exist between HCV-2 and HCV-3, which suggests the need for separate analyses of these genotypes. abstract_id: PUBMED:26600223 Hepatitis C Virus: Current and Evolving Treatments for Genotypes 2 and 3. Introduction: Hepatitis C virus (HCV) genotypes 2 and 3 have previously been classified as easy-to-treat genotypes, because sustained virologic responses (SVRs) up to 80% have been achieved with 24-week peginterferon and ribavirin. More detailed studies have shown differences between HCV genotypes 2 and 3, indicating that genotype 3 has become the most difficult-to-treat genotype. With new drugs, new challenges are emerging regarding relapse rates, the role of ribavirin, and optimal duration of therapy. Sofosbuvir remains the backbone of genotype 3 therapy, whereas this drug is not an option in patients with creatinine clearance lower than 30 mL/min. abstract_id: PUBMED:22729189 Therapeutic effectiveness of biosimilar standard interferon versus pegylated interferon for chronic hepatitis C genotypes 2 or 3. Background: Pegylated interferon (Peg-IFN) and standard interferon (IFN) play a significant role in the treatment of hepatitis C virus (HCV) infection. Biosimilar standard IFN is widely available in Brazil for the treatment of HCV infection genotypes 2 or 3, but its efficacy compared to Peg-IFN is unknown. Objective: To compare the sustained virological response (SVR) rates following treatment with biosimilar standard IFN plus ribavirin (RBV) versus Peg-IFN plus RBV in patients with HCV genotypes 2 or 3 infection.
Methods: A retrospective cohort study was conducted in patients with HCV genotypes 2 or 3 infection treated with biosimilar standard IFN plus RBV or with Peg-IFN plus RBV. SVR rates of the two treatments were compared. Results: From January 2005 to December 2010, 172 patients with a mean age of 44 +/- 9.3 years were included. There were eight (4.7%) patients with HCV genotype 2 infections. One hundred fourteen (66.3%) were treated with biosimilar standard IFN plus RBV, whilst 58 (33.7%) patients were treated with Peg-IFN plus RBV. Between the two groups, there were no significant differences regarding age, gender, glucose level, platelet count, hepatic necroinflammatory grade, and hepatic fibrosis stage. Overall, 59.3% (102/172) of patients had SVR. In patients treated with Peg-IFN plus RBV, 79.3% (46/58) had SVR compared to 49.1% (56/114) among those treated with biosimilar standard IFN plus RBV (p = 0.0001). Conclusion: In patients with HCV genotypes 2 or 3 infection, a higher SVR rate was observed in patients receiving Peg-IFN plus RBV compared with those treated with biosimilar standard IFN plus RBV. abstract_id: PUBMED:32735876 Hepatitis C virus genotypes and associated risk factors in the state of Pará, Northern Brazil. Background: Despite the emergence of more effective therapies, hepatitis C virus (HCV) infection remains a serious public health problem at the global level. Currently, this virus is classified into seven genotypes and 67 subgenotypes, which in turn are distributed heterogeneously in Brazil and worldwide. Studies have shown that this genetic divergence results in differences in the progression of chronic disease associated with HCV infection and its treatment. Objective: The aim of this study was to report the frequency of HCV genotypes in the state of Pará, Northern Brazil, and to assess the association between genotype and different clinical and laboratory characteristics, as well as risk factors for infection. Method: Data from 85 medical records of untreated patients who had chronic hepatitis C infection were analyzed; the patients were evaluated at two hospitals in Belem, Pará, Brazil. Results: Circulation of genotypes 1 and 3 was detected, with a higher prevalence of genotype 1 (75.3%) than genotype 3 (24.7%). In addition, there was a predominance of subgenotype 1b (60.34%) compared to 1a (20.69%) and 3a (18.97%). Reuse of needles and/or glass syringes was significantly more associated with infection by HCV genotype 1 than with genotype 3; however, the small number of patients infected with genotype 3 may have biased the results. No associations between genotype and the evaluated clinical and laboratory characteristics were observed. Conclusion: This study reinforces the differences in the distribution of HCV genotypes in Brazil and showed no association between HCV genotype and progression of chronic hepatitis C in the studied group.
Patients coinfected with hepatitis B or human immunodeficiency virus, patients with chronic renal failure, and recipients of an organ transplant were excluded. Thirty of the African Americans were treated with either standard interferon or peginterferon and ribavirin as initial treatment for chronic HCV. Ninety of the 334 Caucasians were matched to the African Americans with regard to genotype, cirrhosis, treatment regimen, sex, age, and body weight for comparison of virologic response. The proportion of patients with HCV genotype 2 was significantly greater (P < 0.001) in African Americans compared with Caucasians (81% vs 52%). End-of-treatment virologic response was observed in 94% of Caucasians compared with 80% in African Americans (P = 0.036). SVR was observed in 82% and 57% of Caucasians and African Americans, respectively (P = 0.012). Similar results were observed when patients who had been treated with only peginterferon and ribavirin were assessed. These results suggest that African Americans have a global defect in their ability to eradicate HCV infection following treatment with interferon and ribavirin which transcends across all genotypes. abstract_id: PUBMED:37766298 Hepatitis C and Human Pegivirus Coinfection in Patients with Chronic Hepatitis C from the Brazilian Amazon Region: Prevalence, Genotypes and Clinical Data. Coinfection of HPgV-1 with hepatitis C virus (HCV) is common due to shared modes of transmission, with a prevalence of HPgV-1 viremia of approximately 20% among individuals with chronic HCV infection. The aim of the present study was to estimate the prevalence of HPgV-1 RNA and circulating genotypes in patients with hepatitis C from a health service located in the city of Belém, in the state of Pará, Northern Brazil. A total of 147 samples were included in the study from February to December 2019. Among the participants, 72.1% (106/147) were monoinfected with HCV, with detectable HCV viral RNA, and 27.9% (41/147) were coinfected with HCV/HPgV-1. The most frequently found genotypes were HPgV-1 genotypes 1 and 2 (36.6% and 63.4%, respectively), while for HCV there was a predominance of genotypes 1 and 3 (58.5% and 41.5%). No significant differences were found when comparing any risk, sociodemographic, or clinical factors between groups. Also, there was no statistically significant difference when relating the viral genotypes of both agents. This study indicated that the prevalence of infection by HPgV-1 is high in HCV carriers in Belém, Pará, and probably does not change the clinical course of HCV infection; however, further studies are still needed.
Genotype 1 subtyping in a subset of 1,488 patients showed 54% (803/1,488) of 1b infections and 46% (685/1,488) of 1a infections. Percentages of genotype 1 were the highest in Central/Northwestern and Eastern Croatia and the lowest in the Central/Southern Adriatic Region. Genotype 3 was most frequently found in the Central/Southern Adriatic Region (49.1%) but represented only 17.5% of infections in Eastern Croatia (p < 0.001). Conclusions: The results of this nine-year retrospective analysis on the distribution of HCV genotypes and subtypes in 3,655 HCV-infected individuals from Croatia showed that the majority of infections can be attributed to genotypes 1 and 3, with an absence of major changes in the molecular epidemiology of the two most frequent HCV genotype infections in Croatia in the past 20 years. abstract_id: PUBMED:19950804 Is HCV genotyping cost-effective even when the prevalences of genotypes 2 and 3 are low? Background/aims: The management of chronic hepatitis C virus (HCV) infection is costly. Genotyping determines the indication, probability of response, and duration of treatment and the dose of ribavirin. Although genotyping is accepted as cost-effective, the cost of genotyping in all of the patients to find out a minority may offset the gain. The present study aimed: (1) to determine the frequency rate of HCV genotypes and (2) to compare the cost of HCV treatment tailored according to the genotype versus that planned supposing it to be genotype 1. Methodology: Six centers were included in the study. Name, age, genotype, and serotype of each patient were entered. For genotyping, HCV-RNA was extracted by the acid-guanidium-phenol-chloroform method. Cost of genotyping, HCV-RNA studies and the treatment with pegylated interferon and ribavirin was estimated. The cost was determined according to two scenarios: (A) To manage patients as if all had genotypes other than 2-3. (B) To manage them after determining the genotype. The management was assumed to be made by current guidelines. Results: The data of 834 patients were analyzed: Genotype 1 was predominant: 730 (87.5%). The rest was composed of G2: 26 (3.1%), G3: 26 (3.1%), G4: 14 (1.7%), mixed: 13 (1.6%), undetermined: 25 (3%). The cost of approach A (for 100 patients) was 1,718,200 USD; that of approach B (for 100 patients) was 1,671,900 USD. With genotype-targeted therapy, every 100 patients would save 46,300 USD. Conclusions: The prevalent genotype in our country is genotype 1. The sum of genotypes 2 and 3 corresponds to 6%. Genotyping HCV and tailoring the treatment thereafter are cost-effective even in countries where the prevalence of these genotypes is low. abstract_id: PUBMED:19207964 Optimizing the treatment of chronic hepatitis due to hepatitis C virus genotypes 2 and 3: a review. Recently several randomized trials involving exclusively HCV 2 and 3 patients have explored the possibility of reducing the duration of therapy with PEG IFNs and ribavirin to 12-16 weeks. Among these, the largest studies (ACCELERATE, NORTH-C and NORDynamIC) have failed to demonstrate, by intention-to-treat analysis, that short treatment is non-inferior to the standard duration of 24 weeks originated by phase 3 trials.
Even though univocal conclusions from these studies are difficult to obtain due to some critical differences (trial design, genotype 2/3 ratio, rate of cirrhosis at baseline, ribavirin dose, assays to detect HCV-RNA, etc.), all have proved that a rapid virological response (HCV-RNA negative at 4 weeks) is the strongest predictor of SVR. Therefore, by excluding risk factors for virological relapse at baseline and identifying, in the early phase of treatment, features related to a sustained response, the decision to reduce the duration of treatment to less than 24 weeks in HCV-2 and 3 patients can be response-guided appropriately. Ongoing studies will assess whether extended 48-week regimens can benefit non-RVR patients with HCV 2 or 3, especially those with more severe fibrosis. abstract_id: PUBMED:20032543 Optimal length of antiviral therapy in patients with hepatitis C virus genotypes 2 and 3: a meta-analysis. Background: Current guidelines recommend a duration of 24 weeks of treatment with pegylated interferon and ribavirin for patients infected with chronic hepatitis C virus (HCV) genotypes 2 and 3. Several trials investigated whether shorter treatment duration is equally effective in achieving sustained virological response (SVR). Our aim was to determine the optimal length of treatment in patients with HCV genotypes 2 and 3. Methods: A systematic literature search identified eight randomized controlled trials (RCTs). Meta-analyses were carried out on SVR data from three studies randomized at baseline and five studies randomized at rapid virological response (RVR) to either 12-16 weeks or a 24-week course. Results: Pooled SVR data were higher with standard treatment in RCTs that randomized at baseline, with a relative risk (RR) of 0.88 (95% confidence interval [CI] 0.76-1.01). The pooled SVR rates in RCTs that randomized at RVR were similar in the short treatment group (82%) and the standard treatment group (83%), with the pooled effect given by an RR of 1.00 (95% CI 0.92-1.09). Conclusions: A shorter course (12-16 weeks) of combination therapy does not impair efficacy compared with a 24-week course in HCV genotypes 2 and 3 patients who achieve an RVR. HCV patients without RVR should consider 24 weeks of treatment. Answer: Yes, differences do exist between chronic hepatitis C genotypes 2 and 3. A study that evaluated the clinical, epidemiological, laboratory, and histological parameters between HCV-2 and HCV-3 found that patients with HCV-3 were younger, had a lower prevalence of arterial hypertension, higher serum albumin levels, more advanced stage of liver fibrosis, and a higher frequency of steatosis compared to those with HCV-2. After multivariate regression analysis, all the variables except serum albumin remained associated with HCV-3 (PUBMED:24861286).
In terms of treatment effectiveness, a study comparing the sustained virological response (SVR) rates following treatment with biosimilar standard interferon plus ribavirin versus pegylated interferon plus ribavirin in patients with HCV genotypes 2 or 3 infection found a higher SVR in patients receiving Peg-IFN plus RBV compared to those treated with biosimilar standard IFN plus RBV (PUBMED:22729189). These findings suggest that there are indeed differences between chronic hepatitis C genotypes 2 and 3, which can have implications for the clinical management and treatment of these infections.
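As an aside on the cost arithmetic reported in PUBMED:19950804 above, the short sketch below reproduces the comparison between treating every patient as genotype 1 (approach A) and tailoring therapy after genotyping (approach B). Only the two per-100-patient totals come from that abstract; the constant names, the helper function, and the assumption of linear scaling to other cohort sizes are illustrative additions, not part of the study.

```python
# Per-100-patient totals reported in PUBMED:19950804 (USD).
COST_TREAT_ALL_AS_GENOTYPE_1 = 1_718_200  # approach A: no genotyping
COST_GENOTYPE_GUIDED = 1_671_900          # approach B: genotype, then tailor therapy


def saving_for_cohort(n_patients: int) -> float:
    """Expected saving (USD) from genotype-guided therapy for n_patients.

    Assumes the per-100-patient totals scale linearly with cohort size,
    which is an assumption here, since the abstract reports costs only
    per 100 patients.
    """
    saving_per_100 = COST_TREAT_ALL_AS_GENOTYPE_1 - COST_GENOTYPE_GUIDED
    return saving_per_100 * n_patients / 100


if __name__ == "__main__":
    print(saving_for_cohort(100))  # 46300.0, matching the reported saving per 100 patients
    print(saving_for_cohort(834))  # rough extrapolation to the 834-patient study cohort
```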