Instruction: Does Emotion Dysregulation Mediate the Association Between Sluggish Cognitive Tempo and College Students' Social Impairment? Abstracts: abstract_id: PUBMED:24691529 Does Emotion Dysregulation Mediate the Association Between Sluggish Cognitive Tempo and College Students' Social Impairment? Objective: Studies demonstrate an association between sluggish cognitive tempo (SCT) and social impairment, although no studies have tested possible mechanisms of this association. This study aimed to (a) examine SCT in relation to college students' social functioning; (b) test if SCT is significantly associated with emotion dysregulation beyond depressive, anxious, and ADHD symptoms; and (c) test if emotion dysregulation mediates the association between SCT symptoms and social impairment. Method: College students (N = 158) completed measures of psychopathology symptoms, emotion dysregulation, and social functioning. Results: Participants with elevated SCT (12%) had higher ADHD, depressive, and anxious symptoms in addition to poorer emotion regulation and social adjustment than participants without elevated SCT. Above and beyond other psychopathologies, SCT was significantly associated with social impairment but not general interpersonal functioning. SCT was also associated with emotion dysregulation, even after accounting for the expectedly strong association between depression and emotion dysregulation. Further analyses supported emotion dysregulation as a mediator of the association between SCT and social impairment. Conclusion: These findings are important for theoretical models of SCT and underscore the need for additional, longitudinal research. abstract_id: PUBMED:36576055 Sluggish Cognitive Tempo (SCT), Comorbid Psychopathology, and Functional Impairment in College Students: The Clinical Utility of SCT Subfactors. Background: Sluggish cognitive tempo (SCT) has been proposed to be either its own distinct disorder or a transdiagnostic process. Objective: To examine SCT within ADHD (and its specific presentations) and internalizing disorders and its relationship with functional impairment, particularly when considered from a multidimensional perspective. Method: Undergraduate students (N = 2,806) completed self-report scales measuring SCT, ADHD, anxiety, depression, and functional impairment. The SCT scale consisted of three subfactors identified in prior research. Results: Students with internalizing disorders were equally as likely as those with ADHD to report clinically significant SCT, and having multiple other disorders predicted especially high levels of SCT symptoms. Only sleepy/sluggish symptoms incrementally predicted impairment. Conclusions: These findings provide more support for SCT as a transdiagnostic process than as a distinct disorder. All areas of SCT symptoms are associated with ADHD, anxiety, and depression, but the sleepy/sluggish symptoms may be uniquely associated with problems in everyday living. abstract_id: PUBMED:31904293 Differential Diagnosis of Sluggish Cognitive Tempo Symptoms in College Students. Objective: Sluggish cognitive tempo (SCT) refers to a set of symptoms that prior research has found to be related to several different psychological disorders, especially the predominantly inattentive presentation of ADHD. This study collected evidence relevant to the question of whether SCT is a distinct disorder. Method: College students (N = 910) completed measures of SCT, ADHD, depression, anxiety, sleep quality, and substance misuse. 
Results: Students reporting clinically high SCT (reporting at least five symptoms often or very often) had significantly higher levels and rates of other types of psychopathology. Moreover, when students reporting clinically significant levels of ADHD, depression, and anxiety symptoms, poor sleep quality, or hazardous levels of alcohol or cannabis use were removed, very few students reporting high SCT remained (only 4.8% of the original high-SCT group). Conclusion: SCT may be best thought of as a symptom set common to many types of psychopathology, and it may be caused by sleep problems or substance misuse as well. abstract_id: PUBMED:30570445 The role of emotion dysregulation in the association between subjective social status and eating expectancies among college students. Objective: Research suggests that college is a risky period for changes in eating behavior and beliefs. Although social health determinants relate to health behavior changes, research has not explored subjective social status, one's societal standing, in terms of eating expectancies among college students. The present study examined the role of emotion dysregulation in the association between subjective social status and eating expectancies among college students. Participants: Participants were a diverse sample of 1,589 college students (80.4% females; mean age = 22.2 years, SD = 5.27) from an urban university. Results: Results showed a significant indirect association of subjective social status via emotion dysregulation in relation to expectancies of eating to help manage negative affect, to alleviate boredom, and to lead to feeling out of control. Conclusion: These findings provide evidence that college students with lower subjective social status may have a higher risk for dysregulated emotions and, consequently, may express maladaptive eating expectancies. abstract_id: PUBMED:27764528 Sluggish Cognitive Tempo is Associated With Poorer Study Skills, More Executive Functioning Deficits, and Greater Impairment in College Students. Objectives: Few studies have examined sluggish cognitive tempo (SCT) in college students even though extant research suggests a higher prevalence rate of SCT symptoms in this population compared to general adult or youth samples. The current study examined SCT symptoms in relation to two domains related to college students' academic success, study skills and daily life executive functioning (EF), as well as specific domains of functional impairment. Method: 158 undergraduate students (mean age = 19.05 years; 64% female) completed measures of psychopathology symptoms, study skills, daily life EF, and functional impairment. Results: After controlling for demographics and symptoms of attention-deficit/hyperactivity disorder (ADHD), anxiety, and depression, SCT remained significantly associated with poorer study skills, greater daily life EF deficits, and global impairment and with greater functional impairment in the specific domains of educational activities, work, money/finances, managing chores and household tasks, community activities, and social situations with strangers and friends. In many instances, ADHD inattentive symptoms were no longer significantly associated with study skills or impairment after SCT symptoms were added to the model. Conclusion: SCT is associated with poorer college student functioning. Findings highlight the need for increased specificity in studies examining the relation between SCT and adjustment.
abstract_id: PUBMED:25520166 Executive Dysfunction and Functional Impairment Associated With Sluggish Cognitive Tempo in Emerging Adulthood. Objective: Research has identified a relationship between sluggish cognitive tempo (SCT) symptoms and symptoms of ADHD, anxiety, and depression; however, no study has controlled for symptoms of ADHD, anxiety, and depression when examining impairment related to SCT symptoms. This study aimed to examine (a) the extent to which functional impairment and executive function (EF) problems were accounted for by SCT symptoms when controlling for ADHD, anxiety, and depression symptoms, and (b) which type of symptoms were associated with the greatest amount of impairment. Method: College students (N = 458) completed self-report scales of ADHD, SCT, anxiety, and depression symptoms, as well as functional impairment and EF problems. Results: Thirteen percent of the sample was found to have high levels of SCT symptoms. SCT symptoms showed a moderate to strong correlation with the other symptom sets; however, high levels of SCT symptoms often occurred separately from high levels of ADHD, anxiety, or depression symptoms. SCT symptoms accounted for the most unique variance for both EF problems and functional impairment. Students with high levels of SCT symptoms, with or without high levels of ADHD symptoms, exhibited more impairment and EF problems than the controls. Conclusion: SCT is a clinical construct worthy of additional study, particularly among college students. abstract_id: PUBMED:27655143 Sluggish Cognitive Tempo and Speed of Performance. Objective: This study examined whether college students who reported higher levels of sluggish cognitive tempo (SCT) symptoms were actually more "sluggish" in their performance while completing speeded cognitive and academic measures. Method: College students (N = 253) completed self-reports of SCT and their reading and test-taking abilities as well as tests of processing speed, reading fluency, and reading comprehension. Results: Across all variables, SCT symptoms were most significantly associated with self-reported difficulty on timed reading tasks. However, students with high SCT scores were not significantly slower than controls on any of the timed tasks. Conclusion: In college students, self-reports of high SCT levels do not suggest actual slow performance on cognitive and academic tasks. abstract_id: PUBMED:25945249 Sluggish cognitive tempo and its neurocognitive, social and emotive correlates: a systematic review of the current literature. Objectives: Since the elimination of items associated with Sluggish Cognitive Tempo (SCT) from the diagnostic criteria of Attention-deficit Hyperactivity Disorder (ADHD) during the transition from DSM-III to DSM-IV, interest in SCT and its associated cognitive as well as emotional and social consequences is on the increase. The current review discusses recent findings on SCT in clinical as well as community-based ADHD populations. The focus is further on clinical correlates of SCT in populations different from ADHD, SCT's genetic background, SCT's association with internalizing and other behavioral comorbidities, as well as SCT's association with social functioning and its treatment efficacy. Method: A systematic review of empirical studies on SCT in ADHD and other pathologies in PsycInfo, SocIndex, Web of Science and PubMed using the key terms "Sluggish Cognitive Tempo", "Cognitive Tempo", "Sluggish Tempo" was performed.
Thirty-two out of 63 studies met inclusion criteria and are discussed in the current review. Results/conclusion: From the current literature, it can be concluded that SCT is a psychometrically valid construct with additive value in the clinical field of ADHD, oppositional defiant disorder (ODD), internalizing disorders and neuro-rehabilitation. The taxonomy of SCT has been shown to be far from consistent across studies; however, the impact of SCT on individuals' functioning (e.g., academic achievement, social interactions) seems remarkable. SCT has been shown to share some genes with ADHD; however, it relates most strongly to non-shared environmental factors. Future research should focus on the identification of adequate SCT measurement to promote symptom-tailored treatment and increase studies on SCT in populations different from ADHD. abstract_id: PUBMED:34236585 Clarifying the Role of Multiple Self-Damaging Behaviors in the Association Between Emotion Dysregulation and Suicide Risk Among College Students. Suicidal behaviors are increasingly prevalent among college students. Although emotion dysregulation is theorized to increase suicide risk, research supporting this relationship is mixed. Engagement in self-damaging behaviors may play a role in the relationship between emotion dysregulation and suicide risk, theoretically by increasing one's capability of engaging in suicidal behaviors. Such behaviors may interact with emotion dysregulation to predict suicide risk. Alternatively, engaging in self-damaging behaviors may mediate the emotion dysregulation-suicide risk relationship. We examined the potential moderating and mediating roles of engagement in multiple self-damaging behaviors in the relationship between emotion dysregulation and suicide risk among college students. Participants were 181 undergraduate students who reported a history of self-damaging behaviors (i.e., non-suicidal self-injury, alcohol misuse, drug misuse, disordered eating), overall emotion dysregulation, and suicide risk. Findings revealed an interactive effect of emotion dysregulation and self-damaging behaviors on suicide risk, with engagement in more forms of self-damaging behaviors conferring higher risk for suicide, particularly in the context of greater emotion dysregulation. The model testing self-damaging behaviors as a mediator was also significant, such that greater emotion dysregulation had an indirect effect on elevated suicide risk via number of self-damaging behaviors. These findings help clarify associations among emotion dysregulation, self-damaging behaviors, and suicide risk, and have implications for specific targets of intervention and for the prevention of suicide by college students. abstract_id: PUBMED:25515678 ADHD Dimensions and Sluggish Cognitive Tempo Symptoms in Relation to Self-Report and Laboratory Measures of Neuropsychological Functioning in College Students. Objective: This study examined ADHD and sluggish cognitive tempo (SCT) symptoms in relation to self-report and laboratory measures of neuropsychological functioning in college students. Method: College students (N = 298, aged 17-25, 72% female) completed self-reports of ADHD, SCT, depression, sleep, functional impairment, and executive functioning (EF). Participants also completed a visual working memory task, a Stroop test, and the Conners' Continuous Performance Test-II (CPT-II).
Results: ADHD inattentive and SCT symptoms were strong predictors of self-reported EF, with inattention the strongest predictor of Time Management and Motivation and SCT the strongest predictor of Self-Organization/Problem Solving. SCT (but not inattention) was associated with Emotion Regulation. No relationships were found between self-reported symptoms and laboratory task performance. Between-group analyses were largely consistent with regression analyses. Conclusion: Self-reported ADHD and SCT symptoms are strongly associated with college students' self-reported EF, but relationships with laboratory task measures of neuropsychological functioning are limited. Answer: Yes, emotion dysregulation does mediate the association between sluggish cognitive tempo (SCT) and college students' social impairment. A study aimed to examine SCT in relation to college students' social functioning, test if SCT is significantly associated with emotion dysregulation beyond depressive, anxious, and ADHD symptoms, and test if emotion dysregulation mediates the association between SCT symptoms and social impairment. The study found that participants with elevated SCT had higher ADHD, depressive, and anxious symptoms in addition to poorer emotion regulation and social adjustment than participants without elevated SCT. SCT was significantly associated with social impairment but not general interpersonal functioning, even after accounting for other psychopathologies. SCT was also associated with emotion dysregulation, even after accounting for the strong association between depression and emotion dysregulation. Further analyses supported emotion dysregulation as a mediator of the association between SCT and social impairment (PUBMED:24691529).
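Note: The mediation result cited in the answer above (PUBMED:24691529) rests on product-of-coefficients logic: the indirect effect of SCT on social impairment through emotion dysregulation is the path from SCT to dysregulation multiplied by the path from dysregulation to impairment, with its uncertainty commonly assessed by bootstrapping. The following minimal Python sketch illustrates that arithmetic on synthetic data; the variable names, effect sizes, and simple linear models are illustrative assumptions, not the study's data or its exact analysis.

import numpy as np

rng = np.random.default_rng(0)
n = 158  # matches the study's sample size; the data below are synthetic

# Synthetic data: SCT -> emotion dysregulation -> social impairment
sct = rng.normal(size=n)
dysreg = 0.5 * sct + rng.normal(size=n)                  # a path
impair = 0.4 * dysreg + 0.1 * sct + rng.normal(size=n)   # b and c' paths

def slopes(X_rows, y):
    # OLS slopes of y on the stacked predictor rows (intercept included)
    X = np.column_stack([np.ones(len(y)), *X_rows])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = slopes([sct], dysreg)[0]            # SCT -> dysregulation
b = slopes([dysreg, sct], impair)[0]    # dysregulation -> impairment, given SCT
indirect = a * b                        # point estimate of the indirect effect

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a_i = slopes([sct[i]], dysreg[i])[0]
    b_i = slopes([dysreg[i], sct[i]], impair[i])[0]
    boot.append(a_i * b_i)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

If the bootstrap interval excludes zero, the indirect (mediated) effect is considered significant, which is the form of evidence the answer above refers to.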
Instruction: Adhesions at repeat cesarean delivery: is there a personal impact? Abstracts: abstract_id: PUBMED:25877223 Adhesions at repeat cesarean delivery: is there a personal impact? Purpose: The rise in the rate of cesarean deliveries highlights complications related to adhesion formation. This study evaluated whether the incidence and severity of adhesions secondary to repeat cesarean deliveries are a consequence of repeated surgeries or due to an individual's propensity to develop adhesions. Methods: A retrospective chart review was conducted for 160 patients who had more than two repeat cesarean deliveries in a single teaching hospital. Data regarding intra-abdominal adhesions were collected. The severity, location, density and amount of adhesions were evaluated based on standard operative reports. Adhesion progression in subsequent cesarean deliveries was evaluated for each individual patient. Results: 69/160 (43%) patients developed significant adhesions following the primary cesarean delivery. Of these, 46 (67%) had significant adhesions at the second surgery. Of the 91 (57%) patients who did not develop significant adhesions after the primary cesarean delivery, 34 (37%) had significant adhesions at the third surgery. A patient presenting with significant adhesions at her second cesarean had a 1.88-fold risk for significant adhesions at her third cesarean (95% CI 1.3-2.7). Conclusions: Our results suggest that adhesion development might be influenced by individual factors more than by the number of cesarean deliveries. abstract_id: PUBMED:29882442 Incidence and sites of pelvic adhesions in women with post-caesarean infertility. This cross-sectional study was designed to evaluate the incidences and sites of pelvic adhesions in women with post-caesarean unexplained infertility. This study was conducted at the Tanta University Hospitals in the period from August 1, 2015 to July 31, 2016. The enrolled patients were assessed by diagnostic laparoscopy for the presence and sites of abdominal and pelvic adhesions. Pelvic adhesions were found in 98 cases (73.13%) and the remaining 36 cases (26.87%) were free of adhesions. Adhesions were tubal in 55.10%, ovarian in 20.40%, combined tubo-ovarian and omental adhesions in 11.22%, uterine adhesions in 6.12% and a frozen pelvis was found in 7.14%. There was no correlation between the severity of the adhesions and the number of previous caesarean sections (CS). The data of this study led us to conclude that pelvic adhesions are common in patients with unexplained infertility following a caesarean delivery. Tubal and ovarian adhesions to the lateral pelvic wall represent a pathognomonic feature in post-caesarean infertility. Impact Statement: What is already known on this subject? Adhesions following a caesarean delivery have been assessed by many studies at the time of the next caesarean delivery. These adhesions have not been studied well in the patients with unexplained infertility. What do the results of this study add? The results of this study specify the incidences and the sites of the adhesions which are considered to be pathognomonic for caesarean section. What are the implications of these findings for clinical practice and/or further research? These findings should be applied when the cases of post-caesarean infertility are evaluated in order to shorten the duration and burdens of infertility. abstract_id: PUBMED:29415600 Quantifying the effects of postcesarean adhesions on incision to delivery time.
Objective: To quantify the effects of postcesarean adhesion severity on the incision to delivery time. Methods: Secondary analysis of data of a prospective randomized controlled trial of women undergoing first repeat cesarean section. The presence and severity of adhesions were reported by surgeons postoperatively and accrued into an adhesion severity score. The primary outcome measure was the correlation between adhesion severity score and incision to delivery time. Results: Of the 97 women analyzed, 47 (48.5%) had an urgent cesarean delivery. Forty-four patients (45.4%) had adhesions. Adhesion score correlated with incision to delivery time (R = .38, p < .01). Patients with adhesions had a significantly longer incision to delivery time (10.3 ± 5.9 versus 8.2 ± 3.7 minutes, respectively; p = .04). In the Kaplan-Meier analysis, more patients with adhesions remained undelivered at any time point after incision (p = .036). The mean delivery time of patients with adhesion score three was significantly longer in comparison with women with no adhesions (13.0 versus 8.2 minutes, respectively; p = .002). Conclusions: Postcesarean adhesions delay delivery of the newborn. There is a linear correlation between adhesion severity and the incision to delivery interval. abstract_id: PUBMED:23292673 The relevance of post-cesarean adhesions. With an increasing number of cesareans and repeat cesarean deliveries, clinicians have started to realize the importance of adhesions after cesarean delivery. Adhesions develop more frequently and with increasing severity with each repeat cesarean, and are associated with increasing maternal morbidity, especially bladder injury and increased delivery time. It appears that adhesion formation could be reduced with closure of the peritoneum, double-layer closure of the uterine incision, and the use of an adhesion barrier. In many reports of adhesion formation after cesarean delivery, authors have used different methods to evaluate adhesions. We encourage clinicians to adopt a newly published site-specific classification of adhesions after caesarean delivery. abstract_id: PUBMED:18756408 Postoperative adhesions: from formation to prevention. Postoperative intra-abdominal and pelvic adhesions are the leading cause of infertility, chronic pelvic pain, and intestinal obstruction. It is generally considered that some people are more prone to develop postoperative adhesions than are others. Unfortunately, there is no available marker to predict the occurrence or the extent and severity of adhesions preoperatively. Ischemia has been thought to be the most important insult that leads to adhesion development. Furthermore, a deficient, suppressed, or overwhelmed natural immune system has been proposed as an underlying mechanism in adhesion development. The type of surgical approach (laparoscopy or laparotomy) and closure of peritoneum in gynecologic surgeries and cesarean section have been debated as important factors that influence the development and extent of postoperative adhesions. In this article, we have reviewed the current state of adhesion development and the effects of barrier agents in prevention of postoperative adhesions. abstract_id: PUBMED:26756563 Economic Impact of the Use of an Absorbable Adhesion Barrier in Preventing Adhesions Following Open Gynecologic Surgeries.
We used an economic model to assess the impact of using the GYNECARE INTERCEED absorbable adhesion barrier for reducing the incidence of postoperative adhesions in open surgical gynecologic procedures. Caesarean section surgery, hysterectomy, myomectomy, ovarian surgery, tubal surgery, and endometriosis surgery were modeled with and without the use of GYNECARE INTERCEED absorbable adhesion barrier. Incremental GYNECARE INTERCEED absorbable adhesion barrier material costs, medical costs arising from complications, and adhesion-related readmissions were considered. GYNECARE INTERCEED absorbable adhesion barrier use was assumed in 75% of all procedures. The economic impact was reported during a 3-year period from a United States hospital perspective. Assuming 100 gynecologic surgeries of each type and an average of one GYNECARE INTERCEED absorbable adhesion barrier sheet per surgery, a net savings of $540,823 with GYNECARE INTERCEED absorbable adhesion barrier during 3 years is estimated. In addition, GYNECARE INTERCEED absorbable adhesion barrier use resulted in 62 fewer cases of patients developing adhesions. Although the use of GYNECARE INTERCEED absorbable adhesion barrier added $137,250 in material costs, this was completely offset by the reduction in length of stay ($178,766 savings), fewer adhesion-related readmissions ($458,220 savings), and operating room cost ($41,078 savings). Adoption of the GYNECARE INTERCEED absorbable adhesion barrier for appropriate gynecologic surgeries would likely result in significant savings for hospitals, driven primarily by clinical patient benefits in terms of decreased length of stay and adhesion-related readmissions. abstract_id: PUBMED:17466672 Incidence of adhesions at repeat cesarean delivery. Objective: To compare the incidence and severity of adhesions at repeat cesarean delivery based on the closure at primary section. Study Design: A retrospective chart review was conducted for 62 cases of repeat cesarean sections. A score was assigned based on the severity of adhesions. The primary operative report was reviewed, and the closure type recorded. Statistical analysis was performed with a t test, chi2, and ANOVA. Results: Forty-nine and eight-tenths percent of cases had extensive adhesions. Closure of the peritoneum or rectus abdominis muscle resulted in significantly fewer extensive adhesions than nonclosure (31.2% vs 70.0%; P = .013). The mean adhesion score for the nonclosure group was 2.67, compared with 1.91 for the parietal peritoneal closure group (P = .044) and 1.73 for the rectus muscle group (P = .009), where 1 is no adhesions and 4 is the most severe. Conclusion: Closure of the rectus muscle or the parietal peritoneum at primary section resulted in significantly fewer adhesions at repeat cesarean delivery. abstract_id: PUBMED:16055575 Peritoneal closure at primary cesarean delivery and adhesions. Objective: To evaluate the effect of parietal peritoneal closure at cesarean delivery on adhesion formation. Methods: A prospective cohort study of women undergoing first repeat cesarean delivery was designed. All surgeons were asked immediately after surgery to score the severity and location of adhesions. Patient records were then abstracted to assess prior surgical technique, including parietal peritoneal closure, other attributes of first surgery, and patient characteristics.
Exclusion criteria included adhesions, other surgery, or use of permanent suture at the first cesarean, unavailable first postoperative note and course, wound infection or breakdown following first surgery, intervening pelvic surgery, insulin-dependent diabetes mellitus, and steroid-dependent disease. The chi2 test and multivariable logistic regression were used for statistical comparison and analysis. A total of 128 patients was required to have 80% power to detect a 50% reduction in adhesions when the parietal peritoneum was left open. Results: One hundred seventy-three patients were enrolled. Prior parietal peritoneal closure was associated with significantly fewer dense and filmy adhesions (52% versus 73%, P = .006) and significantly fewer dense adhesions (30% versus 45%, P = .043). When controlling for potential confounding variables, including prior infection, visceral peritoneal closure, rectus muscle closure, payor status, ethnicity, maternal age, gestational diabetes, and labor, parietal peritoneal closure at primary cesarean delivery was 5-fold protective against all adhesions (odds ratio 0.20, 95% confidence interval 0.08-0.49), and 3-fold protective against dense adhesions (odds ratio 0.32, 95% confidence interval 0.13-0.79). Omental-fascial adhesions were decreased most consistently. Conclusion: Parietal peritoneal closure at primary cesarean delivery was associated with significantly fewer dense and filmy adhesions. The practice of nonclosure of the parietal peritoneum at cesarean delivery should be questioned. abstract_id: PUBMED:29096649 Prevalence of adhesions and associated postoperative complications after cesarean section in Ghana: a prospective cohort study. Background: The global increase in Cesarean section rate is associated with short- and long-term complications, including adhesions with potential serious maternal and fetal consequences. This study investigated the prevalence of adhesions and association between adhesions and postoperative complications in a tertiary referral hospital in Accra, Ghana. Methods: In this prospective cohort study, 335 women scheduled for cesarean section at Korle-Bu Teaching Hospital in Accra, Ghana were included from June to December 2015. Presence or absence of adhesions was recorded and the severity of the adhesions was scored using a classification system. Associations between presence and severity of adhesions, postoperative complications, and maternal and infant outcomes at discharge and 6 weeks postpartum were assessed using multivariate logistic and linear regression analysis. Results: Of the participating women, 128 (38%) had adhesions and 207 (62%) did not. Prevalence of adhesions increased with history of caesarean section (2.8% with no CS but possibly prior abdominal surgery, 51% with one previous CS, 62% with more than one CS). Adhesions significantly increased operation time (mean 39.2 (±15.1) minutes, absolute adjusted difference with presence of adhesions 9.6 min, 95% CI 6.4-12.8), infant delivery time (mean 5.4 (±4.8) minutes, adjusted difference 2.4 min, 95% CI 1.3-3.4), and blood loss for women with severe adhesions (mean blood loss 418.8 ml (±140.6), adjusted difference 57.6 ml, 95% CI 12.1-103.0). No differences for other outcomes were observed. Conclusion: With cesarean section rates rising globally, intra-abdominal adhesions occur more frequently. Risks of adhesions and associated complications should be considered in counseling patients for cesarean section.
abstract_id: PUBMED:34058401 Predicting Intra-abdominal Adhesions for Repeat Cesarean Delivery with the Ultrasound Sliding Sign. Objective: To evaluate the diagnostic value of the "sliding sign", a sonographic test, in predicting intra-abdominal adhesions for women undergoing repeat cesarean delivery. Methods: This was a prospective observational study of women undergoing a scheduled repeat cesarean delivery at a regional hospital. The sliding sign is a sonographic observation of sliding movement between the uterus and the abdominal wall during deep breathing. The absence of the sliding sign was considered indicative of a high risk for dense uterine-abdominal adhesions, and this finding was compared with the operative findings. The primary outcome was the accuracy of the sliding sign in predicting adhesions. Secondary outcomes included incision-to-delivery time, operative time, and blood loss. Results: We examined 112 pregnant women. Dense uterine-abdominal adhesions were found in 15 women, 8 of whom had no sliding sign identified on ultrasound. Therefore, the presence of the sliding sign detected 78 out of 97 women without uterine-abdominal adhesions. The sensitivity, specificity, and positive and negative predictive values of the sliding sign were 53.3%, 80.4%, 29.6%, and 91.8%, respectively. The intra- and inter-observer reliability using the kappa coefficient showed moderate consistency at 0.43 and 0.45, respectively. Conclusion: The sliding sign was less predictive than desired for clinical use, and it only detected half of the women with dense uterine-abdominal adhesions. Furthermore, a moderately low kappa coefficient indicates a lack of reproducibility, further emphasizing the sign's limited clinical value. Answer: The development of adhesions following cesarean delivery does appear to have a personal impact on patients. A study evaluating the incidence and severity of adhesions secondary to repeat cesarean deliveries suggested that individual factors might influence adhesion development more than the number of cesarean deliveries (PUBMED:25877223). This is supported by the finding that patients who presented with significant adhesions at their second cesarean had a 1.88-fold risk for significant adhesions at their third cesarean. Additionally, the study found that 43% of patients developed significant adhesions following the primary cesarean delivery, and of these, 67% had significant adhesions at the second surgery. Conversely, of the patients who did not develop significant adhesions after the primary cesarean delivery, 37% had significant adhesions at the third surgery. Furthermore, another study found no correlation between the severity of adhesions and the number of previous cesarean sections, indicating that the propensity to develop adhesions may not be solely related to the number of surgeries (PUBMED:29882442). This study also highlighted that pelvic adhesions are common in patients with unexplained infertility following a cesarean delivery, with tubal and ovarian adhesions being particularly significant. Moreover, postcesarean adhesions have been shown to delay the delivery of the newborn, with a linear correlation between adhesion severity and the incision to delivery interval (PUBMED:29415600). This indicates that adhesions can have a direct impact on the surgical process and outcomes. 
In summary, the evidence suggests that there is a personal impact of adhesions at repeat cesarean delivery, with individual susceptibility playing a significant role in adhesion development and associated complications.
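Note: The 1.88-fold figure cited above (PUBMED:25877223) is a risk ratio with a log-normal confidence interval. The abstract does not publish the 2x2 cell counts behind that particular second-to-third-cesarean comparison, so the Python sketch below, a hedged illustration, instead uses counts reconstructible from the abstract's progression rates (46/69 with versus 34/91 without significant adhesions after the primary cesarean); it shows the standard arithmetic rather than reproducing the paper's exact estimate.

import math

def risk_ratio(a, b, c, d):
    # 2x2 layout:            outcome+  outcome-
    #   exposed                 a         b
    #   unexposed               c         d
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Counts reconstructed from the abstract's reported rates: 46 of 69 patients
# with significant adhesions after the primary cesarean had them again at the
# second surgery, versus 34 of 91 without such adhesions at the third surgery.
rr, lo, hi = risk_ratio(a=46, b=23, c=34, d=57)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR ~ 1.78 (1.30-2.44)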
Instruction: Intradecidual sign: is it effective in diagnosis of an early intrauterine pregnancy? Abstracts: abstract_id: PUBMED:9280240 Intradecidual sign: is it effective in diagnosis of an early intrauterine pregnancy? Purpose: To determine if the intradecidual sign at sonography is effective in the diagnosis of early intrauterine pregnancy. Materials And Methods: In 102 pregnant patients, transvaginal sonography revealed an intrauterine fluid collection without a yolk sac or embryo. Four observers (experienced sonologist, body imaging fellow, 1st-year radiology resident, and premedical student) determined independently whether the intradecidual sign was absent, present, or indeterminate. Interpretations were limited to visualization of only the uterus. Results: Follow-up revealed intrauterine pregnancy in 91 patients (outcome normal in 48 and abnormal in 43) and ectopic pregnancy in 11 patients. Among the four reviewers, sensitivity for diagnosis of an intrauterine pregnancy was 34%-66%, specificity was 55%-73%, accuracy was 38%-65%, positive predictive value was 91%-93%, and negative predictive value was 12%-16%. Three to five ectopic pregnancies were categorized incorrectly as demonstrating the intradecidual sign, depending on the reviewer. Conclusion: The intradecidual sign does not appear to be sensitive or specific in diagnosis of an early intrauterine pregnancy. When an intrauterine fluid collection is present without an embryo or yolk sac (with positive pregnancy test results), a follow-up sonogram should be obtained unless contraindicated clinically. abstract_id: PUBMED:15333362 The intradecidual sign: is it reliable for diagnosis of early intrauterine pregnancy? Objective: Our aim was to determine the accuracy of the intradecidual sign for the diagnosis of intrauterine pregnancy and the exclusion of ectopic pregnancy. Conclusion: The intradecidual sign reliably excludes the presence of an ectopic pregnancy. The sensitivity for diagnosis of an intrauterine pregnancy increases when human chorionic gonadotropin levels are equal to or greater than 2,000 mIU/mL or the mean sac diameter is equal to or greater than 3 mm. It is of utmost importance to visualize this sign on multiple views with an unchanging appearance. abstract_id: PUBMED:3532191 Intradecidual sign: a US criterion of early intrauterine pregnancy. The uterine cavity appears on sonograms as a linear echo, which is usually visible during early pregnancy and remains straight until the eighth to ninth week of gestation. The early gestational sac is not enveloped by two layers of decidua, as suggested by descriptions of the double decidual sac sign; the sac (or echogenic area of early implantation) is actually located within a markedly thickened decidua on one side of the uterine cavity. The combination of these two sonographic characteristics is called the "intradecidual sign." An early implantation of 25 days gestational age can be detected by the presence of the intradecidual sign, which is sooner than a gestational sac can be seen. The implantation site can also be located by means of the intradecidual sign. In a study of 36 patients with early intrauterine pregnancy and five with ectopic pregnancy, the intradecidual sign was more sensitive (91.7% vs. 63.9%) and specific (100% vs. 60%) than the double decidual sac sign in the detection of early intrauterine pregnancy. abstract_id: PUBMED:23804343 Double sac sign and intradecidual sign in early pregnancy: interobserver reliability and frequency of occurrence. 
Objectives: To assess the interobserver agreement, frequency of occurrence, and prognostic importance of the double sac sign (DSS), intradecidual sign (IDS), and other sonographic findings in early intrauterine pregnancies. Methods: We retrospectively identified all sonograms obtained between January 1, 2006, and December 31, 2011, in which: (1) the scan demonstrated an intrauterine fluid collection without a yolk sac or embryo; (2) a follow-up scan confirmed an intrauterine pregnancy; and (3) the first-trimester outcome was known. Each coinvestigator characterized the 199 study sonograms as demonstrating or not demonstrating a DSS or an IDS, based on judgment about whether the scan met published criteria defining these signs. Results: Interobserver agreement was poor for the DSS (κ = 0.24) and IDS (κ = 0.23). Scans frequently demonstrated neither sign: 150 cases (75.4%) if we considered a sign to be present when both investigators graded it as present and 69 cases (34.7%) using the looser criterion that either graded it as present. The presence of a DSS or an IDS was unrelated to the β-human chorionic gonadotropin (β-hCG) value (P > .05, t test, all comparisons). An inner echogenic ring was present in 158 cases (79.4%), and the decidua was brighter peripherally than centrally in 102 (51.3%). The first-trimester outcome was unrelated to the presence of a DSS or an IDS, presence of an inner echogenic ring, or decidual appearance (P > .05, χ2, all comparisons). Conclusions: The sonographic appearance of early gestational sacs, before visualization of a yolk sac or embryo, is highly variable. The DSS and IDS are often absent; there is poor interobserver agreement regarding these signs; and the prognosis is unrelated to their presence or absence. A round or oval intrauterine fluid collection in a woman with positive β-hCG should be treated as a gestational sac until proven otherwise, regardless of whether it demonstrates a DSS or an IDS. abstract_id: PUBMED:25393076 Accuracy of first-trimester ultrasound in diagnosis of intrauterine pregnancy prior to visualization of the yolk sac: a systematic review and meta-analysis. Objectives: To evaluate the diagnostic accuracy of ultrasound in predicting the location of an intrauterine pregnancy before visualization of the yolk sac is possible. Methods: This was a systematic review conducted in accordance with the PRISMA statement and registered with PROSPERO. We searched MEDLINE, EMBASE and The Cochrane Library for relevant citations. Studies were selected in a two-stage process and their data extracted by two reviewers. Accuracy measures were calculated for each ultrasound sign, i.e. gestational sac, double decidual sac sign, intradecidual sign, chorionic rim sign and yolk sac. Individual study estimates were plotted in summary receiver-operating characteristic curves and forest plots for examination of heterogeneity. The quality of included studies was assessed. Results: Seventeen studies including 2564 women were selected from 19 959 potential papers. Following meta-analysis, the presence of a gestational sac on ultrasound examination was found to predict an intrauterine pregnancy with a sensitivity of 52.8% (95% CI, 38.2-66.9%) and specificity of 97.6% (95% CI, 94.3-99.0%).
The corresponding sensitivities and specificities of the double decidual sac sign, intradecidual sign, chorionic rim sign and yolk sac were: 81.8% (95% CI, 68.1-90.4%) and 97.3% (95% CI, 76.1-99.8%); 66.1% (95% CI, 58.9-72.8%) and 100% (95% CI, 91.0-100%); 79.9% (95% CI, 73.0-85.7%) and 97.1% (95% CI, 89.9-99.6%); and 42.2% (95% CI, 27.7-57.9%) and 100% (95% CI, 54.1-100%), respectively. Conclusion: Visualization of a gestational sac, double decidual sac sign, intradecidual sign or chorionic rim sign increases the probability of an intrauterine pregnancy but is not as accurate for diagnosis as the detection of the yolk sac. However, the findings were limited by the small number and poor quality of the studies included and heterogeneity in the index test and reference standard. abstract_id: PUBMED:26656544 NEW APPROACHES TO THE EARLY DIAGNOSIS OF INTRAUTERINE VIRAL INFECTIONS IN NEWBORNS. The importance of intrauterine viral infections in newborn pathology remains incompletely understood, as early verification of the etiologic pathogen remains a problem. The aim of the study was to develop diagnostic criteria for intrauterine viral infections by introducing rapid diagnostic methods and studying perinatal factors, medical history, clinical course and laboratory data. Clinical and laboratory examinations were performed on 834 mothers and their newborns with suspected intrauterine infection. We observed 224 children with verified intrauterine viral infection and studied the history of perinatal risk factors, clinical features and laboratory data. The study showed that mixed infections were the predominant form (85.7%). On the basis of statistical methods, diagnostic criteria and an algorithm for the differential diagnosis of all possible variants of infection were developed. Testing of the diagnostic algorithm showed high reliability of the diagnostic criteria, which allows them to be recommended for clinical use. abstract_id: PUBMED:32045016 "Pseudogestational Sac" and Other 1980s-Era Concepts in Early First-Trimester Ultrasound: Are They Still Relevant Today? Objectives: To determine whether an intrauterine round or oval fluid collection ("saclike structure") can prove to be either an intrauterine pregnancy or intrauterine fluid in conjunction with an ectopic pregnancy (sometimes termed "pseudogestational sac") and whether ultrasound features, including the presence or absence of an echogenic rim, "double sac sign" (DSS), or "intradecidual sign" (IDS), are helpful for establishing the diagnosis or predicting the prognosis. Methods: We identified all sonograms obtained from women with positive serum human chorionic gonadotropin results at our institution between January 1, 2012, and June 30, 2018, meeting the following criteria: presence of an intrauterine saclike structure without a yolk sac or embryo; no extraovarian adnexal mass; and follow-up information identifying the location of the pregnancy as intrauterine or ectopic. Study authors reviewed sonograms in all cases and recorded the following information: presence or absence of each of an echogenic rim around the collection, a DSS, and an IDS, as well as the mean sac diameter. The indications for the initial ultrasound examinations were recorded. Results: A total of 649 sonograms met the inclusion criteria. Of these, 598 fluid collections showed an echogenic rim, 182 a DSS, and 347 an IDS (findings not mutually exclusive). In all 649 cases, a subsequent sonogram or other clinical follow-up confirmed that the patient had an intrauterine pregnancy.
That is, none of the fluid collections proved to be a pseudogestational sac. In total, 41.2% were live at the end of the first trimester, and 58.8% miscarried. The prognosis was better in cases with, compared to without, an IDS (P = .01, χ2), but no ultrasound feature was clinically useful for ruling in or excluding a good prognosis. Conclusions: In a woman with positive human chorionic gonadotropin results and no extraovarian adnexal mass, the ultrasound finding of an intrauterine saclike structure is virtually certain to be a gestational sac. Ultrasound features of the structure are of no diagnostic or clinically useful prognostic value. Concepts introduced 30 to 40 years ago when ultrasound equipment had far lower resolution than currently, including a DSS, an IDS, and a pseudogestational sac, have no role today in assessing early pregnancy. abstract_id: PUBMED:8503990 T-cell receptors are expressed but down-regulated on intradecidual T lymphocytes. Problem: Dietl et al. considered "intradecidual T cell tolerance towards fetal antigens" with their observation that intradecidual T cells lack immunohistochemically detectable amounts of T cell receptor (TCR) molecules while expressing normal amounts of CD3 molecules during early normal pregnancy (Am J Reprod Immunol. 1990; 24:33-36). Method: To reevaluate these findings we examined the TCR and CD3 expression on intradecidual T cells using flow cytometry. Conclusions: In our study, all intradecidual CD3+ T cells expressed either TCR alpha beta or TCR gamma delta. However, the expression of the CD3/TCR complex on intradecidual T cells was down-regulated. The level of CD3/TCR complex expression on intradecidual T cells was about two-thirds of that on peripheral blood T cells. Further, the proportions of alpha beta+ and gamma delta+ cells in CD3+ cells did not significantly differ between decidua and peripheral blood. abstract_id: PUBMED:34734072 Ovarian pregnancy rupture following ovulation induction and intrauterine insemination: A case report. Background: Ovarian pregnancy after assisted reproductive technology treatment has rarely been reported; ovarian pregnancy following intrauterine insemination (IUI) is even rarer, and only nine cases have previously been reported. Case Summary: We report a case of ovarian pregnancy rupture after ovulation induction and IUI. The patient presented with bilateral lower abdominal pain and was referred to the emergency department. Ultrasound examination revealed ovarian pregnancy and intraperitoneal bleeding. Laparoscopy revealed an ovarian pregnancy with hemoperitoneum, which was subsequently removed. Pelvic adhesions were detected intraoperatively, which were treated immediately. The patient spontaneously conceived an intrauterine pregnancy 3 mo later, which was ongoing at the time of writing this study. Conclusion: Close attention should be paid to any history of pelvic inflammatory disease before commencing IUI treatment, and patients with such a history should be closely followed up after IUI. Early measurement of serum β-human chorionic gonadotropin levels and ultrasonic examination are essential for timely diagnosis of ovarian pregnancy after ovulation induction and IUI to avoid more serious complications. abstract_id: PUBMED:1018498 Perspectives of intrauterine contraception. The author describes the current research in intrauterine contraception. It is directed in two ways: to improve the design of inert devices and to construct an optimal bioactive IUD.
The design of the inert IUD should be adapted as much as possible to anatomical and functional conditions of the uterus. In developing new bioactive devices, work is aimed at finding the optimal dosage of metals and hormones as well as new materials acting as carriers of bioactive substances. With new constructions and the determination of the optimal dosage of antifertility and antifibrinolytic substances it is hoped to reduce three main insufficiencies of intrauterine contraception: bleeding, expulsion and pregnancy. Answer: The intradecidual sign has been studied to determine its effectiveness in diagnosing early intrauterine pregnancy. However, the results across different studies are not entirely consistent. One study found that the intradecidual sign does not appear to be sensitive or specific in the diagnosis of an early intrauterine pregnancy. The sensitivity for diagnosis ranged from 34% to 66%, and specificity ranged from 55% to 73% among four reviewers. The study concluded that when an intrauterine fluid collection is present without an embryo or yolk sac, a follow-up sonogram should be obtained unless contraindicated clinically (PUBMED:9280240). Another study suggested that the intradecidual sign reliably excludes the presence of an ectopic pregnancy and that its sensitivity for diagnosis of an intrauterine pregnancy increases when human chorionic gonadotropin levels are equal to or greater than 2,000 mIU/mL or the mean sac diameter is equal to or greater than 3 mm (PUBMED:15333362). A different study reported that the intradecidual sign was more sensitive (91.7%) and specific (100%) than the double decidual sac sign in the detection of early intrauterine pregnancy (PUBMED:3532191). However, another study highlighted poor interobserver agreement for the intradecidual sign (κ = 0.23) and found that the presence of the sign was unrelated to the first-trimester outcome, suggesting that the sonographic appearance of early gestational sacs is highly variable and the prognosis is unrelated to the presence or absence of the intradecidual sign (PUBMED:23804343). A systematic review and meta-analysis found that the intradecidual sign had a sensitivity of 66.1% and a specificity of 100% for predicting an intrauterine pregnancy, but it also noted limitations due to the small number and poor quality of the studies included and heterogeneity in the index test and reference standard (PUBMED:25393076). Lastly, a study concluded that concepts such as the intradecidual sign, which were introduced decades ago when ultrasound equipment had lower resolution, have no role today in assessing early pregnancy due to the high variability in sonographic appearances and the lack of diagnostic or clinically useful prognostic value (PUBMED:32045016).
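Note: The sensitivity, specificity, and predictive values weighed in the answer above all derive from a 2x2 table of test result against true status. A short Python sketch of that arithmetic follows; as a worked check it uses the one study in this collection whose counts are fully reconstructible, the sliding-sign study quoted earlier (PUBMED:34058401), treating an absent sliding sign as the "positive" test for dense adhesions.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard 2x2 diagnostic accuracy measures
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# From PUBMED:34058401: 8 of 15 women with dense adhesions had an absent
# sliding sign (TP=8, FN=7); 78 of 97 women without adhesions had a present
# sign (TN=78, FP=19).
for name, value in diagnostic_metrics(tp=8, fp=19, fn=7, tn=78).items():
    print(f"{name}: {value:.1%}")
# Reproduces the reported 53.3%, 80.4%, 29.6% and 91.8%.

The same identities explain the pattern in PUBMED:9280240: with 91 intrauterine and only 11 ectopic pregnancies, the high prevalence of intrauterine pregnancy pushes the PPV up (91-93%) even though sensitivity is mediocre, while the NPV stays low (12-16%).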
Instruction: Are childhood socio-economic circumstances related to coronary heart disease risk? Abstracts: abstract_id: PUBMED:17440028 Are childhood socio-economic circumstances related to coronary heart disease risk? Findings from a population-based study of older men. Background: The independent influence of childhood social circumstances on health in later life remains uncertain. We examined the extent to which childhood socio-economic circumstances are related to the risk of coronary heart disease (CHD) in older British men, taking account of adult social class and behavioural risk factors. Methods: A socio-economically representative sample of 5552 British men (52-74 years) with retrospective assessment of childhood socio-economic circumstances (father's occupation and childhood household amenities) was followed up for CHD (fatal and non-fatal) for 12 years. Results: Men whose childhood social class was manual had an increased hazard ratio (HR 1.34; 95% CI 1.11-1.63); this effect was diminished when adjusted for adult social class and adult behavioural risk factors (cigarette smoking, alcohol, physical activity and body weight) (HR 1.19; 95% CI 0.97-1.46). Men whose family did not own a car in their childhood were at increased CHD risk even after adjustments for adult social class and behaviours (HR 1.35, 95% CI 1.04-1.75). Men with combined exposure to both childhood and adult manual social class had the highest risk of CHD (HR 1.51; 95% CI 1.19-1.91); this was substantially reduced by adjustment for adult behavioural risk factors (adjusted HR 1.28; 95% CI 0.99-1.65). Conclusions: Less affluent socio-economic conditions in childhood may have a modest persisting influence on risk of CHD in later life. abstract_id: PUBMED:27325946 Socio-economic disparities in heart disease in the Republic of Lebanon: findings from a population-based study. Background: Socio-economic inequalities in the incidence of heart disease exist in developed countries. No data are available on the relation between heart disease and socio-economic status in Arab countries. This study examined the relation between heart disease and socio-economic status (income and education) among adults in Lebanon. Methods: The study examined data from 7879 respondents aged 40 years or more in the 2004 Lebanese Survey of Family Health. The dependent variable was reported heart disease. The main independent variables were education and household income. The analysis adjusted for the classic risk factors of coronary heart disease (CHD), namely smoking, diabetes mellitus, hypertension, hypercholesterolaemia, age, sex and other socio-demographic variables. Bivariate associations were calculated using χ2 tests. Adjusted ORs for heart disease were calculated using multivariate logistic regression models. Results: 7.5% of respondents reported cardiac disease, 15.2% hypertension, 10.1% diabetes, 3.2% hypercholesterolaemia and 47.5% smoked at the time or previously. After adjustment for the classic risk factors of CHD, reported heart disease was inversely associated with education (OR=1.53, 95% CI 1.15 to 2.04, for those with less than elementary and OR=1.34, 95% CI 1.00 to 1.80, for those with elementary education). Reported heart disease was also inversely associated with income (OR=1.40, 95% CI 1.09 to 1.80, for those in the lowest income bracket). Past smoking, hypertension, age, male sex, marriage and residence in Beirut were all significantly associated with reported cardiac disease.
Conclusions: In Lebanon, adults with lower income and educational levels had a higher prevalence of heart disease independent of the risk factors of CHD. abstract_id: PUBMED:21051471 Childhood socio-economic position and risk of coronary heart disease in middle age: a study of 49,321 male conscripts. Background: Poor social circumstances in childhood are associated with increased risk of coronary heart disease (CHD). In previous studies, social circumstances and risk factors in adulthood have been suggested to explain this association. In the present study, we included potential explanatory factors from childhood and adolescence. Methods: We investigated the association between childhood socio-economic position (SEP) and CHD in middle age among 49,321 Swedish males, born during 1949-51, who were conscripted for military service at 18-20 years of age. Register-based data on childhood social circumstances, educational attainment and occupational class in adulthood were used in combination with information on cognitive ability, smoking, body mass index and body height in late adolescence obtained from a compulsory conscription examination. Incidence of CHD from 1991 to 2007 (between 40 and 58 years of age) was followed in national registers. Results: We demonstrated an inverse association between childhood SEP and CHD in middle age: among men with the lowest childhood SEP the crude hazard ratio of CHD was 1.47 (95% CI = 1.30-1.67). Adjustment for crowded housing in childhood, body height, cognitive ability, smoking and BMI in late adolescence attenuated relative risks of CHD considerably. Additional adjustment for educational level had a further, although limited, attenuating effect on associations, but additional adjustment for occupational class had no such effect. Conclusions: Results showed that social, cognitive and behavioural factors evident prior to adulthood may be of greater importance in explaining the association between childhood SEP and CHD later in life than socio-economic indicators in adulthood. abstract_id: PUBMED:24889280 Impact of socio-economic factors on the long-term effectiveness of antihypertensive treatment with an angiotensin II receptor blocker: an observational study. Objective: To investigate the role of socio-economic factors on the therapeutic effectiveness of and therapeutic adherence to the angiotensin II receptor blocker (ARB) olmesartan (OM) alone or in combination with hydrochlorothiazide in the treatment of arterial hypertension. Research Design And Methods: In a multi-center, open-label, prospective and long-term observational study, data from hypertensive patients treated with OM were analyzed at baseline, month 3 and month 12 within the context of patients' socio-economic status (SES), determined using pre-defined criteria by physicians in outpatient practices and including multivariate analysis. Results: Overall, 7724 patients were assigned to three subgroups representing low, medium and high socio-economic status. Baseline conditions differed significantly between the subgroups. Patients of low SES had worse nutritional habits, less physical activity and more concomitant medication compared to patients of high SES. Cardiovascular risk factors were more common in the low SES group as were concomitant diseases such as heart failure, coronary heart disease, atherosclerosis and renal failure. OM therapy led to a significant decrease in blood pressure (23.0/11.6 mmHg) in all patients. 
The blood pressure target of <140/90 mmHg was achieved in about 70% of the documented population. Effectiveness was comparable between patients with low, medium or high SES. Treatment adherence was high in the overall population, with only minor differences between the subgroups. In total, the incidence of adverse events (AEs) was 1.6%; AEs were documented in 98 patients (1.3%) during the course of the study. Of this total number only 1.0% was related to the drug, matching the percentage expressed in the Summary of Product Characteristics (SmPC). Conclusions: The ARB OM is effective and well tolerated in all patients, irrespective of their socio-economic status. The risk status and the established cardiovascular disease of hypertensive patients are strongly influenced by the SES. To validate these interesting data a randomized controlled trial is needed. abstract_id: PUBMED:4049020 Socio-economic conditions in childhood and mortality and morbidity caused by coronary heart disease in adulthood in rural Finland. In this study, the hypothesis that bad socio-economic conditions in childhood may increase the probability of coronary heart disease in adulthood is examined. The study is based partly on the data of the East-West Study in Finland, which is part of the Seven Countries Study. The study began with 823 men in Eastern Finland and 888 men in Western Finland in 1959. The mortality and morbidity of the cohorts were followed from 1959 to 1974. Risk factors were measured in medical examinations in 1959, 1964, 1969 and 1974. Parents of those included in the sample were traced by using parish registers from 1900 to 1919. Over 90% of those in the East-West Study were found. The parents' socio-economic position (socio-economic conditions in childhood) was determined. According to our findings, the relative risks of coronary death, myocardial infarction and ischemic heart disease are systematically increased for those born landless in East Finland. Variables partly explaining the increased risk were body height and smoking. The effect of cholesterol was negligible. abstract_id: PUBMED:34058892 Cardiovascular disease in people born to unmarried mothers in two historical periods: The Helsinki Birth Cohort Study 1934-1944. Aims: Socio-economic conditions in early life are important contributors to cardiovascular disease - the leading cause of mortality globally - in later life. We studied coronary heart disease (CHD) and stroke in adulthood among people born out of wedlock in two historical periods: before and during World War II in Finland. Methods: We compared offspring born out of wedlock before (1934-1939) and during (1940-1944) World War II with the offspring of married mothers in the Helsinki Birth Cohort Study. The war affected the position of unmarried mothers in society. We followed the study subjects from 1971 to 2014 and identified deaths and hospital admissions from CHD and stroke. Data were analysed using a Cox regression, adjusting for other childhood and adulthood socio-economic circumstances. Results: The rate of out-of-wedlock births was 240/4052 (5.9%) before World War II and 397/9197 (4.3%) during World War II. Among those born before World War II, out-of-wedlock birth was associated with an increased risk of stroke (hazard ratio (HR)=1.44; 95% confidence interval (CI) 1.00-2.07) and CHD (HR=1.37; 95% CI 1.02-1.86).
Among those born out of wedlock during World War II, the risks of stroke (HR=0.89; 95% CI 0.58-1.36) and CHD (HR=0.70; 95% CI 0.48-1.03) were similar to those observed for the offspring of married mothers. The p-values for the unmarried × World War II interaction were p=0.015 for stroke and p=0.003 for CHD. Conclusions: In a society in which marriage is normative, being born out of wedlock is an important predictor of lifelong health disadvantage. However, this may change rapidly when societal circumstances change, such as during a war. abstract_id: PUBMED:11873093 Men of low socio-economic and educational level possess pronounced deficient knowledge about the risk factors related to coronary heart disease. Background: The aim of the present study was to determine whether certain background factors such as gender, education and social status were associated with an individual's knowledge of coronary heart disease (CHD) risk factors. Design: A questionnaire survey. Methods: A questionnaire survey designed to evaluate participants' general knowledge about the risk factors for CHD was used. A total of 1011 50-year-old individuals (457 men and 554 women) from 34 Health Care Centers participated in the study. Results: Knowledge about CHD risk factors was significantly poorer in men than in women. Low education and low socio-economic status were other factors related to poor knowledge of CHD risk factors. Conclusion: This study showed that men with low educational level and low socio-economic status had inadequate information about the risk factors involved in CHD. abstract_id: PUBMED:14596732 Perceived stress and coronary heart disease risk factors: the contribution of socio-economic position. Objectives: The aim of this study was to explore the relationship between risk factors for coronary heart disease (CHD) and perceived stress, adjusted for socio-economic position. Design: Cross-sectional analysis of CHD risk factors, perceived stress and socio-economic position. Method: A cohort of employed Scottish men (N = 5848) and women (N = 984) completed a questionnaire and attended a physical examination. Results: Higher socio-economic groups registered higher perceived stress scores. Perceived stress was associated with the following CHD risk factors in the expected direction: high plasma cholesterol, little recreational exercise, cigarette smoking, and high alcohol consumption. Contrary to expectations, stress was related negatively to high diastolic blood pressure, body mass index (BMI) and low forced expiratory volume. Correction for socio-economic position tended to abolish the associations between stress and physiological risk factors; the associations between stress and behavioural risk factors withstood such correction. The residual patterns of associations between perceived stress and CHD risk were broadly similar for men and women. A lower BMI, a greater number of cigarettes smoked, and greater alcohol consumption were associated with higher levels of perceived stress for both sexes. Lower levels of recreational exercise were associated with higher levels of stress for men only. Conclusions: Self-reported stress is related to health-related behaviours and to physiological CHD risk factors. The direction of the association with physiological risk was often contrary to expectation and appeared to be largely due to confounding by socio-economic position. In contrast, the association with health-related behaviours was in the expected direction and was largely independent of such confounding.
abstract_id: PUBMED:6958184 Socio-economic status as a coronary risk factor: the Oslo study. The association between socioeconomic status, measured by a combination of income and education, and CHD mortality has been studied in a cohort of 40-49 year old Oslo men. Socio-economic status was significantly associated with CHD mortality. However, the lowest CHD mortality was found in social status Group III (middle class) and this could not be explained by the risk factor gradients seen among those studied. Although the number of fatalities is small (68 CHD deaths during 4.5 years) socio-economic status seems to be independently associated with coronary risk after adjusting for serum cholesterol, systolic blood pressure and cigarette smoking. abstract_id: PUBMED:16257232 Systematic review of the influence of childhood socioeconomic circumstances on risk for cardiovascular disease in adulthood. Purpose: Adverse socioeconomic circumstances in childhood may confer a greater risk for adult cardiovascular disease (CVD). The purpose of this review is to systematically evaluate evidence for an association between socioeconomic circumstances during childhood and specific CVD subtypes, independent of adult socioeconomic conditions. Methods: We systematically retrieved individual-level studies of morbidity and mortality from CVD and specific CVD subtypes linked to early life influences, including coronary heart disease (CHD), ischemic and hemorrhagic stroke, peripheral vascular disease, markers of atherosclerosis (carotid intima-media thickness and stenosis), and rheumatic heart disease. Indicators of socioeconomic position in childhood varied, although most studies relied on father's occupation. Results: We located 40 studies (24 prospective, 11 case-control, and 5 cross-sectional) reported in 50 publications. Thirty-one studies (19 prospective, 7 case-control, and all 5 cross-sectional) found a robust inverse association between childhood circumstances and CVD risk, although findings sometimes varied among specific outcomes, socioeconomic measures, and sex. Case-control studies reported mixed results. The association was stronger for stroke and, in particular, hemorrhagic stroke, than for CHD. Childhood socioeconomic conditions remained important predictors of CVD, even in younger cohorts. Conclusion: Childhood and adulthood socioeconomic circumstances are important determinants of CVD risk. The specific contribution of childhood and adulthood characteristics varies across different CVD subtypes. Disease-specific mechanisms are likely to explain the childhood origins of these adult health inequalities. Answer: Yes, childhood socio-economic circumstances are related to coronary heart disease (CHD) risk. Multiple studies have found an association between less affluent socio-economic conditions in childhood and an increased risk of CHD in later life. For instance, a study of British men found that those whose childhood social class was manual had an increased hazard ratio for CHD, and this effect persisted even after adjusting for adult social class and behavioral risk factors (PUBMED:17440028). Similarly, a study of Swedish males demonstrated an inverse association between childhood socio-economic position and CHD in middle age, with social, cognitive, and behavioral factors from childhood and adolescence being important in explaining this association (PUBMED:21051471). 
In Lebanon, adults with lower income and educational levels had a higher prevalence of heart disease independent of classic CHD risk factors (PUBMED:27325946). Research in rural Finland also indicated that bad socio-economic conditions in childhood may increase the probability of coronary heart disease in adulthood, with factors such as body height and smoking partly explaining the increased risk (PUBMED:4049020). Additionally, a systematic review confirmed that adverse socioeconomic circumstances in childhood may confer a greater risk for adult cardiovascular disease, including CHD (PUBMED:16257232). Moreover, the Helsinki Birth Cohort Study found that being born out of wedlock, which is often associated with lower socio-economic status, was an important predictor of lifelong health disadvantage, including increased risks of stroke and CHD, although societal changes such as those during World War II could modify these risks (PUBMED:34058892). Lastly, men of low socio-economic and educational level were found to possess deficient knowledge about CHD risk factors, which could contribute to higher disease risk (PUBMED:11873093). In conclusion, there is substantial evidence to suggest that childhood socio-economic circumstances have a persisting influence on the risk of coronary heart disease in adulthood.
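As a practical aside, several of the studies cited above (e.g., the Helsinki Birth Cohort Study, PUBMED:34058892) estimate their hazard ratios with Cox proportional-hazards regression. The sketch below shows a minimal version of such a model using the lifelines library in Python; the column names and the toy data are hypothetical illustrations, not the study data.

```python
# Minimal sketch of a Cox proportional-hazards analysis of the kind used in
# the Helsinki Birth Cohort abstract (out-of-wedlock birth vs CHD risk).
# All data and column names below are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years":      [12.5, 20.0, 28.7, 35.2, 41.0, 43.0, 15.3, 30.1, 38.4, 25.0],
    "chd_event":           [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],  # 1 = CHD death/admission
    "born_out_of_wedlock": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],  # exposure of interest
    "childhood_ses_low":   [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],  # adjustment covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="chd_event")

# exp(coef) is the hazard ratio, reported with its 95% confidence interval
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```

With real cohort data, additional covariates (adult SES, birth period, etc.) would be added as columns, exactly as the abstract describes adjusting for childhood and adulthood socio-economic circumstances.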
Instruction: Assessing vitamin D nutritional status: Is capillary blood adequate? Abstracts: abstract_id: PUBMED:27018135 Assessing vitamin D nutritional status: Is capillary blood adequate? Background: Venous blood is the usual sample for measuring various biomarkers, including 25-hydroxyvitamin D (25OHD). However, it can prove challenging in infants and young children. Hence, finger-prick capillary collection is an alternative, being a relatively simple procedure perceived to be less invasive. We elected to validate the use of capillary blood sampling for 25OHD quantification by liquid chromatography tandem-mass spectrometry (LC/MS-MS). Methods: Venous and capillary blood samples were simultaneously collected from 15 preschool-aged children with asthma 10 days after receiving 100,000 IU of vitamin D3 or placebo and 20 apparently healthy adult volunteers. 25OHD was measured by an in-house LC/MS-MS method. Results: The venous 25OHD values varied between 23 and 255 nmol/l. The venous and capillary blood total 25OHD concentrations highly correlated (r2 = 0.9963). The mean difference (bias) of capillary blood 25OHD compared to venous blood was 2.0 (95% CI: -7.5, 11.5) nmol/l. Conclusion: Our study demonstrates excellent agreement with no evidence of a clinically important bias between venous and capillary serum 25OHD concentrations measured by LC/MS-MS over a wide range of values. Under those conditions, capillary blood is therefore adequate for the measurement of 25OHD. abstract_id: PUBMED:29173483 Assessing the boron nutritional status by analyzing its cumulative frequency distribution in the hair and whole blood. Boron is a non-essential ubiquitous trace element in the human body. The aim of this study was to assess boron nutritional status by analyzing boron frequency distribution in the long-term biological indicator tissue of hair and the short-term biological indicator of whole blood. Hair samples were analyzed in 727 apparently healthy subjects (263 ♂ and 464 ♀) and whole blood boron was analyzed in a random subsample of them (80 ♂ and 152 ♀). Samples were analyzed by ICP-MS at the Center for Biotic Medicine, Moscow, Russia. The adequate reference range for hair boron concentration was (μg∙g-1) 0.771-6.510 for men and distinctly lower, 0.472-3.89, for women; there was no detectable difference in the adequate reference range for whole blood boron between men (0.020-0.078) and women (0.019-0.062). Boron may play an essential role in the metabolism of the connective tissue of the biological bone matrix. abstract_id: PUBMED:18706163 Investigation on nutritional intakes for hospitalized children with blood disease. Objective: To investigate the diet and nutritional status of hospitalized children with blood disease in order to provide nutritional guidelines. Methods: The patients' daily dietary intakes, including breakfast, lunch, dinner and additional meals, were recorded in detail for seven consecutive days. The intake amount of various nutrients was calculated using the dietary database. Results: The majority of children with blood disease showed inadequate intakes of calories [mean 1825.81 kcal/d, 73.62% of the recommended intake (RNI)] and protein (mean 67.68 g/d, 81.34% of RNI). Intakes of vitamin E and riboflavin were adequate, but intakes of vitamin A, thiamine and vitamin C (66.67%, 77.78% and 69.89% of RNI, respectively) were inadequate.
Iron and selenium intakes were adequate, but calcium and zinc intakes (41.11% and 56.21% of RNI, respectively) were grossly inadequate. Conclusions: Hospitalized children with blood disease had decreased dietary intakes of calories, protein, vitamin A, vitamin C, thiamin, calcium and zinc. The dietary pattern and nutritional intake need to be improved. abstract_id: PUBMED:29955625 The roles of vitamin D and dietary calcium in nutritional rickets. The etiology and pathogenesis of nutritional rickets are becoming progressively clearer. Vitamin D deficiency has generally been considered the major or only player in the pathogenesis of nutritional rickets. However, recent research into calcium deficiency has now provided clinicians with reasons to investigate and manage patients with nutritional rickets more appropriately. The important question when assessing cases of nutritional rickets is: "Is it calcium or vitamin D deficiency or both that play a major role in the pathogenesis of the disease?" The case presentation in this review highlights the risk factors, clinical presentation and pathophysiology of nutritional rickets in a young South African black child from a semi-urban area in Johannesburg, a city with abundant sunshine throughout the year. abstract_id: PUBMED:1580429 Materno-fetal nutritional status related to vitamin E. There are few papers about the placental transfer of vitamin E in human beings. It is known that umbilical cord vitamin E levels are significantly lower than in mother's plasma. The aim of this study was to analyze the relationship in vitamin E nutritional state between newborn infants and their mothers. The plasma levels of vitamin E and lipids at birth have been measured by using spectrophotometric methods. The statistical analysis (Student's "t" test for paired data) shows that the plasma levels of vitamin E in the newborn infants are significantly lower than their mothers', but the nutritional indices (vitamin E/phospholipids and vitamin E/total lipids) show no statistical differences. There is a close correlation between umbilical cord vitamin E concentration and vitamin E levels in the mother's plasma. We have demonstrated that the vitamin E nutritional state of term newborn infants is equivalent to that of their mothers. On the other hand, nutritional indices such as vitamin E/phospholipids and vitamin E/total lipids are better than vitamin E levels alone for evaluating tocopherol nutritional status. abstract_id: PUBMED:2087176 Biochemical and histological methodologies for assessing vitamin A status in human populations. In recent years, new biochemical and histological methodologies have been developed for assessing vitamin A nutritional status in humans at subclinical levels of nutriture. Insensitive static blood levels no longer are the only practical assessment parameter. Some of the newer functional methodologies require additional testing of their sensitivity and specificity under a variety of conditions existing in human populations and that frequently are associated with an inadequate vitamin A status. Some of these conditions could confound the interpretation when only a single assessment method is applied.
The current vitamin D 'adequate intake' (AI) for 0-6-month-old infants is 10 µg/d, corresponding with a human milk antirachitic activity (ARA) of 513 IU/l. We were particularly interested to see whether milk ARA of mothers with lifetime abundant sunlight exposure reaches the AI. We measured milk ARA of lactating mothers with different cultural backgrounds, living at different latitudes. Mature milk was derived from 181 lactating women in the Netherlands, Curaçao, Vietnam, Malaysia and Tanzania. Milk ARA and plasma 25-hydroxyvitamin D (25(OH)D) were analysed by liquid-chromatography-MS/MS; milk fatty acids were analysed by GC-flame ionisation detector (FID). None of the mothers reached the milk vitamin D AI. Milk ARA (n; median; range) were as follows: Netherlands (n 9; 46 IU/l; 3-51), Curaçao (n 10; 31 IU/l; 5-113), Vietnam: Halong Bay (n 20; 58 IU/l; 23-110), Phu Tho (n 22; 28 IU/l; 1-62), Tien Giang (n 20; 63 IU/l; 26-247), Ho-Chi-Minh-City (n 18; 49 IU/l; 24-116), Hanoi (n 21; 37 IU/l; 11-118), Malaysia-Kuala Lumpur (n 20; 14 IU/l; 1-46) and Tanzania-Ukerewe (n 21; 77 IU/l; 12-232) and Maasai (n 20; 88 IU/l; 43-189). We collected blood samples of these lactating women in Curaçao, Vietnam and from Tanzania-Ukerewe, and found that 33·3 % had plasma 25(OH)D levels between 80 and 249·9 nmol/l, 47·3 % between 50 and 79·9 nmol/l and 19·4 % between 25 and 49·9 nmol/l. Milk ARA correlated positively with maternal plasma 25(OH)D (range 27-132 nmol/l, r 0·40) and milk EPA+DHA (0·1-3·1 g%, r 0·20), and negatively with latitude (2°S-53°N, r -0·21). Milk ARA of mothers with lifetime abundant sunlight exposure is not even close to the vitamin D AI for 0-6-month-old infants. Our data may point at the importance of adequate fetal vitamin D stores. abstract_id: PUBMED:23808446 Routine supplementation does not warrant the nutritional status of vitamin d adequate after gastric bypass Roux-en-Y. Unlabelled: Bariatric surgery can lead to nutritional deficiencies, including those related to bone loss. The aim of this study was to evaluate serum concentrations of calcium, vitamin D and PTH in obese adults before and six months after Roux-en-Y gastric bypass surgery (RYGB) and evaluate the doses of calcium and vitamin D supplementation after surgery. Methods: Retrospective longitudinal study of adult patients of both sexes undergoing RYGB. We obtained data on weight, height, BMI and serum concentrations of 25-hydroxyvitamin D, ionized calcium and PTH. Following surgery, patients received daily dietary supplementation of 500 mg calcium carbonate and 400 IU vitamin D. Results: We studied 56 women and 27 men. Preoperative serum concentrations of vitamin D were inadequate in 45% of women and 37% of men, while in the postoperative period 91% of women and 85% of men had deficiency of this vitamin. No change in serum calcium was found before and after surgery. Serum PTH preoperatively remained adequate in 89% of individuals of both sexes. After surgery, serum concentrations remained adequate in 89% of women and 83% of men evaluated. Conclusion: Obesity appears to be a risk factor for the development of vitamin D deficiency. The results show that routine postoperative supplementation was unable to treat or prevent vitamin D deficiency in obese adults undergoing RYGB. abstract_id: PUBMED:18945280 Clinical manifestations of infants with nutritional vitamin B deficiency due to maternal dietary deficiency.
Aim: In developing countries, nutritional vitamin B12 deficiency in infants due to maternal diet without adequate protein of animal origin has some characteristic clinical features. In this study, haematological, neurological and gastrointestinal characteristics of nutritional vitamin B12 deficiency are presented. Methods: Hospital records of 27 infants diagnosed in a paediatric haematology unit between 2000 and 2008 were evaluated retrospectively. Results: The median age at diagnosis was 10.5 months (3-24 months). All the infants were exclusively breast fed and they presented with severe nonspecific manifestations, such as weakness, failure to thrive, refusal to wean, vomiting, developmental delay, irritability and tremor in addition to megaloblastic anaemia. The diagnosis was confirmed by complete blood counts, blood and marrow smears and serum vitamin B12 and folic acid levels. The median haemoglobin level was 6.4 g/dL (3.1-10.6) and mean corpuscular volume (MCV) was 96.8 fL (73-112.3). Some patients also had thrombocytopaenia and neutropaenia. All the infants showed clinical and haematological improvement with vitamin B12 administration. Patients with severe anaemia causing heart failure received packed red blood cell transfusions as the initial therapy. Conclusion: Paediatricians must consider nutritional vitamin B12 deficiency due to maternal dietary deficiency in the differential diagnosis of some gastrointestinal, haematological, developmental and neurological disorders of infants with poor socioeconomic status. Delay in diagnosis may cause irreversible neurological damage. abstract_id: PUBMED:17499971 Assessment of vitamin A nutritional status in newborn preterm infants. Objective: This study assessed the vitamin A nutritional status of preterm infants determined by the vitamin A relative dose-response test (RDR) compared with serum levels of vitamin A, retinol-binding protein (RBP), transthyretin (TTR), and retinol relations with carrier proteins. Methods: Serum levels of retinol, RBP, and TTR and retinol/RBP, retinol/TTR, and RBP/TTR molar ratios were determined in 120 infants at 7 d and in 92 at 28 d. For the determination of the performance of the tests, the RDR was considered the reference method. The sensitivity and specificity for all possible cutoff values were determined by constructing receiver operator characteristic curves. The areas under the curves were used to estimate the overall accuracy of the tests. The best cutoff values to be used for the calculation of sensitivity and specificity were determined with 95% confidence intervals. Results: RDR indicated vitamin A deficiency in 60% of the infants at 7 d and in 51.1% at 28 d. In the receiver operator characteristic curves, the best area under the curve was 0.710, obtained for serum retinol at 28 d of postnatal age and considered moderately accurate. The least inadequate cutoff level was set at 25 µg/dL, but no value was considered adequate due to low sensitivity and/or low specificity. Conclusion: Compared with RDR, the determination of serum levels of retinol, RBP, and TTR and their molar ratios is not adequate to assess nutritional vitamin A status in preterm infants. Answer: Yes, capillary blood is adequate for assessing vitamin D nutritional status. A study validated the use of capillary blood sampling for the quantification of 25-hydroxyvitamin D (25OHD) by comparing it with venous blood samples. The study involved 15 preschool-aged children with asthma and 20 apparently healthy adult volunteers.
The results showed that the venous and capillary blood total 25OHD concentrations highly correlated (r^2=0.9963), and the mean difference (bias) of capillary blood 25OHD compared to venous blood was 2.0 nmol/l with a 95% confidence interval of -7.5 to 11.5 nmol/l. This indicates excellent agreement with no evidence of a clinically important bias between venous and capillary serum 25OHD concentrations measured by liquid chromatography tandem-mass spectrometry (LC/MS-MS) over a wide range of values. Therefore, under these conditions, capillary blood is considered adequate for the measurement of 25OHD (PUBMED:27018135).
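For readers who want to see how the agreement statistics quoted above are computed, the sketch below reproduces a Bland-Altman-style analysis (mean difference and 95% limits of agreement) on hypothetical paired venous/capillary 25OHD values. The numbers are illustrative, not the study data; the interval reported in the abstract is consistent with limits of agreement around a small bias.

```python
# Minimal sketch of a method-agreement analysis for paired venous vs
# capillary 25OHD measurements. The values below are hypothetical.
import numpy as np

venous    = np.array([45.0, 62.5, 88.0, 103.2, 130.5, 155.0, 210.0, 255.0])
capillary = np.array([47.1, 60.9, 91.3, 101.0, 134.2, 152.8, 214.5, 251.7])

diff = capillary - venous            # per-pair difference (nmol/l)
bias = diff.mean()                   # mean difference = bias
sd   = diff.std(ddof=1)              # SD of the differences

# Bland & Altman 95% limits of agreement: bias +/- 1.96 * SD of differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

r = np.corrcoef(venous, capillary)[0, 1]  # Pearson correlation
print(f"bias = {bias:.1f} nmol/l, 95% LoA = ({loa_low:.1f}, {loa_high:.1f})")
print(f"r^2 = {r**2:.4f}")
```

A near-zero bias with narrow limits of agreement, as in the study, is what justifies treating the cheaper capillary sample as interchangeable with the venous one.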
Instruction: Traumatic cardiac arrest: should advanced life support be initiated? Abstracts: abstract_id: PUBMED:28741004 Seventeen years of life support courses for nurses: where are we now? The Life Support Course for Nurses (LSCN) equips nurses with the resuscitation skills to be first responders in in-hospital cardiac arrests. Seventeen years after the initiation of the LSCN, a confidential cross-sectional Qualtrics™ survey was conducted in May 2016 on LSCN graduands to assess the following: confidence in nurse-initiated resuscitation post-LSCN; defibrillation experience and outcomes; and perceived barriers and usefulness of the LSCN. The majority of respondents reported that the course was useful and enhanced their confidence in resuscitation. Skills retention can be enhanced by organising frequent team-based resuscitation training. Resuscitation successes should be publicised to help overcome perceived barriers. abstract_id: PUBMED:34093080 Paediatric Life Support. The European Resuscitation Council (ERC) Paediatric Life Support (PLS) guidelines are based on the 2020 International Consensus on Cardiopulmonary Resuscitation Science with Treatment Recommendations of the International Liaison Committee on Resuscitation (ILCOR). This section provides guidelines on the management of critically ill or injured infants, children and adolescents before, during and after respiratory/cardiac arrest. abstract_id: PUBMED:34093079 Basic life support. The European Resuscitation Council has produced these basic life support guidelines, which are based on the 2020 International Consensus on Cardiopulmonary Resuscitation Science with Treatment Recommendations. The topics covered include cardiac arrest recognition, alerting emergency services, chest compressions, rescue breaths, automated external defibrillation (AED), cardiopulmonary resuscitation (CPR) quality measurement, new technologies, safety, and foreign body airway obstruction. abstract_id: PUBMED:23354262 Traumatic cardiac arrest: should advanced life support be initiated? Background: Several studies recommend not initiating advanced life support in traumatic cardiac arrest (TCA), mainly owing to the poor prognosis in several series that have been published. This study aimed to analyze survival after TCA in our series and to determine which factors are more frequently associated with recovery of spontaneous circulation (ROSC) and complete neurologic recovery (CNR). Methods: This is a cohort study (2006-2009) of treatment benefits. Results: A total of 167 TCAs were analyzed. ROSC was obtained in 49.1%, and 6.6% achieved a CNR. Survival rate by age groups was 23.1% in children, 5.7% in adults, and 3.7% in the elderly (p < 0.05). There was no significant difference in ROSC according to which type of ambulance arrived first, but if the advanced ambulance arrived first, 9.41% achieved a CNR, whereas only 3.7% did if the basic ambulance arrived first. We found significant differences between the response time and survival with a CNR (response time was 6.9 minutes for those who achieved a CNR and 9.2 minutes for those who died). Of the patients, 67.5% were in asystole, 25.9% in pulseless electrical activity (PEA), and 6.6% in VF. ROSC was achieved in 90.9% of VFs, 60.5% of PEAs, and 40.2% of those in asystole (p < 0.05), and CNR was achieved in 36.4% of VFs, 7% of PEAs, and 2.7% of those in asystole (p < 0.05).
The mean (SD) quantity of fluid replacement was greater in patients with ROSC (1,188.8 [786.7] mL of crystalloids and 487.7 [688.9] mL of colloids) than in those without ROSC (890.4 [622.4] mL of crystalloids and 184.2 [359.3] mL of colloids) (p < 0.05). Conclusion: In our series, 6.6% of the patients survived with a CNR. Our data allow us to state beyond any doubt that advanced life support should be initiated in TCA patients regardless of the initial rhythm, especially in children and those with VF or PEA as the initial rhythm, and that a rapid response time and aggressive fluid replacement are the keys to the survival of these patients. Level Of Evidence: Therapeutic study, level IV; epidemiologic study, level III. abstract_id: PUBMED:26563488 Extracorporeal life support in polytraumatized patients. Major trauma is a leading cause of death, particularly amongst young patients. Conventional therapies for post-traumatic cardiovascular shock and acute pulmonary failure may sometimes be insufficient and even dangerous. New approaches to trauma care and novel salvage techniques are necessary to improve outcomes. Extracorporeal life support (ECLS) has proven to be effective in acute cardiopulmonary failure from different etiologies, particularly when conventional therapies fail. Since 2008 we have used ECLS as a rescue therapy in severe poly-trauma patients with a refractory clinical condition (cardiogenic shock, cardiac arrest, and/or pulmonary failure). The rationale for using ECLS in trauma patients is to support cardiopulmonary function, providing adequate systemic perfusion and, therefore, avoiding consequent multi-organ failure and permitting organ recovery. From our data, ECLS utilizing heparin-coated support to avoid systemic anticoagulation is a valuable option to support severely injured patients when conventional therapies are insufficient. It is safe, feasible, and effective in providing hemodynamic support and blood-gas exchange. Moreover, we have identified several pre-ECLS patient characteristics useful in predicting ECLS treatment appropriateness in severe poly-traumatized patients. These might be helpful in deciding whether ECLS should be initiated in severely compromised, complex patients. Future improvements in materials and techniques are expected to make ECLS even easier and safer to manage, leading to a further extension of its use in severely injured patients. abstract_id: PUBMED:33792740 Training module extracorporeal life support (ECLS): consensus statement of the DIVI, DGTHG, DGfK, DGAI, DGIIN, DGF, GRC and DGK. Mechanical circulatory support using extracorporeal life support systems (ECLS) has significantly increased in recent years. These critically ill patients pose special challenges to the multiprofessional treatment team and require comprehensive, interdisciplinary and interprofessional concepts. For this reason, to ensure the best possible patient care, a standardized ECLS training module has been created at national specialist society level, taking emergency and intensive care management into account. abstract_id: PUBMED:27022699 Resuscitation - cardiopulmonary resuscitation in infants and children (paediatric life support). In children, severe emergencies and cardiorespiratory arrests in particular are relatively rare but time-critical events. As compared to adults, hypoxic arrests caused by respiratory disorders that may subsequently result in pulseless electrical activity or asystole are more prevalent.
The current Paediatric Life Support (PLS) Guidelines 2015 of the European Resuscitation Council (ERC) acknowledge both limited scientific evidence and aspects of practicability. They also take into account the rather limited paediatric routine that most providers have as well as national and local infrastructural differences. Particular emphasis was put on early recognition and treatment of a critically ill or injured child, hence the prevention of cardiorespiratory arrest and the early start of lay rescuer interventions. There have been no major changes in the 2010 algorithms, including retention of the ABC sequence (airway, breathing, circulation). abstract_id: PUBMED:3633592 Advanced cardiac life support. It is important for all physicians to be familiar with the equipment and medications needed for advanced cardiac life support. The most important aspect in advanced life support is the establishment and maintenance of an airway and ventilation of the patient. Office personnel should be trained in Basic Cardiac Life Support, so that the physician can intubate the patient, start an intravenous line, and administer intravenous medications to support the patient. Appropriate drug dosage tables and defibrillation tables are included in this article. abstract_id: PUBMED:11114466 The European Resuscitation Council's paediatric life support course 'Advanced Paediatric Life Support'. The poor outcome for resuscitation from cardiopulmonary arrest in childhood is widely recognised. The European Resuscitation Council has adopted the Advanced Paediatric Life Support course (originating in the UK and now available in a number of countries) as its course for providers caring for children. This paper outlines the course content and explains its remit, which is to reduce avoidable deaths in childhood by not only resuscitation from cardiac arrest but, more effectively, by recognising and treating in a timely and effective fashion life-threatening illness and injury in infants and children. Two related courses, Paediatric Life Support, a less intense course for less advanced providers, and Pre-Hospital Paediatric Life Support for immediate care providers, are also described. abstract_id: PUBMED:9349376 Resuscitation. II: Advanced cardiac life support. Advanced cardiac life support is the definitive sequence of initial treatment for the victim of a cardiac arrest. Advanced cardiac life support forms a large part of cardiopulmonary resuscitation, which also includes basic life support and post-resuscitation intensive care. Answer: Yes, advanced life support should be initiated in cases of traumatic cardiac arrest (TCA). A study analyzing the survival of TCA patients found that return of spontaneous circulation (ROSC) was obtained in 49.1% of cases, and 6.6% achieved complete neurologic recovery (CNR). The survival rate varied by age group, with children having a 23.1% survival rate, adults 5.7%, and the elderly 3.7%. The study concluded that advanced life support should be initiated in TCA patients regardless of the initial rhythm, especially in children and those with ventricular fibrillation (VF) or pulseless electrical activity (PEA) as the initial rhythm. Rapid response time and aggressive fluid replacement were also identified as key factors for survival in these patients (PUBMED:23354262).
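The rhythm-specific ROSC comparison in the TCA study (PUBMED:23354262) can be illustrated with a simple contingency-table test. The counts below are back-calculated from the published percentages (167 arrests; 67.5% asystole, 25.9% PEA, 6.6% VF, with ROSC in 40.2%, 60.5% and 90.9% of each group, respectively), so they are approximate reconstructions rather than the original data.

```python
# Approximate reconstruction of the rhythm-vs-ROSC comparison reported in
# PUBMED:23354262; counts derived from published percentages, so inexact.
from scipy.stats import chi2_contingency

#            ROSC  no ROSC
table = [
    [45, 68],   # asystole (~113 patients, 40.2% ROSC)
    [26, 17],   # PEA      (~43 patients, 60.5% ROSC)
    [10,  1],   # VF       (~11 patients, 90.9% ROSC)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value here mirrors the study's finding that the initial rhythm is strongly associated with the chance of ROSC, which is part of the argument for initiating advanced life support in VF and PEA in particular.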
Instruction: BMI or BIA: Is Body Mass Index or Body Fat Mass a Better Predictor of Cardiovascular Risk in Overweight or Obese Children and Adolescents? Abstracts: abstract_id: PUBMED:34744995 Tri-Ponderal Mass Index as a Screening Tool for Identifying Body Fat and Cardiovascular Risk Factors in Children and Adolescents: A Systematic Review. Because of the limitation of body mass index (BMI) in distinguishing adipose mass from muscle, the tri-ponderal mass index (TMI) has been proposed as a new indicator for better assessing adiposity in children and adolescents. However, it remains unclear whether TMI performs better than BMI or other adiposity indices in predicting obesity status in childhood and obesity-related cardiovascular risk factors (CVRFs) in childhood or adulthood. We searched PubMed, Cochrane Library, and Web of Science for eligible publications until June 15, 2021. A total of 32 eligible studies were included in this systematic review. We found that TMI had a similar or better ability to predict body fat among children and adolescents than BMI. However, most of the included studies suggested that TMI was similar to BMI in identifying metabolic syndrome, although TMI was suggested to be a useful tool when used in combination with other indicators (e.g., BMI and waist circumference). In addition, limited evidence showed that TMI did not perform better than BMI for identifying specific CVRFs, including insulin resistance, high blood pressure, dyslipidemia, and inflammation in children and adolescents, as well as CVRFs in adults. Systematic Review Registration: https://www.crd.york.ac.uk/prospero, CRD42021260356. abstract_id: PUBMED:38386029 Comparison of body mass index and fat mass index to classify body composition in adolescents-The EVA4YOU study. The objectives of this study were to develop age- and sex-specific reference percentiles for fat mass index (FMI) and fat-free mass index (FFMI) in adolescents aged 14 to 19 years and to determine differences in overweight/obesity classification by FMI and body mass index (BMI). The EVA4YOU study is a single-center cross-sectional study conducted in western Austria. Cardiovascular risk factors were assessed in adolescents (mean age 17 years), including anthropometric measurements and bioelectrical impedance analysis. FMI and FFMI were calculated as the ratio of fat mass (FM) and fat-free mass (FFM) to the square of height and compared to study population-specific BMI percentiles. One thousand four hundred twenty-two adolescents were included in the analysis. Girls had a significantly higher mean FM and FMI and a significantly lower mean FFM, FFMI (p < 0.001, each), and mean BMI (p = 0.020) than boys. Body composition classification by FMI and BMI percentiles shows a concordance for the < 75th and > 97th percentile, but a significant difference in percentile rank classifications between these two cut-off values (all p < 0.05). Based on FMI, 15.5% (221/1422) of the whole population and 29.4% (92/313) of those between the 75th and 97th percentiles are classified one category higher or lower than those assigned by BMI. Conclusion: Classification of normal or pathologic body composition based on BMI and FMI shows good agreement in the clearly normal or pathologic range. In an intermediate range, FMI reclassifies categories based on BMI in more than a quarter of adolescents. Cut-off values to differentiate normal from pathologic FMI values on a biological basis are needed. Trial Registration: The study is registered at www.clinicaltrials.gov
(Identifier: NCT04598685; Date of registration: October 22, 2020). What Is Known: • Chronic non-communicable diseases (NCDs) are the leading cause of morbidity and mortality globally, with major risk factors including unhealthy diets, harmful behaviors, and obesity. Obesity in children and adolescents, commonly measured by Body Mass Index (BMI), is a key risk factor for later NCDs. • BMI can be misleading as it doesn't distinguish between fat mass (FM) and fat-free mass (FFM), leading to potential misclassification of obesity in children. Previous studies have already suggested the use of the Fat Mass Index (FMI) and Fat-Free Mass Index (FFMI) as more accurate measures of body composition. What Is New: • This study adds the first age- and sex-specific reference values for FMI and FFMI in Austrian adolescents using bioelectrical impedance analysis (BIA) as a safe and secure measurement method in a large representative cohort. • We found percentile misclassification between BMI and FMI when categorizing for obesity, especially in intermediate categories of body composition. Furthermore, when comparing the new reference values for FMI and FFMI to existing ones from the US, UK, and Germany, we could show a good alignment within the European cohorts and major differences with American values, indicating that FMI and FFMI reference values differ between populations of different ethnic backgrounds living on different continents. abstract_id: PUBMED:24012666 Resistin levels are related to fat mass, but not to body mass index in children. The relationship of resistin levels with obesity remains unclear. The aim of this study was to determine resistin levels in prepubertal children and adolescents and evaluate their association with anthropometric parameters and body composition. The study population included 420 randomly selected 6-8-year-old children and 712 children aged 12-16 years. Anthropometric data were measured and body mass index (BMI) and waist-to-hip and waist-to-height ratios were calculated. Body composition was assessed using an impedance body composition analyzer. Serum resistin levels were determined using a multiplexed bead immunoassay. Resistin levels were not significantly different between sexes. No significant differences in serum resistin concentrations were found between obese, overweight, and normal weight children at any age, and no significant correlations were observed between resistin concentrations and weight or BMI. However, resistin levels showed a significant positive correlation with fat mass in 12-16-year-old children, particularly in girls. In addition to describing serum resistin levels in prepubertal children and adolescents, our study suggests that resistin is related to body fat rather than to BMI in adolescents. abstract_id: PUBMED:26542380 Association between fat mass index and fat-free mass index values and cardiovascular risk in adolescents. Objective: To describe the association between fat mass index and fat-free mass index values and factors associated with cardiovascular risk in adolescents in the city of Juiz de Fora, Minas Gerais. Methods: Cross-sectional study with 403 adolescents aged 10-14 years, from public and private schools. Anthropometric, clinical, biochemical measurements were obtained, as well as self-reported time spent performing physical exercises, sedentary activities and sexual maturation stage.
Results: Regarding nutritional status, 66.5% of the adolescents had normal weight, 19.9% were overweight, and 10.2% were obese. For both genders, the fat mass index was higher in adolescents who had high serum triglycerides, body mass index and waist circumference. Conclusions: Adolescents who had anthropometric, clinical and biochemical characteristics considered to confer risk for the development of cardiovascular disease had higher values of fat mass index. Different methodologies for the assessment of body composition make health promotion and disease prevention more effective. abstract_id: PUBMED:30207269 Body mass index, waist circumference, body fat mass, and risk of developing hypertension in normal-weight children and adolescents. Background And Aims: We prospectively examined the association between three adiposity indices, including body mass index (BMI), waist circumference (WC), and percentage of body fat (PBF), and risk of hypertension in normal-weight Chinese children. Methods And Results: The current study included 1526 (713 boys and 813 girls) normal-weight Chinese children (aged 6-14 years), who were free of hypertension at baseline (2014). Height, body weight, WC, and PBF (estimated by bioelectrical impedance analysis) were measured at baseline. Blood pressure was repeatedly measured in 2014, 2015 and 2016. Hypertension was defined as either high systolic blood pressure and/or high diastolic blood pressure, according to the age- and sex-specific 95th percentile for Chinese children. We used a Cox proportional hazards model to calculate the association between exposures and hypertension. We identified 88 incident hypertension cases during two years of follow up. High BMI was associated with high risk of developing hypertension after adjusting for potential confounders. The adjusted hazard ratio for hypertension was 2.88 (95% CI: 1.24, 6.69) comparing two extreme BMI quartiles. Each SD increase of BMI (≈1.85 kg/m2) was associated with a 32% higher likelihood of developing hypertension (Hazard ratio = 1.32; 95% CI: 1.003, 1.73). In contrast, we did not find significant associations between WC or PBF and higher hypertension risk (p-trend >0.2 for both). Conclusion: High BMI, but not WC and PBF, was associated with high risk of hypertension in normal-weight Chinese children. abstract_id: PUBMED:37305104 Percent body fat, but not body mass index, is associated with cardiometabolic risk factors in children and adolescents. Background: The epidemic of overweight and obesity has become a worldwide public health problem. Cardiometabolic diseases may originate in childhood. We investigated the association between percent body fat (PBF) measured by the bioelectrical impedance assay and cardiometabolic risk (CMR) in a pediatric population. Methods: This cross-sectional study involved 3819 subjects (6-17 years old) in Shanghai. We assessed the association between PBF and body mass index (BMI) with multiple CMR factors. We examined the risk for cardiometabolic abnormalities attributable to overweight and obesity based on age- and sex-specific PBF Z-scores and BMI Z-scores, respectively. Results: PBF, but not BMI, was positively associated with multiple CMR factors in males and females except for total cholesterol in females (all p < 0.05).
Compared with the non-overweight group based on PBF, overweight and obese subjects had progressively higher odds ratios for dyslipidemia (2.90 (1.99-4.23), 4.59 (2.88-7.32) for males and 1.82 (1.20-2.75), 2.46 (1.47-4.11) for females) and elevated blood pressure (BP) (3.26 (2.35-4.51), 4.55 (2.92-7.09) for males and 1.59 (1.07-2.34), 3.98 (2.27-6.17) for females). Obese females showed a higher likelihood for hyperglycemia (2.19 (1.24-3.84)) than non-overweight females. In both sexes, the predictive effect of PBF on dyslipidemia and elevated BP in adolescents was better than that in children. For hyperglycemia, the predictive effect of PBF was better in male adolescents and female children. There was no risk difference for cardiometabolic abnormalities attributable to BMI-based obesity categories. Conclusions: PBF but not BMI was associated with CMR. Overweight and obesity categories based on PBF had an increased risk for cardiometabolic abnormalities in children and adolescents. abstract_id: PUBMED:26087841 BMI or BIA: Is Body Mass Index or Body Fat Mass a Better Predictor of Cardiovascular Risk in Overweight or Obese Children and Adolescents? A German/Austrian/Swiss Multicenter APV Analysis of 3,327 Children and Adolescents. Background: Body fat (BF) percentiles for German children and adolescents have recently been published. This study aims to evaluate the association between bioelectrical impedance analysis (BIA)-derived BF and cardiovascular risk factors and to investigate whether BF is better suited than BMI in children and adolescents. Methods: Data of 3,327 children and adolescents (BMI > 90th percentile) were included. Spearman's correlation and receiver operating characteristics (ROCs) were applied to determine the associations between BMI or BF and cardiovascular risk factors (hypertension, dyslipidemia, elevated liver enzymes, abnormal carbohydrate metabolism). Area under the curve (AUC) was calculated to predict cardiovascular risk factors. Results: A significant association between both obesity indices and hypertension was present (all p < 0.0001), but the correlation with BMI was stronger (r = 0.22) compared to BF (r = 0.13). There were no differences between BMI and BF regarding their correlation with other cardiovascular risk factors. BF significantly predicted hypertension (AUC = 0.61), decreased HDL-cholesterol (AUC = 0.58), elevated LDL-cholesterol (AUC = 0.59), elevated liver enzymes (AUC = 0.61) (all p < 0.0001), and elevated triglycerides (AUC = 0.57, p < 0.05), but not abnormal carbohydrate metabolism (AUC = 0.54, p = 0.15). For the prediction of cardiovascular risk factors, no significant differences between BMI and BF were observed. Conclusion: BIA-derived BF was not superior to BMI in predicting cardiovascular risk factors in overweight or obese children and adolescents. abstract_id: PUBMED:36276343 Lower fitness levels, higher fat-to-lean mass ratios, and lower cardiorespiratory endurance are more likely to affect the body mass index of Saudi children and adolescents. Background: Several studies suggest that health-related physical fitness may play a prominent role in preventing obesity in children and adolescents. Objectives: The present study examined fitness levels using five components of health-related fitness in Saudi students aged 10-17 years (fat-to-lean mass ratio, cardiorespiratory endurance, upper body strength and endurance, abdominal muscle strength and endurance, and flexibility).
Subsequently, the association between BMI and a health-related fitness index (HR-PFI) based on the five fitness components was investigated. Methods: The study was conducted on 1,291 students with a mean age of 12.95 ± 1.72 years. Participants included 1,030 boys aged 12.80 ± 1.79 years, with 479 young boys (11.24 ± 0.81 years), and 551 adolescents (14.16 ± 1.21 years). Moreover, the study examined 261 girls averaging 13.54 ± 1.2 years old, with 66 young girls (11.92 ± 0.27 years), and 195 teenage girls (14.09 ± 0.85 years). Each participant's health-related fitness level was assessed by the following tests: Bioelectrical Impedance Analyzer (BIA) for body composition, one-mile run/walk test for cardiorespiratory endurance, curl-up test for abdominal muscle strength and endurance (AMSE), push-up test for upper body strength and endurance (UBSE), and back-saver sit-and-reach test for flexibility. Results: The overall prevalence of overweight and obesity was 10.4 and 24.7% in boys and 10 and 8.4% in girls, respectively. The mean Z-scores of performances decreased from the underweight to the obese groups. BMI was positively associated with the ratio of fat mass to lean mass and negatively associated with cardiorespiratory endurance in the overall group of participants as well as in the subgroups by sex and age categories. BMI was also negatively associated with flexibility and HR-PFI in the total group, UBSE, AMSE, and HR-PFI in prepubertal boys, and UBSE in prepubertal girls. The coefficient of determination values were 0.65 in the total group, 0.72 in prepubertal boys, 0.863 in adolescent boys, 0.956 in prepubertal girls, and 0.818 in adolescent girls. Conclusions: Overall health-related physical fitness, fat-to-lean mass ratio, and cardiorespiratory endurance are the factors that most affect BMI in Saudi students aged 10 to 17. abstract_id: PUBMED:33985633 Combined influence of lipid accumulation product and body mass index on cardiometabolic risk factors among Yinchuan City children and adolescents. Objective: To analyze the relationship between the children's lipid accumulation product (CLAP) and body mass index (BMI) and cardiovascular risk factors in children and adolescents. Methods: A cross-sectional study design was adopted. A total of 936 children and adolescents aged 12 to 18 years old in Yinchuan City were selected from September 2017 to September 2019 by convenience sampling. Among them, 537 (57.40%) were boys, the average age was (14.82 ± 2.08) years, and the numbers of Han and other ethnic groups were 705 (75.30%) and 231 (24.70%), respectively. Questionnaire surveys (using the Yinchuan Children's Blood Pressure Survey standard questionnaire, which mainly covers basic information, birth and infant feeding, physical activity and sleep), physical examination (including height, weight, blood pressure and body composition) and biochemical testing (including fasting blood glucose and blood lipids) were conducted. Binary logistic regression was used to analyze the association between CLAP, BMI and cardiovascular risk factors, and ROC curve analysis was used to assess the accuracy of CLAP and BMI in diagnosing cardiovascular risk factors. Results: The odds of clustering of ≥2 cardiovascular risk factors among participants with CLAP ≥ P75 and normal-weight BMI were 38.13 (95% CI 23.83-61.00) times (P < 0.05) those of participants with CLAP < P75 and non-obese BMI, higher than the associations observed for other combinations of the two indices.
The accuracy of CLAP ≥ P75 combined with BMI in diagnosing clustering of ≥2 cardiovascular risk factors was 0.87 (95% CI 0.85-0.89), higher than for the other combinations. Conclusion: CLAP and BMI are associated with cardiovascular risk factors. abstract_id: PUBMED:22510779 Percent body fat is a better predictor of cardiovascular risk factors than body mass index. The objective of the present study was to evaluate the predictive values of percent body fat (PBF) and body mass index (BMI) for cardiovascular risk factors, especially when PBF and BMI are conflicting. BMI was calculated by the standard formula and PBF was determined by bioelectrical impedance analysis. A total of 3859 ambulatory adult Han Chinese subjects (2173 males and 1686 females, age range: 18-85 years) without a history of cardiovascular diseases were recruited from February to September 2009. Based on BMI and PBF, they were classified into group 1 (normal BMI and PBF, N = 1961), group 2 (normal BMI, but abnormal PBF, N = 381), group 3 (abnormal BMI, but normal PBF, N = 681), and group 4 (abnormal BMI and PBF, N = 836). When age, gender, lifestyle, and family history of obesity were adjusted for, PBF, but not BMI, was correlated with blood glucose and lipid levels. The odds ratio (OR) and 95% confidence interval (CI) for cardiovascular risk factors in groups 2 and 4 were 1.88 (1.45-2.45) and 2.06 (1.26-3.35) times those in group 1, respectively, but remained unchanged in group 3 (OR = 1.32, 95%CI = 0.92-1.89). Logistic regression models also demonstrated that PBF, rather than BMI, was independently associated with cardiovascular risk factors. In conclusion, PBF, and not BMI, is independently associated with cardiovascular risk factors, indicating that PBF is a better predictor. Answer: The question of whether Body Mass Index (BMI) or BIA-derived body fat mass is a better predictor of cardiovascular risk in overweight or obese children and adolescents has been explored in several studies. The evidence suggests that while BMI is a widely used measure, it may not always be the most accurate predictor of cardiovascular risk because it does not distinguish between fat mass and fat-free mass (PUBMED:38386029; PUBMED:24012666; PUBMED:26542380; PUBMED:30207269; PUBMED:37305104; PUBMED:26087841; PUBMED:36276343; PUBMED:33985633; PUBMED:22510779). Several studies have found that body fat percentage (PBF) is more closely associated with cardiovascular risk factors than BMI. For instance, one study concluded that PBF, but not BMI, was positively associated with multiple cardiometabolic risk factors in both males and females, except for total cholesterol in females (PUBMED:37305104). Another study found that PBF is a better predictor of cardiovascular risk factors than BMI, indicating that PBF is independently associated with cardiovascular risk factors (PUBMED:22510779). However, some studies have shown that BMI can still be a significant predictor of certain cardiovascular risks. For example, a study found that high BMI was associated with a higher risk of developing hypertension in normal-weight Chinese children (PUBMED:30207269). Another study indicated that BMI and BIA-derived body fat were both significantly associated with hypertension, but the correlation with BMI was stronger (PUBMED:26087841).
In terms of specific measures, the tri-ponderal mass index (TMI) has been suggested as a useful tool when used in combination with other indicators like BMI and waist circumference, although it did not perform better than BMI for identifying specific cardiovascular risk factors (PUBMED:34744995). Additionally, the fat mass index (FMI) has been shown to reclassify categories based on BMI in more than a quarter of adolescents, particularly in intermediate categories of body composition (PUBMED:38386029). Overall, while BMI is a useful screening tool, body composition measures such as BIA-derived PBF may provide a more accurate assessment of cardiovascular risk in overweight or obese children and adolescents. This suggests that incorporating measures of body fat into clinical practice could enhance the prediction and management of cardiovascular risk in this population.
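Since the answer above turns on the definitions of the competing indices, here is a minimal sketch of how they are computed from the formulas given in the abstracts (BMI = weight/height², FMI = fat mass/height², FFMI = fat-free mass/height², TMI = weight/height³, PBF = fat mass/weight). The example values are hypothetical, and BIA would normally supply the fat mass estimate.

```python
# Minimal sketch of the adiposity indices discussed above, using the
# definitions given in the abstracts. Example inputs are hypothetical.
def adiposity_indices(weight_kg: float, height_m: float, fat_mass_kg: float) -> dict:
    fat_free_mass_kg = weight_kg - fat_mass_kg  # FFM = total weight minus FM
    return {
        "BMI (kg/m^2)":  weight_kg / height_m**2,        # weight / height^2
        "TMI (kg/m^3)":  weight_kg / height_m**3,        # weight / height^3
        "FMI (kg/m^2)":  fat_mass_kg / height_m**2,      # fat mass / height^2
        "FFMI (kg/m^2)": fat_free_mass_kg / height_m**2, # fat-free mass / height^2
        "PBF (%)":       100.0 * fat_mass_kg / weight_kg,
    }

# e.g. a 17-year-old weighing 70 kg at 1.75 m with a BIA-derived fat mass of 18 kg
for name, value in adiposity_indices(70.0, 1.75, 18.0).items():
    print(f"{name}: {value:.1f}")
```

The point of FMI and TMI, visible in the formulas, is that two adolescents with identical BMI can differ sharply in FMI once fat mass is separated from fat-free mass, which is exactly the intermediate-percentile reclassification the EVA4YOU study reports.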
Instruction: Does implantation of larger bioprosthetic pulmonary valves in young patients guarantee durability in adults? Abstracts: abstract_id: PUBMED:26324680 Does implantation of larger bioprosthetic pulmonary valves in young patients guarantee durability in adults? Durability analysis of stented bioprosthetic valves in the pulmonary position in patients with Tetralogy of Fallot. Objectives: In a previous study, we identified factors affecting the durability of bioprosthetic valves in the pulmonary position following total repair of Tetralogy of Fallot (TOF). In this study, we aimed to identify factors affecting the durability of the bioprosthetic valve with regard to patient age and implanted valve size in order to guide valve choice in adolescent patients. Methods: We enrolled and analysed 108 cases of pulmonary valve replacement (PVR) with stented bioprosthetic valves in TOF patients between January 1998 and February 2014. Valvular dysfunction was defined as at least a moderate amount of pulmonary regurgitation or a peak pressure gradient of ≥40 mmHg on the most recent echocardiography. We analysed the effect of patient age and valve size on the durability of the bioprosthetic valve in the pulmonary position. Results: There were 2 early deaths; no late deaths were observed. The follow-up duration was 92.8 ± 44.5 months. The mean age at PVR was 19.3 ± 9.1 years. The mean valve size was 24.7 ± 1.8 mm. Whereas patients ≥20 years old showed no valvular dysfunction (i.e. 100% freedom from valvular dysfunction at 10 and 14 years), patients who were adolescents and children (<20 years) showed worse durability, regardless of the z-score of valve size (68.2% at 10 years and 24.7% at 14 years). Even when a larger valve with a z-score of ≥2 was implanted, patients <20 years old did not exhibit good valvular durability. The results were particularly worse in patients <10 years old, with 66.7% freedom from valvular dysfunction at 6 years and 33.3% at 8 years, compared with patients within the age range of 10 to <20 years (75.1% at 10 years, and 20.5% at 14 years). Conclusions: The durability of bioprosthetic valves in the pulmonary position was acceptable in patients aged ≥20 years, regardless of the z-score of valve size. However, patients who were children and adolescents did not show optimal durability of the bioprosthetic valve, irrespective of the z-score of valve size. abstract_id: PUBMED:35545525 Durability of Bioprosthetic Valves in Patients on Dialysis. Purpose: This study focused on clarifying the durability of bioprosthetic valves in current practice. Methods: A total of 238 consecutive patients who underwent aortic valve replacement at a single institution from 2011 to 2020 were reviewed. We evaluated valve-related outcomes such as structural valve deterioration (SVD), especially in dialysis patients who received a bioprosthetic valve. Results: Among the tissue valves implanted in 212 patients, 5 SVDs were recorded and 3 valves were replaced. All early valve failures occurred in relatively young dialysis patients and were recorded 3 to 5 years after the initial operation. Freedom from SVD at 6 years was 49.9% in patients on dialysis, compared with 100% in non-dialysis patients. Predictors of better survival in dialysis patients were better preoperative functional class and larger prosthetic valve size. Conclusions: The durability of bioprosthetic valves in the aortic position was suboptimal in dialysis patients.
Mechanical valves can be an option for young, healthy dialysis patients with a large aortic valve annulus. abstract_id: PUBMED:38127022 Pulmonary Valve Replacement in Tetralogy of Fallot: Procedural Volume and Durability of Bioprosthetic Pulmonary Valves. Background: Robust data on changes in pulmonary valve replacement (PVR) procedural volume and predictors of bioprosthetic pulmonary valve (BPV) durability in patients with tetralogy of Fallot (TOF) are scarce. Objectives: This study sought to assess temporal trends in PVR procedural volume and BPV durability in a nationwide, retrospective TOF cohort. Methods: Data were obtained from patient records. Robust linear regression was used to assess temporal trends in PVR procedural volume. Piecewise exponential additive mixed models were used to estimate BPV durability, defined as the time from implantation to redo PVR with death as a competing risk, and to assess risk factors for reduced durability. Results: In total, 546 PVR were performed in 384 patients from 1976 to 2021. The annual number of PVR increased from 0.4 to 6.0 per million population (P < 0.001). In the last decade, the transcatheter PVR volume increased by 20% annually (P < 0.001), whereas the surgical PVR volume did not change significantly. The median BPV durability was 17 years (Q1: 10 years; Q3: not applicable). There was no significant difference in the durability of different BPV after adjustment for confounders. Age at PVR (HR: 0.78 per 10 years from <1 year; 95% CI: 0.63-0.96; P = 0.02) and true inner valve diameter (9-17 mm vs 18-22 mm HR: 0.40; 95% CI: 0.22-0.73; P = 0.003 and 18-22 mm vs 23-30 mm HR: 0.59; 95% CI: 0.25-1.39; P = 0.23) were associated with reduced BPV durability in multivariate models. Conclusions: The PVR procedural volume has increased over time, with a greater increment in transcatheter than surgical PVR during the last decade. Younger patient age at PVR and a smaller true inner valve diameter predicted reduced BPV durability. abstract_id: PUBMED:32014323 Long-term durability of bioprosthetic valves in pulmonary position: Pericardial versus porcine valves. Objectives: The long-term durability of the 2 most commonly used types of bioprosthetic valves in the pulmonic position, the porcine and pericardial valves, is unclear. We compared the long-term durability of the pericardial (Carpentier-Edwards PERIMOUNT [CE]) and porcine (Hancock II) valves in the pulmonic position in patients with congenital cardiac anomalies. Methods: We retrospectively reviewed the medical records of 258 cases (248 patients) of pulmonary valve implantation or replacement using CE (129 cases, group CE) or porcine (129 cases, group H) valves from 2 institutions between 2001 and 2009. Results: The patients' age at pulmonary valve implantation was 14.9 ± 8.7 years. No significant differences in perioperative characteristics were observed between groups CE and H. Follow-up data were complete in 219 cases (84.9%) and the median follow-up duration was 10.5 (interquartile range, 8.4∼13.0) years. Ten mortalities (3.9%) occurred. Sixty-four patients underwent reoperation for pulmonary valve replacement due to prosthetic valve failure; 10 of these 64 patients underwent reoperation during the study period. Patients in group CE were significantly more likely to undergo reoperation (hazard ratio, 2.17; confidence interval, 1.26-3.72; P = .005) than patients in group H.
Patients in group CE had a greater prosthetic valve dysfunction (moderate-to-severe pulmonary regurgitation or pulmonary stenosis with ≥3.5 m/s peak velocity through the prosthetic pulmonary valve) rate (hazard ratio, 1.83; confidence interval, 1.07-3.14; P = .027) than patients in group H. Conclusions: Compared with the pericardial valve, the porcine valve had long-term advantages in terms of reduced reoperation rate and prosthetic valve dysfunction in the pulmonic position in patients with congenital cardiac anomalies. abstract_id: PUBMED:33302806 A systematic review on durability and structural valve deterioration in TAVR and surgical AVR. Mechanical valves and bioprosthetic heart valves are widely used for aortic valve replacement (AVR). Mechanical valves are associated with risk of bleeding because of oral anticoagulation, while the durability and structural valve deterioration (SVD) represent the main limitation of the bioprosthetic heart valves. The implantation of bioprosthetic heart valves is increasing precipitously due to the aging population and the widespread use of transcatheter aortic valve replacement (TAVR). TAVR has become the standard treatment for intermediate or high surgical risk patients and a reasonable alternative to surgery for low risk patients with symptomatic severe aortic stenosis. Moreover, TAVR is increasingly being used for younger and lower-risk patients with longer life expectancy; therefore it is important to establish the long-term durability of transcatheter aortic valves. Although the results of mid-term durability of the transcatheter heart valves are encouraging, their long-term durability remains largely unknown. This review summarises the definitions, mechanisms, risk factors and assessment of SVD and overviews available data on the durability of surgical bioprosthetic and transcatheter heart valves. abstract_id: PUBMED:31928255 Current clinical management of dysfunctional bioprosthetic pulmonary valves. Introduction: As with any bioprosthetic valve, bioprosthetic valves in the pulmonary position have a finite life span and patients with bioprosthetic pulmonary valves require lifetime management to treat valve dysfunction. Areas covered: In this article, the authors discuss the current medical management for the treatment of dysfunctional bioprosthetic valves. This review is based on both an extensive review of the recent cardiac surgical/interventional cardiology literature (PubMed and MEDLINE database searches from 1958 to 2019) and personal experience. Expert opinion: Valve technology is rapidly progressing and, with a coordinated effort from cardiac surgeons and interventional cardiologists, patients suffering from bioprosthetic pulmonary valve dysfunction can now expect fewer and less invasive procedures over their lifetime. abstract_id: PUBMED:21281951 Durability of bioprosthetic valves in the pulmonary position: long-term follow-up of 181 implants in patients with congenital heart disease. Objectives: Durability of bioprosthetic valves in the pulmonary position is not well defined. We examined the durability of bioprosthetic valves in the pulmonary position and risk factors associated with bioprosthetic pulmonary valve failure. Methods: Between 1993 and 2004, 181 patients underwent pulmonary valve replacement using bioprostheses. Patients who underwent valved conduit or homograft implantation were excluded. Mean age was 14.2 ± 9.8 years and median valve size was 23 mm (range, 19-27 mm).
Types of bioprosthesis used were Hancock II (n = 83), Perimount (n = 53), Freestyle (n = 23), Carpentier-Edwards porcine valve (n = 18), and others (n = 4). Results: There were 3 early and 7 late deaths. Follow-up completeness was 88.6% and mean follow-up duration was 7.3 ± 2.9 years. Forty-three patients underwent redo pulmonary valve replacement. Overall freedom from redo pulmonary valve replacement at 5 and 10 years was 93.9% ± 1.9% and 51.7% ± 8.6%, respectively. Overall freedom from both valve failure and valve dysfunction at 5 and 10 years was 92.2% ± 2.1% and 20.2% ± 6.7%, respectively. In multivariable analysis, younger age at operation, diagnosis of pulmonary atresia with ventricular septal defect, and use of stentless valve were identified as risk factors for redo pulmonary valve replacement. Conclusions: Durability of bioprosthetic valves in the pulmonary position was suboptimal. Valve function was maintained stable until 5 years after operation. By 10 years, however, about 80% will require reoperation or manifest valve dysfunction. In our experience, the stentless valve was less durable than stented valves. abstract_id: PUBMED:20231157 Percutaneous pulmonary valve implantation within bioprosthetic valves. Aims: Replacement of bioprosthetic valves in the right ventricular (RV) outflow tract (RVOT) is inevitable due to acquired valvar dysfunction. Percutaneous pulmonary valve implantation (PPVI) may result in acceptable clinical improvement avoiding surgical reintervention. To report outcomes of PPVI in dysfunctional surgically implanted bioprosthetic valves. Methods And Results: All children undergoing PPVI into a bioprosthetic pulmonary valve between October 2005 and February 2008 were reviewed. Acute haemodynamic changes were compared and an analysis of variance applied to assess changes in ventricular geometry and pressure over time. Fourteen children (seven males), median weight 57.8 kg and 14.7 years of age were identified, with an echocardiographic RVOT gradient of 59.6 +/- 26.8 mmHg and a pulmonary regurgitation (PR) grade of 3.6 +/- 0.8 (out of 4). Implantation was successful in all. Twenty-four hours after implantation, there was a significant improvement in RV pressure (RVP) (from 82.2 +/- 15.6 to 59.4 +/- 9.9 mmHg, P < 0.001) and degree of PR to 0.6 +/- 0.9 (P < 0.001). Mean hospital stay was 2.0 +/- 0.4 days. Freedom from reintervention was 92 and 89% at 1 and 2 years, respectively. Follow-up echocardiography (mean 12.9 +/- 9.8 months) revealed a further reduction in RVP (P < 0.001) and RVOT gradients (P < 0.001) and an increase in left ventricular end-diastolic volume (P= 0.01) and aortic valve annulus diameters (P < 0.001). Conclusions: Percutaneous pulmonary valve implantation for RVOT dysfunction in a previously implanted prosthetic valve is feasible and safe. Short-term follow-up data are encouraging, yet longer-term information is required to determine if this form of palliation has a significant impact on management strategies. abstract_id: PUBMED:24444036 Bioprosthetic heart valves of the future. Glutaraldehyde-fixed bioprosthetic heart valves (GBHVs), derived from pigs or cows, undergo structural valve deterioration (SVD) over time, with calcification and eventual failure. It is generally accepted that SVD is due to chemical processes between glutaraldehyde and free calcium ions in the blood. Valve companies have made significant progress in decreasing SVD from calcification through various valve chemical treatments. 
However, there are still groups of patients (e.g., children and young adults) who have accelerated SVD of GBHV. Unfortunately, these patients are not ideal candidates for valve replacement with mechanical heart valve prostheses as they are at high long-term risk from complications of the mandatory anticoagulation that is required. Thus, there is no "ideal" heart valve replacement for children and young adults. GBHVs represent a form of xenotransplantation, and there is increasing evidence that the SVD seen in these valves is at least in part associated with xenograft rejection. We review the evidence that suggests that xenograft rejection of GBHVs is occurring, and that calcification of the valve may be related to this rejection. Furthermore, we review recent research into the transplantation of live porcine organs in non-human primates that may be applicable to GBHVs and consider the potential use of genetically modified pigs as sources of bioprosthetic heart valves. abstract_id: PUBMED:28582118 Valve-in-Valve Therapy for Failed Surgical Bioprosthetic Valves: Clinical Results and Procedural Guidance. With improved life expectancy and increased use of bioprosthetic heart valves, more elderly and frail patients present with degenerative surgical heart valve disease. The valve-in-valve procedure is an attractive alternative to a conventional open redo procedure. Although it is a novel extension of established transcatheter aortic valve implantation for severe aortic stenosis in a high-risk population, it is gaining momentum worldwide, particularly for aortic and mitral positions. Success depends on the operator being familiar with emerging transcatheter heart valve technology and morphology as well as that of the existing surgical heart valve, patient selection, accurate sizing, and an ideal implantation position. Answer: Implantation of larger bioprosthetic pulmonary valves in young patients does not guarantee durability in adults. A study analyzing the durability of stented bioprosthetic valves in the pulmonary position in patients with Tetralogy of Fallot found that while patients aged ≥20 years showed acceptable durability regardless of the z-score of valve size, younger patients, including children and adolescents, did not exhibit good valvular durability, irrespective of the z-score of valve size (PUBMED:26324680). This suggests that factors other than valve size, such as patient age, play a significant role in the long-term performance of bioprosthetic valves in the pulmonary position. Furthermore, another study on the durability of bioprosthetic valves in patients on dialysis indicated that the durability of bioprosthetic valves in the aortic position was suboptimal in dialysis patients, with all early valve failures occurring in relatively young dialysis patients within 3 to 5 years after the initial operation (PUBMED:35545525). This further supports the notion that larger valve size alone does not ensure durability, especially in younger patients with additional risk factors such as dialysis. In summary, while larger bioprosthetic pulmonary valves may be expected to have better durability, the evidence suggests that this is not guaranteed, particularly in younger patient populations. Factors such as patient age, underlying health conditions, and possibly other as-yet-unknown variables influence the long-term durability of these valves.
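The "freedom from valvular dysfunction at N years" figures quoted in the studies above are Kaplan-Meier (product-limit) estimates. As a minimal, self-contained Python sketch of how such a curve is computed — the follow-up data below are invented for illustration and are not taken from any of the cited studies:

def kaplan_meier(times, events):
    """Product-limit estimate of freedom from valve dysfunction.

    times  : follow-up in years for each valve
    events : 1 if dysfunction occurred at that time, 0 if censored
    """
    curve, survival = [], 1.0
    for t in sorted({ti for ti, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        failures = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        survival *= 1.0 - failures / at_risk  # conditional survival at this event time
        curve.append((t, survival))
    return curve

# Hypothetical cohort of eight valves (follow-up in years, dysfunction yes/no)
times = [2.0, 4.5, 6.0, 7.5, 8.0, 10.0, 12.0, 14.0]
events = [0, 1, 0, 1, 0, 1, 0, 1]
for t, s in kaplan_meier(times, events):
    print(f"freedom from dysfunction at {t:4.1f} y: {s:.2f}")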
Instruction: Staphylococcus aureus nasal colonization in HIV outpatients: persistent or transient? Abstracts: abstract_id: PUBMED:18371514 Staphylococcus aureus nasal colonization in HIV outpatients: persistent or transient? Background: Staphylococcus aureus nasal carriage in HIV patients remains incompletely characterized. The aim of the present study was to describe epidemiologic and molecular features of S. aureus nasal colonization in HIV outpatients. Methods: HIV outpatients with no history of hospitalization within the previous 2 years were screened for S aureus nasal colonization. Three samples were collected from each patient, and the risk factors for colonization were assessed. Nasal carriage was classified as persistent colonization, transient colonization, or no colonization. Persistent colonization was subdivided into simple (same DNA profile) or multiple (different DNA profiles) using pulsed-field gel electrophoresis (PFGE) for genotyping the strains of S. aureus. Results: A total of 111 patients were evaluated, of which 70 (63.1%) had at least 1 positive culture for S aureus. Patients in clinical stages of AIDS were more likely to be colonized than non-AIDS patients (P = .02). Among the patients with S aureus nasal carriage, 25.2% were transient carriers and 39.4% were persistent carriers. PFGE analysis showed that the persistent colonization was simple in 24 patients and multiple in 17 patients. Conclusion: The HIV patients had a high rate of S. aureus nasal colonization. The most common characteristic of colonization was simple persistent colonization showing the same genomic profile. abstract_id: PUBMED:28127988 Staphylococcus aureus nasal colonization among HIV-infected adults in Botswana: prevalence and risk factors. We sought to determine the clinical and epidemiologic determinants of Staphylococcus aureus nasal colonization in HIV-infected individuals at two outpatient centers in southern Botswana. Standard microbiologic techniques were used to identify S. aureus and methicillin-resistant S. aureus (MRSA). In a sample of 404 HIV-infected adults, prevalence of S. aureus nasal carriage was 36.9% (n = 152) and was associated with domestic overcrowding and lower CD4 cell count. MRSA prevalence was low (n = 13, 3.2%), but more common among individuals with asthma and eczema. The implications of these findings for HIV management are discussed. abstract_id: PUBMED:9402078 Staphylococcus aureus nasal colonization in HIV-seropositive and HIV-seronegative drug users. Nasal colonization plays an important role in the pathogenesis of Staphylococcus aureus infections. To identify characteristics associated with colonization, we studied a cross-section of a well-described cohort of HIV-seropositive and -seronegative active and former drug users considered at risk for staphylococcal infections. Sixty percent of the 217 subjects were Hispanic, 36% were women, 25% actively used injection drugs, 23% actively used inhalational drugs, 23% received antibiotics, and 35% were HIV-seropositive. Forty-one percent of subjects had positive nasal cultures for S. aureus. The antibiotic susceptibility patterns were similar to the local hospital's outpatient isolates and no dominant strain was identified by arbitrarily primed polymerase chain reaction (AB-PCR). 
Variables significantly and independently associated with colonization included antibiotic use (odds ratio [OR] = 0.37; confidence interval [CI] = 0.18-0.77), active inhalational drug use within the HIV-seropositive population (OR = 2.36; CI = 1.10-5.10) and female gender (OR = 1.97; CI = 1.09-3.57). Characteristics not independently associated included injection drug use, HIV status, and CD4 count. The association with active inhalational drug use, a novel finding, may reflect alterations in the integrity of the nasal mucosa. The lack of association between HIV infection and S. aureus colonization, which is contrary to most previous studies, could be explained by our rigorous control for confounding variables or by a limited statistical power due to the sample sizes. abstract_id: PUBMED:30349525 Staphylococcus aureus Nasal Colonization: An Update on Mechanisms, Epidemiology, Risk Factors, and Subsequent Infections. Up to 30% of the human population are asymptomatically and permanently colonized with nasal Staphylococcus aureus. To successfully colonize human nares, S. aureus needs to establish solid interactions with human nasal epithelial cells and overcome host defense mechanisms. However, some factors like bacterial interactions in the human nose can influence S. aureus colonization and sometimes prevent colonization. On the other hand, certain host characteristics and environmental factors can predispose to colonization. Nasal colonization can cause opportunistic and sometimes life-threatening infections such as surgical site infections or other infections in non-surgical patients that increase morbidity, mortality as well as healthcare costs. abstract_id: PUBMED:27925079 Nasal colonization with Staphylococcus aureus in nursing students: ground for monitoring. Objective: To monitor bacterial strains of Staphylococcus aureus that are resistant or not to oxacillin in nursing undergraduate students, with an emphasis on the process of colonization. Method: A cross-sectional prevalence study carried out with 138 nursing students. The biological samples of the nasal cavity were collected in June 2015, by means of sterile swabs, which were subsequently submitted to confirmatory tests of catalase and coagulase. Isolated Staphylococcus aureus had their sensitivity profile determined by means of the Kirby-Bauer method. Descriptive, univariate and bivariate analyses were performed. Results: The prevalence of Staphylococcus aureus was 21.7%. Regarding the resistance profile, 24.1% of strains were resistant to oxacillin, with ampicillin being the antimicrobial with the greatest resistance (82.8%). Conclusion: The nasal cavity is an important reservoir of S. aureus in nursing students. The profile of isolated strains highlights the increase in Staphylococcus aureus resistance to antimicrobials such as oxacillin. abstract_id: PUBMED:38460550 Monitoring Staphylococcus aureus nasal colonization murine model using a bioluminescent methicillin-resistant S. aureus (MRSA). Staphylococcus aureus nasal carriage is considered a risk factor for infections, and the development of nasal decolonization strategies is highly relevant. Although they are not naturally colonized by Staphylococcus, mice are a good model for S. aureus nasal colonization. Murine models are easy to manipulate, and inter-laboratory reproducibility makes them suitable for nasal colonization studies.
Strategies using bioluminescent bacteria allow for the monitoring of infection over time without the need to sacrifice animals for bacterial quantification. In this study, we evaluated S. aureus nasal colonization in three mouse strains (BALB/c, C57BL/6, and Swiss Webster) using a bioluminescent strain (SAP231). In vitro, a visible Bioluminescent Signal Emission (BLSE) was observed until 10^6 bacteria and detected by IVIS® imaging system up to 10^4 cells. Animals were inoculated with one or two doses of approximately 10^9 colony-forming units (CFU) of SAP231. Swiss Webster mice showed the longest colonization time, with some animals presenting BLSE for up to 140 h. In addition, BLSE was higher in this strain. BALB/c and C57BL/6 strains showed consistent BLSE results for 48 h. BLSE intensity was higher in Swiss Webster inoculated with both doses. Three different positions for image capture were evaluated, with better results for the lateral and ventrodorsal positions. After the loss of BLSE, bacterial quantification was performed, and Swiss Webster mice presented more bacteria in the nasal cavity (approximately 10^5 CFU) than the other strains. Our results demonstrate that bioluminescent S. aureus allow monitoring of nasal colonization and estimation of the bacterial burden present in live animals until 48 h. abstract_id: PUBMED:31374164 Bacterial Anti-adhesives: Inhibition of Staphylococcus aureus Nasal Colonization. Bacterial adhesion to the skin and mucosa is often a fundamental and early step in host colonization, the establishment of bacterial infections, and pathology. This process is facilitated by adhesins on the surface of the bacterial cell that recognize host cell molecules. Interfering with bacterial host cell adhesion, so-called anti-adhesive therapeutics, offers promise for the development of novel approaches to control bacterial infections. In this review, we focus on the discovery of anti-adhesives targeting the high priority pathogen Staphylococcus aureus. This organism remains a major clinical burden, and S. aureus nasal colonization is associated with poor clinical outcomes. We describe the molecular basis of nasal colonization and highlight potentially efficacious targets for the development of novel nasal decolonization strategies. abstract_id: PUBMED:27474529 Host-Bacterial Crosstalk Determines Staphylococcus aureus Nasal Colonization. Staphylococcus aureus persistently colonizes the anterior nares of approximately one fifth of the population and nasal carriage is a significant risk factor for infection. Recent advances have significantly refined our understanding of S. aureus-host communication during nasal colonization. Novel bacterial adherence mechanisms in the nasal epithelium have been identified, and novel roles for both the innate and the adaptive immune response in controlling S. aureus nasal colonization have been defined, through the use of both human and rodent models. It is clear that S. aureus maintains a unique, complex relationship with the host immune system and that S. aureus nasal colonization is overall a multifactorial process which is as yet incompletely understood. abstract_id: PUBMED:27592264 HIV and colonization with Staphylococcus aureus in two maximum-security prisons in New York State. Objective: To evaluate the association between HIV and Staphylococcus aureus colonization after confounding by incarceration is removed.
Method: A cross sectional stratified study of all HIV infected and a random sample of HIV-uninfected inmates from two maximum-security prisons in New York State. Structured interviews were conducted. Anterior nares and oropharyngeal samples were cultured and S. aureus isolates were characterized. Log-binomial regression was used to assess the association between HIV and S. aureus colonization of the anterior nares and/or oropharynx and exclusive oropharynx colonization. Differences in S. aureus strain diversity between HIV-infected and uninfected individuals were assessed using Simpson's Index of Diversity. Results: Among 117 HIV infected and 351 HIV uninfected individuals assessed, 47% were colonized with S. aureus and 6% were colonized with methicillin resistant S. aureus. The prevalence of S. aureus colonization did not differ by HIV status (PR = 0.99, 95% CI = 0.76-1.24). HIV infected inmates were less likely to be exclusively colonized in the oropharynx (PR = 0.55, 95% CI = 0.30-0.99). Spa types t571 and t064 were both more prevalent among HIV infected individuals, however, strain diversity was similar in HIV infected and uninfected inmates. Conclusions: HIV infection was not associated with S. aureus colonization in these maximum-security prison populations, but was associated with decreased likelihood of oropharyngeal colonization. Factors that influence colonization site require further evaluation. abstract_id: PUBMED:25485071 Risk Factors for Methicillin Resistant Staphylococcus aureus Nasal Colonization of Healthy Children. Background: Nasal colonization of healthy children with Staphylococcus aureus is an important risk factor for different infections. Detection of colonized individuals with methicillin resistant S. aureus (MRSA) and its eradication is the proper prevention strategy for infection spread in the community and health-care centers. Objectives: The aim of this study was to determine the prevalence, associated risk factors and antibiotic resistance pattern among healthy children who were nasal carriers of S. aureus. Patients And Methods: This cross-sectional study was conducted on 350 one month to 14-year-old healthy children living in Kashan/Iran. The nasal specimens were cultured in blood agar medium for S. aureus. Positive cultures were evaluated for cephalothin, co-trimoxazole, clindamycin, ciprofloxacin, oxacillin and vancomycin susceptibility by the disc diffusion method and E-test. Risk factors for nasal carriage of S. aureus and MRSA were evaluated. Results: Frequency of S. aureus nasal carriage was 92 from 350 cases (26.2%), amongst which 33 (35.9%) were MRSA. Isolates indicated an overall resistance of 52.2% to cephalothin, 33.7% to co-trimoxazol, 26.1% to ciprofloxacin, 26.1% to clindamycin, 35.9% to oxacillin and 4.3% to vancomycin. Factors associated with MRSA nasal carriage included gender (P value 0.001), age of less than four years (P value 0.016), number of individuals in the family (P value < 0.001), antibiotic use (P value < 0.001) and admission (P value < 0.001) during the previous three months, parental smoking (P value < 0.001) and sleeping with parents (P value 0.022). Conclusions: Age of less than four years, male sex, family size being more than four, antibiotic use and admission during the previous three months, parental smoking and sleeping with parents were independent risk factors for nasal colonization with MRSA. Answer: Staphylococcus aureus nasal colonization in HIV outpatients can be either persistent or transient. 
A study that screened HIV outpatients with no history of hospitalization within the previous 2 years found that among those colonized with S. aureus, 25.2% were transient carriers and 39.4% were persistent carriers. Persistent colonization was further subdivided into simple (same DNA profile) or multiple (different DNA profiles) based on pulsed-field gel electrophoresis (PFGE) genotyping of the strains. The most common characteristic of colonization was simple persistent colonization showing the same genomic profile (PUBMED:18371514). This suggests that while a significant proportion of HIV outpatients experience transient colonization, a larger percentage are persistently colonized with S. aureus, with many showing the same strain over time. The distinction between persistent and transient colonization is important for understanding the epidemiology of S. aureus in HIV-infected populations and for developing strategies to prevent infections associated with colonization.
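To make the carriage categories concrete, the following Python sketch encodes one plausible reading of the three-swab scheme described in PUBMED:18371514. Note that the threshold is an assumption: the abstract does not state how many positive cultures define persistence, so two or more of three is used here purely for illustration.

def classify_carriage(pfge_profiles):
    """Classify S. aureus nasal carriage from three serial swabs.

    pfge_profiles: one entry per visit; None means a negative culture,
    otherwise a PFGE type label (e.g. "A", "B").
    Assumed rule: 0 positives -> no colonization, 1 -> transient,
    >=2 -> persistent (simple if all profiles match, else multiple).
    """
    positives = [p for p in pfge_profiles if p is not None]
    if not positives:
        return "no colonization"
    if len(positives) == 1:
        return "transient colonization"
    if len(set(positives)) == 1:
        return "persistent colonization (simple: same PFGE profile)"
    return "persistent colonization (multiple: different PFGE profiles)"

print(classify_carriage(["A", "A", "A"]))    # simple persistent
print(classify_carriage(["A", None, "B"]))   # multiple persistent
print(classify_carriage([None, "A", None]))  # transient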
Instruction: Can a clinical score aid in early diagnosis and treatment of various stroke syndromes? Abstracts: abstract_id: PUBMED:36792807 Clinical score for early diagnosis and treatment of stroke-like episodes in MELAS syndrome. Background And Objectives: Stroke-like episodes (SLEs) in patients with MELAS syndrome are often initially misdiagnosed as acute ischemic stroke (AIS), resulting in treatment delay. We aimed to determine clinical features that may distinguish SLEs from AISs and explore the benefit of early L-arginine treatment on patient outcomes. Methods: We looked retrospectively for MELAS patients admitted between January 2005 and January 2022 and compared them to an AIS cohort with similar lesion topography. MELAS patients who received L-arginine within 40 days of their first SLE were defined as the early treatment group and the remaining as the late or no treatment group. Results: Twenty-three SLEs in 10 MELAS patients and 21 AISs were included. SLE patients had significantly different features: they were younger, more commonly reported hearing loss, had a lower body mass index, more commonly had a combination of headache and/or seizures at presentation, serum lactate was higher, and hemiparesis was less common. An SLE Early Clinical Score (SLEECS) was constructed by designating one point to each of the above features. SLEECS ≥ 4 had 80% sensitivity and 100% specificity for SLE diagnosis. Compared to late or no treatment, early treatment group patients (n = 5) had fewer recurrent SLEs (total 2 vs. 11), fewer seizures (14% vs. 25%, p = 0.048), a lower degree of disability at first and last follow-up (modified Rankin scale, mRS 2 ± 0.7 vs. 4.2 ± 1, p = 0.005; 2 ± 0.7 vs. 5.8 ± 0.5, p < 0.001, respectively), and lower mortality (0% vs. 80%, p = 0.048). Conclusions: The SLEECS model may aid in the early diagnosis and treatment of SLEs and lead to improved clinical outcomes. abstract_id: PUBMED:9519933 Can a clinical score aid in early diagnosis and treatment of various stroke syndromes? Background: Accurate and timely diagnosis of hemorrhagic and nonhemorrhagic strokes helps in patient management. Neuroimaging studies are useful in diagnosis and distinction of hemorrhagic (HS) and nonhemorrhagic (NHS) strokes. The use of clinical variables, such as the Siriraj stroke score (SSS), has shown good sensitivity, specificity and predictive values (distinguishing stroke types). The aim of our study was to evaluate the use of the SSS in a U.S. population and assess whether it could help expedite treatment decisions. Methods: Levels of consciousness, vomiting, headache and atheroma markers used in the SSS were applied to patients who met the criteria for stroke. Results: Of the 302 patients identified, the SSS classified 254, with sensitivity of 36% (HS) and 90% (NHS) and positive predictive values of 77% and 61%, respectively. Conclusion: Our results suggest that the SSS is not reliable in distinguishing stroke types (in a US population). Definitive neuroimaging studies are needed prior to thrombolytic therapy. abstract_id: PUBMED:30010123 Unusual Clinical Presentations Challenging the Early Clinical Diagnosis of Creutzfeldt-Jakob Disease. The introduction of prion RT-QuIC, an ultrasensitive specific assay for the in vivo detection of the abnormal prion protein, has significantly increased the potential for an early and accurate clinical diagnosis of Creutzfeldt-Jakob disease (CJD). However, in the clinical setting, the early identification of patients with possible CJD is often challenging.
Indeed, CJD patients may present with isolated symptoms that remain the only clinical manifestation for some time, or with neurological syndromes atypical for CJD. To enhance awareness of unusual disease presentations and promote earlier diagnosis, we reviewed the entire spectrum of atypical early manifestations of CJD, mainly reported to date as case descriptions or small case series. They included sensory disturbances (either visual or auditory), seizures, isolated psychiatric manifestations, atypical parkinsonian syndromes (corticobasal syndrome, progressive supranuclear palsy-like), pseudobulbar syndrome, isolated involuntary movements (dystonia, myoclonus, chorea, blepharospasm), acute or subacute onsets mimicking a stroke, isolated aphasia, and neuropathy. Since CJD is a rare disease and its clinical course rapidly progressive, an in-depth understanding and awareness of early clinical features are mandatory to enhance the overall diagnostic accuracy in its very early stages and to recruit optimal candidates for future therapeutic trials. abstract_id: PUBMED:37344364 Evaluating the use of the ABCD2 score as a clinical decision aid in the emergency department: Retrospective observational study. Objective: Clinical decision aids (CDAs) can help clinicians with patient risk assessment. However, there is little data on CDA calculation, interpretation and documentation in real-world ED settings. The ABCD2 score (range 0-7) is a CDA used for patients with transient ischaemic attack (TIA) and assesses risk of stroke, with a score of 0-3 being low risk. The aim of this study was to describe ABCD2 score documentation in patients with an ED diagnosis of TIA. Methods: Retrospective observational study of patients with a working diagnosis of a TIA in two Australian EDs. Data were gathered using routinely collected data from health informatics sources and medical records reviewed by a trained data abstractor. ABCD2 scores were calculated and compared with what was documented by the treating clinician. Data were presented using descriptive analysis and scatter plots. Results: Among the 367 patients with an ED diagnosis of TIA, clinicians documented an ABCD2 score in 45% (95% CI 40-50%, n = 165). Overall, there was very good agreement between calculated and documented scores (Cohen's kappa 0.90). The mean documented and calculated ABCD2 scores were similar (3.8, SD = 1.5, n = 165 vs 3.7, SD = 1.8, n = 367). Documented scores on the threshold of low and high risk were more likely to be discordant with calculated scores. Conclusions: The ABCD2 score was documented in less than half of eligible patients. When documented, clinicians were generally accurate with their calculation and application of the ABCD2. No independent predictors of ABCD2 documentation were identified. abstract_id: PUBMED:33724895 FM Combined With NIHSS Score Contributes to Early AIS Diagnosis and Differential Diagnosis of Cardiogenic and Non-Cardiogenic AIS. A growing number of researchers have suggested that fibrin monomer (FM) plays an important role in early diagnosis of thrombotic diseases. We explored the application of FM in the diagnosis and classification of acute ischemic stroke (AIS). The differences in FM, D-dimer, and NIHSS scores between different TOAST (Trial of ORG 10172 in Acute Stroke Treatment) types were analyzed with one-way ANOVA; the correlation between FM, D-dimer and NIHSS score in patients with different TOAST classification was analyzed by Pearson linear correlation.
The ROC curve was utilized to analyze the diagnostic performance. 1. FM was more effective in diagnosing patients with AIS than D-dimer. 2. The FM level in cardiogenic AIS was significantly different from that in non-cardiogenic patients (P < 0.05); the NIHSS score in cardiogenic stroke was significantly higher than in the atherosclerotic and unexplained stroke groups, whereas no statistical difference was observed in the D-dimer level between these groups (P > 0.05). 3. The correlation between FM and NIHSS scores in the cardiogenic (r = 0.3832) and atherosclerotic (r = 0.3144) groups was statistically significant. 4. FM exhibited the highest diagnostic efficacy for cardiogenic AIS; furthermore, FM combined with the NIHSS score was more conducive to the differential diagnosis of cardiogenic and non-cardiogenic AIS. FM detection contributes to the early diagnosis of AIS, and is important for the differential diagnosis of different TOAST types of AIS. Moreover, FM combined with the NIHSS score is valuable in the differential diagnosis of cardiogenic and non-cardiogenic AIS. abstract_id: PUBMED:25664241 Spectrum of complicated migraine in children: A common profile in aid to clinical diagnosis. Complicated migraine encompasses several individual clinical syndromes of migraine. Such a syndrome in children frequently presents with various neurological symptoms in the Emergency Department. An acute presentation in the absence of headache presents a diagnostic challenge. A delay in diagnosis and treatment may have medicolegal implications. To date, there are no reports of a common clinical profile proposed to aid the clinical diagnosis of complicated migraine. In this clinical review, we propose and describe: (1) a common clinical profile to aid the clinical diagnosis of the spectrum of complicated migraine; (2) how it can be used to differentiate complicated migraine from migraine without aura, migraine with aura, and seizure; (3) the status of complicated migraine in the International Headache Society classification 2013; and (4) a common treatment strategy for the spectrum of migraine. To diagnose complicated migraine clinically, it is imperative to adhere to the proposed profile. This will optimize the use of investigations and also avoid the legal implications of delayed management. The proposed common clinical profile is incongruent with the International Headache Society 2013 classification. Future classifications should minimize the dissociation from clinically encountered syndromes and coin a single term to collectively address this subtype of migraine with an acute presentation of a common clinical profile. abstract_id: PUBMED:23483140 Stroke syndromes and clinical management. Knowledge of brain syndromes is essential for stroke physicians and neurologists, particularly of those syndromes that can be extremely difficult and challenging to diagnose due to the great variability of symptom presentation, yet are of clinical significance given their potentially devastating effects and poor outcomes. The diagnosis and understanding of stroke syndromes have improved dramatically over the years with the advent of modern imaging, while the management is similar to general care as recommended by various guidelines, in addition to care of such patients on specialized units with facilities for continuous monitoring of vital signs and dedicated stroke therapy.
Such critical care can be provided either in the acute stroke unit, the medical intensive care unit or the neurological intensive care unit. There may be no definitive treatment for reversing stroke syndromes, but it is important to identify the signs and symptoms for an early diagnosis to prompt quick treatment, which can prevent further devastating complications following stroke. The aim of this article is to discuss some of the important clinical stroke syndromes encountered in clinical practice and discuss their management. abstract_id: PUBMED:26075862 Pheochromocytoma - why is its early diagnosis so important for patient? Pheochromocytoma may present with various clinical signs and symptoms due to continuous and/or paroxysmal release of catecholamines. Arterial hypertension may be sustained and/or paroxysmal, and palpitations are mainly due to sinus tachycardia. In some cases, severe cardiovascular complications such as hypertensive emergency, myocardial ischemia, cardiomyopathy and heart failure, multisystem crisis or shock may occur, even as the first manifestation of pheochromocytoma. Catecholamine release may also be associated with arrhythmias - tachycardias (supraventricular or ventricular) or, less frequently, bradycardias (AV blocks and junctional). The effect of catecholamines is not restricted to the myocardium, but may also lead to cerebrovascular impairment such as transient ischemic attack or stroke. As many of these complications may be life-threatening, the only prevention is early diagnosis of pheochromocytoma and proper treatment, in particular in specialized centers. abstract_id: PUBMED:33270112 Higher Baseline Cortical Score Predicts Good Outcome in Patients With Low Alberta Stroke Program Early Computed Tomography Score Treated with Endovascular Treatment. Background: Patients with large vessel occlusion and noncontrast computed tomography (CT) Alberta Stroke Program Early CT Score (ASPECTS) <6 may benefit from endovascular treatment (EVT). There is uncertainty about who will benefit from it. Objective: To explore the predicting factors for good outcome in patients with ASPECTS <6 treated with EVT. Methods: We retrospectively reviewed 60 patients with ASPECTS <6 treated with EVT in our center between March 2018 and June 2019. Patients were divided into 2 groups according to the modified Rankin Scale (mRS) score at 90 d: good outcome group (mRS 0-2) and poor outcome group (mRS ≥3). Baseline and procedural characteristics were collected for univariate and multivariate regression analyses to explore the variables influencing good outcome. Results: Good outcome (mRS 0-2) was achieved in 24 (40%) patients after EVT and mortality was 20% at 90 d. Compared with the poor outcome group, higher baseline cortical ASPECTS (c-ASPECTS) and lower rates of intracranial hemorrhage and malignant brain edema after thrombectomy were noted in the good outcome group (all P < .01). Multivariate logistic regression showed that only baseline c-ASPECTS (≥3) was a positive factor for good outcome (odds ratio = 4.29; 95% CI, 1.21-15.20; P = .024). The receiver operating characteristics curve indicated a moderate value of c-ASPECTS for predicting good outcome, with an area under the curve of 0.70 (95% CI, 0.56-0.83; P = .011). Conclusion: Higher baseline c-ASPECTS was a predictor for good clinical outcome in patients with ASPECTS <6 treated with EVT, which could be helpful for treatment decisions.
abstract_id: PUBMED:28127292 A Critical Review of Alberta Stroke Program Early CT Score for Evaluation of Acute Stroke Imaging. Assessment of ischemic stroke lesions on computed tomography (CT) or MRI using the Alberta Stroke Program Early CT Score (ASPECTS) is widely used to guide acute stroke treatment. We aimed to review the current evidence on ASPECTS. Originally, the score was developed for standardized lesion assessment on non-contrast CT (NCCT). Early studies described ASPECTS as a predictor of functional outcome and symptomatic intracranial hemorrhage after iv-thrombolysis, with a threshold of ≤7 suggested to identify patients at high risk. Subsequent studies pointed rather toward a linear relationship between ASPECTS and functional outcome. ASPECTS has also been applied to assess perfusion CT and diffusion-weighted MRI (DWI). Cerebral blood volume ASPECTS proved to be the best predictor of outcome, outperforming NCCT-ASPECTS in some studies. For DWI-ASPECTS, varying thresholds to identify patients at risk of poor outcome have been reported. ASPECTS has been used for patient selection in three of the five groundbreaking trials proving efficacy of mechanical thrombectomy published in 2015. ASPECTS values predict functional outcome after thrombectomy. Moreover, the treatment effect of thrombectomy appears to be smaller or absent at low ASPECTS values, while patients with ASPECTS 5-10 do clearly benefit from mechanical thrombectomy. However, as patients with low ASPECTS values were excluded from recent trials, data on this subgroup are limited. There are several limitations to ASPECTS addressed in a growing number of studies. The score is limited to the anterior circulation, the template is unequally weighted and correlation with lesion volume depends on lesion location. Overall, ASPECTS is a useful and easily applicable tool for assessment of prognosis in acute stroke treatment and to help guide acute treatment decisions, regardless of whether MRI or CT is used. Patients with low ASPECTS values are unlikely to achieve good outcome. However, methodological constraints of ASPECTS have to be considered, and based on present data, a clear cutoff value to define "low ASPECTS values" cannot be given. Answer: Yes, a clinical score can aid in the early diagnosis and treatment of various stroke syndromes. Clinical scores are tools that use specific clinical features to help differentiate between types of strokes or other neurological conditions, and they can expedite treatment decisions. For instance, the SLEECS model, which assigns points to features such as younger age, hearing loss, lower BMI, a combination of headache and/or seizures at presentation, higher serum lactate, and absence of hemiparesis, has been shown to have 80% sensitivity and 100% specificity for diagnosing stroke-like episodes (SLEs) in MELAS syndrome (PUBMED:36792807). However, the reliability of clinical scores can vary. The Siriraj stroke score (SSS), for example, did not show reliability in distinguishing stroke types in a U.S. population, indicating that definitive neuroimaging studies are necessary before thrombolytic therapy (PUBMED:9519933). The ABCD2 score, used for patients with transient ischemic attack (TIA), assesses the risk of stroke and has been documented in less than half of eligible patients in an emergency department setting. When documented, clinicians were generally accurate with their calculation and application of the ABCD2 score (PUBMED:37344364).
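For reference, the conventional ABCD2 point assignments — consistent with the 0-7 range and the 0-3 low-risk band described above — can be written as a short Python function; the example patient is hypothetical:

def abcd2_score(age, sbp, dbp, unilateral_weakness,
                speech_disturbance, duration_min, diabetes):
    """Conventional ABCD2 point assignments (total range 0-7)."""
    score = 1 if age >= 60 else 0                 # Age >= 60 y
    score += 1 if sbp >= 140 or dbp >= 90 else 0  # Blood pressure >= 140/90 mmHg
    if unilateral_weakness:                       # Clinical features
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:                        # Duration of symptoms
        score += 2
    elif duration_min >= 10:
        score += 1
    score += 1 if diabetes else 0                 # Diabetes
    return score

s = abcd2_score(age=72, sbp=150, dbp=85, unilateral_weakness=True,
                speech_disturbance=False, duration_min=45, diabetes=False)
print(s, "(low risk)" if s <= 3 else "(moderate-to-high risk)")  # 5 (moderate-to-high risk)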
Additionally, the combination of fibrin monomer (FM) levels with the NIHSS score has been suggested to contribute to the early diagnosis of acute ischemic stroke (AIS) and the differential diagnosis of cardiogenic and non-cardiogenic AIS (PUBMED:33724895). In the context of complicated migraine in children, a common clinical profile has been proposed to aid in clinical diagnosis, which could optimize the use of investigation and avoid legal implications of delayed management (PUBMED:25664241). Overall, clinical scores can be valuable tools for aiding in the early diagnosis and treatment of various stroke syndromes, but their use should be complemented by neuroimaging and other diagnostic methods for accurate patient management (PUBMED:23483140, PUBMED:28127292).
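Finally, returning to the SLEECS example: the score is a simple additive rule, one point for each of the six features named in PUBMED:36792807, with a total of 4 or more favoring a stroke-like episode over ischemic stroke. A minimal Python sketch follows; because the abstract gives no numeric cut-offs for "younger", "lower BMI" or "higher lactate", each feature is passed in as a clinical judgment:

def sleecs(younger_age, hearing_loss, low_bmi,
           headache_or_seizures, high_serum_lactate, no_hemiparesis):
    """SLE Early Clinical Score: one point per feature, range 0-6."""
    return sum([younger_age, hearing_loss, low_bmi,
                headache_or_seizures, high_serum_lactate, no_hemiparesis])

score = sleecs(younger_age=True, hearing_loss=True, low_bmi=False,
               headache_or_seizures=True, high_serum_lactate=True,
               no_hemiparesis=False)
# In the study cohort, SLEECS >= 4 had 80% sensitivity and 100%
# specificity for distinguishing SLE from acute ischemic stroke.
print(score, "suggests SLE" if score >= 4 else "below the SLE threshold")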
Instruction: SPF and UVA-PF sunscreen evaluation: are there good correlations among results obtained in vivo, in vitro and in a theoretical Sunscreen Simulator? Abstracts: abstract_id: PUBMED:27012956 SPF and UVA-PF sunscreen evaluation: are there good correlations among results obtained in vivo, in vitro and in a theoretical Sunscreen Simulator? A real-life exercise. Objective: Strategies to optimize the development of sunscreens include the use of theoretical sunscreen simulators to predict sun protection factor (SPF) and UVA protection factor (UVA-PF) and in vitro measurements of UVA-PF. The aims of this study were to assess the correlations between (1) SPF and UVA-PF results obtained in a theoretical sunscreen simulator with those observed in vivo (SPF and UVA-PF) and in vitro (UVA-PF) and (2) the results of UVA-PF observed in vitro and in vivo for products in different galenic forms, containing pigments or not. Methods: BASF Sunscreen Simulator software was used to evaluate the theoretical performance of formulations regarding SPF and UVA protection. In vitro UVA-PF and in vivo SPF were determined for all formulations. UVA-PF in vivo measurements were carried out only on products for which the galenic forms (compact foundations and lip balms) or the presence of dye or pigments could make the results of UVA-PF in vitro less reliable (due to a possible uneven film formation). Results: The results of the SPF calculated by the BASF Sunscreen Simulator presented a very good correlation with SPF observed in vivo in the absence of pigments (r = 0.91; P < 0.05) and a good correlation in the presence of pigments (r = 0.70; P < 0.05). The UVA-PF calculated by the BASF Sunscreen Simulator also exhibited a very good correlation with UVA-PF measured in vitro (r = 0.88; P < 0.05) for the formulations not containing pigment and a good correlation (r = 0.75; P < 0.05) for the formulations containing pigment. The correlation of the same UVA-PF calculated by the BASF Sunscreen Simulator with UVA-PF measured in vivo for the formulations containing pigment was r = 0.74 (P < 0.05), which is considered good. In addition, the measurements of UVA-PF in vivo presented a good correlation with the values obtained in vitro (r = 0.74; P < 0.05). Conclusion: In the present study, the use of the BASF Sunscreen Simulator and in vitro UVA tests showed good correlations with in vivo results and could be considered as valuable resources in the development of sunscreens. abstract_id: PUBMED:24417335 New noninvasive approach assessing in vivo sun protection factor (SPF) using diffuse reflectance spectroscopy (DRS) and in vitro transmission. Background/purpose: In the past 56 years, many different in vitro methodologies have been developed and published to assess the sun protection factor (SPF) of products, but there is no method that has 1:1 correlation with in vivo measurements. Spectroscopic techniques have been used to noninvasively assess the UVA protection factor with good correlation to in vivo UVA-PF methodologies. To assess the SPF of a sunscreen product by the diffuse reflectance spectroscopy (DRS) technique, it is necessary to also determine the absorbance spectrum of the test material in the UVB portion of the spectrum (290-320 nm). However, because of the high absorbance characteristics of the stratum corneum and epidermis, the human skin does not remit enough UVB radiation to be used to measure the absorption spectrum of the applied product on skin.
In this work, we present a new method combining the evaluation of the absolute UVA absorption spectrum, as measured by DRS with the spectral absorbance 'shape' of the UVB absorbance of the test material as determined with current in vitro thin film spectroscopy. Methods: The measurement of the in vivo UVA absorption spectrum involves the assessment of the remitted intensity of monochromatic UVA radiation (320-400 nm) before and after a sunscreen product was applied on skin using a spectrofluorimeter Fluorolog 3, FL3-22 (Yvon Horiba, Edison, NJ, USA). The probe geometry assures that light scattering products as well as colored products may be correctly assessed. This methodology has been extensively tested, validated, and reported in the literature. The in vitro absorption spectrum of the sunscreen samples and polyvinyl chloride (PVC) films 'surrogate' sunscreen standards were measured using Labsphere® UV-2000S (Labsphere, North Sutton, NH, USA). Sunscreens samples were tested using PMMA Helioplates (Helioscience, Marseille, France) as substrates. The UVB absorbance spectrum (Labsphere) is 'attached' to the UVA absorbance spectrum (diffuse reflectance) with the UVB absorbance matched to the UVA absorbance at 340 nm to complete the full spectral absorbance from which an estimate the SPF of the product can be calculated. Results: Seventeen test materials with known in vivo SPF values were tested. Two of the tested products were PVC sunscreen thin films with 10-15 micrometers thickness and were used to investigate the absorption spectrum of these films when applied on different reflectance surfaces. Similar to the human in vivo SPF test, the developed methodology suggests limiting the use on Fitzpatrick skin phototypes I to III. The correlation of this new method with in vivo clinical SPF values was 0.98 (r2) with a slope of 1.007. Conclusion: This new methodology provides a new approach to determine SPF values without the extensive UV irradiation procedures (and biological responses) currently used to establish sunscreen efficacy. Further work will be conducted to establish methods for evaluation of products that are not photostable. abstract_id: PUBMED:34060681 Multi-laboratory study of hybrid diffuse reflectance spectroscopy to assess sunscreen SPF and UVA-PFs. Background: Proof-of-principle studies have established the use of Hybrid Diffuse Reflectance Spectroscopy (HDRS) methods to assess both Ultraviolet-A Protection Factor (UVA-PF) and Sun Protection Factor (SPF) indices in individual laboratories. Methods: Multiple laboratories evaluated 23 emulsions and two spray sunscreen products to evaluate repeatability and accuracy of assessment of SPF and UVA-PF values, using HDRS test systems from various manufacturers using different designs. Results: All of the laboratories reported similar SPF and UVA-PF values within a narrow range of values to establish the reliability of the HDRS methodology across laboratories, independent of equipment manufacturer or operator. Conclusion: HDRS test methodology provides a reliable objective instrumental estimation of sunscreen SPF and UVA-PF. These data were provided to ISO-TC217 WG7 to substantiate the ongoing development of an ISO Standard HDRS Method. abstract_id: PUBMED:33728658 In vitro skin model for characterization of sunscreen substantivity upon perspiration. Objective: The resistance of sunscreens to the loss of ultraviolet (UV) protection upon perspiration is important for their practical efficacy. 
However, this topic is largely overlooked in evaluations of sunscreen substantivity due to the relatively few well-established protocols compared to those for water resistance and mechanical wear. Methods: In an attempt to achieve a better fundamental understanding of sunscreen behaviour in response to sweat exposure, we have developed a perspiring skin simulator, containing a substrate surface that mimics sweating human skin. Using this perspiring skin simulator, we evaluated sunscreen performance upon perspiration by in vitro sun protection factor (SPF) measurements, optical microscopy, ultraviolet (UV) reflectance imaging and coherent anti-Stokes Raman scattering (CARS) microscopy. Results And Conclusion: Results indicated that perspiration reduced sunscreen efficiency through two mechanisms, namely sunscreen wash-off (impairing the film thickness) and sunscreen redistribution (impairing the film uniformity). Further, we investigated how the sweat rate affected these mechanisms and how sunscreen application dose influenced UV protection upon perspiration. As expected, higher sweat rates led to a large loss of UV protection, while a larger application dose led to larger amounts of sunscreen being washed-off and redistributed but also provided higher UV protection before and after sweating. abstract_id: PUBMED:34601762 Laboratory testing of sunscreens on the US market finds lower in vitro SPF values than on labels and even less UVA protection. Background: New research has attributed increased significance to the causal link between ultraviolet A (UVA) radiation and immunosuppression and carcinogenesis. In the United States, sunscreens are labeled with only their sun protection factor (SPF) and an imprecise term "broad-spectrum protection." Sunscreen marketing and efficacy evaluations continue to be based primarily on skin redness (sunburn) or erythema. We sought to evaluate the ultraviolet (UV) protection offered by common sunscreen products on the US market using laboratory-measured UV-absorption testing and comparing with computer-modeled protection and the labeled SPF values. This approach enables an investigation of the relationship between the labeled SPF and measured UVA protection, a factor that is ignored in current regulations. Methods: Fifty-one sunscreen products for sale in the United States with SPF values from 15 to 110 and labeled as providing broad-spectrum protection were tested using a commercial laboratory. All products were evaluated using the ISO 24443:2012 method for sunscreen effectiveness. The final absorbance spectra were used for analysis of in vitro UV protection. Results: In vitro SPF values from laboratory-measured UV absorption and computer modeling were on average just 59 and 42 percent of the labeled SPF. The majority of products provided significantly lower UVA protection with the average unweighted UVA protection factor just 24 percent of the labeled SPF. Conclusion: Regulations and marketplace forces promote sunscreens that reduce sunburn instead of products that provide better, more broad-spectrum UV protection. The production and use of products with broad spectrum UV protection should be incentivized, removing the emphasis on sunburn protection and ending testing on people. abstract_id: PUBMED:37954729 Sunscreen Label Marketing Towards Pediatric Populations: Guidance for Navigating Sunscreen Choice. Introduction: Sunscreen marketing to specific demographics is largely unregulated. 
Marketing specifically targeting pediatric populations has the potential to drive consumer behavior. The American Academy of Pediatrics (AAP) and American Academy of Dermatology (AAD) provide recommendations for sunscreen use in children over the age of six months. This study sought to determine if sunscreen products marketed toward pediatric populations align with healthcare guidelines. Materials And Methods: Sunscreens available in major retail outlets in the Philadelphia area were cataloged and reviewed for marketing targeting specific demographics such as "baby", "babies", "children", "kids", "sports", and "active". The products were reviewed for sun protection factor (SPF), broad-spectrum ultraviolet (UV) protection, water resistance, active UV filters, and application method. Results: Of 410 sunscreens cataloged, 27 were marketed towards "baby" or "babies", 44 towards "children" or "kids", and 71 towards "sports" or "active". All of the sunscreen products reviewed targeting the pediatric population offered water resistance for up to 80 minutes and broad-spectrum UV coverage. Sunscreens targeting "baby" or "babies" aligned most closely with AAP guidelines for sunscreen use in pediatric populations, with 92.6% offering an SPF between 15 and 50 and no products including oxybenzone as a UV filter. However, sunscreens targeting "children", "kids", "sports", and "active" bore a close resemblance to the overall sunscreen profile for all demographics but with a higher percentage of products containing oxybenzone. Oxybenzone was found in 11.4% of "children" and "kids" products and 16.9% of "sports" and "active" sunscreen products, compared to 7.6% of all sunscreen products available, and was also found in most sunscreen products with an SPF of 70 or higher. Conclusion: Sunscreen products marketed towards "baby" and "babies" tend to align closely with guidelines for sunscreen use in the pediatric population for children over six months of age; however, those with brand marketing towards "children", "kids", "sports", and "active" do not. Recommending sunscreen products with an SPF of 30 to 50 for this demographic, however, would sufficiently meet the guidelines set forth by the AAP and AAD. abstract_id: PUBMED:31206814 Evaluating sunscreen ultraviolet protection using a polychromatic diffuse reflectance device. Background: Sun protection factor (SPF) and UVA protection factor (UVA-PF) are determined using in vivo tests, with high exposures of subjects to ultraviolet (UV) radiation. Hybrid diffuse reflectance spectroscopy (HDRS) enables estimation of both indices using only trace amounts of UVB. However, the equipment requires two expensive monochromators that must synchronously scan the spectrum. Methods: An alternate approach was developed using a polychromatic source that illuminates the skin via a custom light-guide array; the diffusely reflected light is measured with a photomultiplier. The ratio of the diffuse reflectance with and without the sunscreen on the skin determines the polychromatic diffuse reflectance UVA-PF (PDRS UVA-PF0). This factor was used to adjust in vitro UV spectroscopy scans of the sunscreen (with and without UV exposure to assess photostability) to calculate SPF and UVA protection factors. Ten sunscreens were evaluated and compared to in vivo SPF and UVA-PF values. Results: The data show an excellent correlation with known in vivo determinations.
Conclusion: This polychromatic HDRS approach uses simpler, faster, and less expensive equipment to determine both UVA-PF and SPF without high doses of UV radiation to the test subjects. abstract_id: PUBMED:33549814 Influence of physical-mechanical properties on SPF in sunscreen formulations on ex vivo and in vivo skin. The sun protection factor (SPF) is related to the selected UV filters. The objective of this study was to evaluate and compare the rheological behavior and texture profile of two sunscreen formulations and to correlate these data with the obtained SPF values. Two formulations (F1 and F2) were developed with the same type and amount of UV filters, with one of them (F2) also containing ethoxylated lanolin as an additional film former. Their rheological behavior, texture profile and in vitro and in vivo SPF were analyzed. The film-forming properties were evaluated by skin profilometry and diffuse reflectance spectroscopy. The structures of the formulations were examined by two-photon tomography combined with fluorescence lifetime imaging, and the penetration profile into the stratum corneum was investigated by tape stripping. The formulation with lanolin presented lower and constant values for physical-mechanical parameters, with a higher and better reproducible SPF. Neither formulation penetrated the viable epidermis. In conclusion, formulations with better surface deposition on the skin surface can influence the film formation and, consequently, improve the SPF. These findings are important to improve the efficacy of sunscreen formulations and reduce the addition of UV filters. abstract_id: PUBMED:33025740 In vivo sun protection factor and UVA protection factor determination using (hybrid) diffuse reflectance spectroscopy and a multi-lambda-LED light source. Sun protection factor (SPF) values are currently determined using an invasive procedure, in which the volunteers are irradiated with ultraviolet (UV) light. Non-invasive approaches based on hybrid diffuse reflectance spectroscopy (HDRS) have shown a good correlation with conventional SPF testing. Here, we present a novel compact and adjustable DRS test system. The in vivo measurements were performed using a multi-lambda-LED light source and an 84-channel imaging spectrograph with a fiber optic probe for detection. A transmission spectrum was calculated based on the reflectance measured with sunscreen and the reflectance measured without sunscreen. The preexposure in vitro spectrum was fitted to the in vivo spectrum. Each of the 11 test products was investigated on 10 volunteers. The SPF and UVA-PF values obtained by this new approach were compared with in vivo SPF results determined by certified test institutes. Correlation coefficients of R2 = 0.86 for SPF and R2 = 0.92 for UVA-PF were calculated. Having examined various approaches to applying the HDRS principle, we found that the method presented here produces valid and reproducible results, suggesting that the multi-lambda-LED device is suitable for in vivo SPF testing based on the HDRS principle as well as for in vivo UVA-PF measurements. abstract_id: PUBMED:34980740 New in vitro SPF Evaluation Method for Hydrophilic Sunscreen Samples. A new method was developed for the in vitro sun protection factor (SPF) evaluation of sunscreen samples. A new type of substrate, a hydroxyalkyl cellulose-coated plate, was also prepared specifically for hydrophilic samples.
This new substrate was required because hydrophilic samples would be unlikely to wet the surface of the standard cosmetic PMMA UV evaluation plate. A super-hydrophilic quartz plate was prepared by corona-discharge treatment before an aqueous solution of hydroxyalkyl cellulose was spread on it. A flat and uniform hydroxyalkyl cellulose film was subsequently formed through the evaporation of water. Special care was taken to inhibit the generation of spatial non-uniformity. Six hydrophilic sunscreen samples with in vivo SPF values of 56, 55, 52, 25, 15, and 4 were then applied to the prepared hydroxyalkyl cellulose-coated plate, as well as a super-hydrophilic quartz plate and a flat hydrophobic PMMA plate. The thicknesses of the applied layers were determined using a wheel-shaped wet film thickness gauge immediately after the application, and UV transmission was measured using an SPF analyzer. The value of in vitro SPF was calculated from the UV absorbance and the thickness of the layer. For two out of the six samples, the PMMA plate could not be used, as the samples were unable to wet the PMMA surface. Relatively small differences were observed between the in vitro SPF values when the super-hydrophilic and hydroxyalkyl cellulose-coated plates were used. Samples exhibiting higher in vivo SPF were also associated with higher in vitro SPF values, although a linear relationship was not observed. In contrast to the super-hydrophilic plate, whose super-hydrophilicity has a half-life of only approximately five days, the hydrophilicity of the hydroxyalkyl cellulose-coated plate scarcely varied during six months of storage. Finally, a simplified evaluation method was also proposed. The validity of the method was verified through a ring test where three researchers employed this method in different laboratories at three independent organizations. Answer: Yes, there are good correlations among results obtained in vivo, in vitro, and in a theoretical Sunscreen Simulator for SPF and UVA-PF sunscreen evaluation. The study using the BASF Sunscreen Simulator software to predict SPF and UVA protection showed very good correlations with in vivo SPF in the absence of pigments (r = 0.91) and a good correlation in the presence of pigments (r = 0.70). Similarly, the UVA-PF calculated by the simulator also exhibited very good correlation with in vitro UVA-PF for formulations not containing pigment (r = 0.88) and a good correlation for those containing pigment (r = 0.75). The correlation of the same UVA-PF calculated by the simulator with in vivo measurements for formulations containing pigment was also considered good (r = 0.74). Additionally, in vivo UVA-PF measurements presented a good correlation with the values obtained in vitro (r = 0.74) (PUBMED:27012956). Moreover, a new methodology combining diffuse reflectance spectroscopy (DRS) with in vitro transmission showed a high correlation (r² = 0.98) with in vivo clinical SPF values (PUBMED:24417335). A multi-laboratory study using Hybrid Diffuse Reflectance Spectroscopy (HDRS) methods to assess both UVA-PF and SPF indices across various laboratories reported similar SPF and UVA-PF values within a narrow range, establishing the reliability of the HDRS methodology (PUBMED:34060681). These studies suggest that theoretical and in vitro methods can provide valuable insights and reliable predictions of sunscreen efficacy, which can be correlated well with in vivo results, thus serving as useful tools in the development and evaluation of sunscreen products.
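Several of the abstracts above (PUBMED:24417335, PUBMED:34601762, PUBMED:34980740) derive an in vitro SPF from a measured absorbance spectrum. For orientation, a minimal sketch of the widely used in vitro SPF calculation (of the kind referenced via ISO 24443 in PUBMED:34601762) is given below; the erythema action spectrum E(λ), the solar spectral irradiance I(λ), and the 290-400 nm integration limits are the conventional choices and are assumptions here, since the abstracts do not spell out their exact weighting functions.

```latex
% In vitro SPF from a measured monochromatic absorbance A(lambda):
% E = CIE erythema action spectrum, I = solar spectral irradiance.
\mathrm{SPF}_{\text{in vitro}}
  = \frac{\displaystyle\int_{290}^{400} E(\lambda)\, I(\lambda)\, d\lambda}
         {\displaystyle\int_{290}^{400} E(\lambda)\, I(\lambda)\, 10^{-A(\lambda)}\, d\lambda}
```

The hybrid (HDRS) methods in this block splice an in vivo UVA absorbance, measured by diffuse reflectance, onto an in vitro UVB curve (matched at 340 nm in PUBMED:24417335) to obtain the full A(λ) before evaluating this ratio.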
Instruction: Compliance with biologic therapies for rheumatoid arthritis: do patient out-of-pocket payments matter? Abstracts: abstract_id: PUBMED:18821651 Compliance with biologic therapies for rheumatoid arthritis: do patient out-of-pocket payments matter? Objective: To assess the impact of patient out-of-pocket (OOP) expenditures on adherence and persistence with biologics in patients with rheumatoid arthritis (RA). Methods: An inception cohort of RA patients with pharmacy claims for etanercept or adalimumab during 2002-2004 was selected from an insurance claims database of self-insured employer health plans (n=2,285) in the US. Adherence was defined as medication possession ratio (MPR): the proportion of the 365 follow-up days covered by days supply. Persistence was determined using a survival analysis of therapy discontinuation during follow-up. Patient OOP cost was measured as the patient's coinsurance and copayments per week of therapy, and as the proportion of the total medication charges paid by the patient. Multivariate linear regression models of MPR and proportional hazards models of persistence were used to estimate the impact of cost, adjusting for insurance type and demographic and clinical variables. Results: Mean +/- SD OOP expenditures averaged $7.84 +/- $14.15 per week. Most patients (92%) paid less than $20 OOP for therapy/week. The mean +/- SD MPR was 0.52 +/- 0.31. Adherence significantly decreased with increased weekly OOP (coeff = -0.0035, P<0.0001) and with a higher proportion of therapy costs paid by patients (coeff = -0.8794, P<0.0001), translating into approximately 1 week of therapy lost per $5.50 increase in weekly OOP. Patients whose weekly cost exceeded $50 were more likely to discontinue than patients with lower costs (hazard ratio 1.58, P<0.001). Conclusion: Most patients pay less than $20/week for biologics, but a small number have high OOP expenses, associated with lower medication compliance. The adverse impact of high OOP costs on adherence, persistence, and outcomes must be considered when making decisions about increasing copayments. abstract_id: PUBMED:28465766 Biologic Disease-Modifying Antirheumatic Drugs in a National, Privately Insured Population: Utilization, Expenditures, and Price Trends. Background: Spending on biologic drugs is a significant driver of drug expenditures for payers in private health plans. Biologic disease-modifying antirheumatic drugs (DMARDs) are some of the most effective and costly treatments in a physician's arsenal. Understanding the total annual expenditure, the average cost per prescription, and the impact of cost-sharing is important for drug benefit managers. Objective: To assess drug utilization, expenditures, out-of-pocket (OOP) cost, and price trends of biologic DMARDs in patients with rheumatoid arthritis (RA) in a large managed care organization. Methods: We conducted a retrospective database analysis of pharmacy claims data from January 2004 to December 2013 using the Optum Clinformatics Data Mart database, which covers 13.3 million lives. Pharmacy claims for 40,373 patients with RA were identified during the study period. In all, 9 biologic DMARDs approved for the treatment of RA, including infliximab, etanercept, adalimumab, certolizumab, golimumab, tocilizumab, anakinra, abatacept, and rituximab, and 1 nonbiologic oral, small-molecule targeted synthetic drug, tofacitinib, were included in this study.
Descriptive statistics were used to analyze the total annual number of prescriptions, the total annual expenditures, the average annual cost per drug (a proxy of drug price), and the average OOP cost (copay plus deductible and coinsurance). All measurements were also stratified by study drug and by insurance type. Results: Of the 40,373 patients with RA included in the study, approximately 76% were female (mean age, 55 years at diagnosis). Approximately 77% of the patients were white, and almost 48% lived in the South or Midwest region of the United States. Approximately 62% of patients had a point of service insurance plan. Expenditures on biologic DMARDs increased from $166 million in 2004 to $243 million in 2013, and the number of prescriptions and refills increased from 59,960 in 2004 to 105,295 in 2013. Prescriptions for biologic DMARDs increased more than 20% per patient from 2004 to 2013. The average cost per prescription remained relatively unchanged, at approximately $2300 per prescription, but the OOP expenditures increased from $36 (2.5%) per prescription to $128 (7%) during the study period. The OOP expenditures increased the most in HMO plans and in plans categorized as other (284% and 388%, respectively). Conclusions: Spending on biologic DMARDs has been primarily driven by an increase in prescribing rates, as the average amount reimbursed per prescription remained relatively unchanged over time, despite a regular annual increase to the average wholesale acquisition cost of 2% to 10%. The OOP burden for patients has increased, but this does not appear to have limited the use of biologic DMARDs. The entrance of new biologic and nonbiologic DMARDs into the market in the past few years is eroding the market share for several established drugs, and may lead to different results, warranting a study of new trends. abstract_id: PUBMED:33731090 The out-of-pocket burden of chronic diseases: the cases of Belgian, Czech and German older adults. Background: Out-of-pocket payments have a diverse impact on the burden of those with a higher morbidity or the chronically ill. As the prevalence of chronic diseases increases with age, older adults are a vulnerable group. The paper aims to evaluate the impact of chronic diseases on the out-of-pocket payments burden of the 50+ populations in Belgium, the Czech Republic and Germany. Methods: Data from the sixth wave of the Survey of Health, Ageing and Retirement in Europe is used. A two-part model with a logit model in the first part and a generalised linear model in the second part is applied. Results: The diseases increasing the burden in the observed countries are heart attacks, high blood pressure, cancer, emotional disorders, rheumatoid arthritis and osteoarthritis. Reflecting country differences, Parkinson's disease and its drug burden are relevant in Belgium; the drug burden related to heart attack and the outpatient-care burden of chronic kidney disease, in the Czech Republic; and the outpatient-care burden of cancer and chronic lung disease, in Germany. In addition, we confirm the regressive character of out-of-pocket payments. Conclusions: We conclude that the burden is not equitably distributed among older adults with chronic diseases. Identification of chronic diseases with a high burden can serve as a supplementary protective feature. abstract_id: PUBMED:22936845 Drug adherence to biologic DMARDs with a special emphasis on the benefits of subcutaneous abatacept.
Major advances in drug development have led to the introduction of biologic disease-modifying drugs for the treatment of rheumatoid arthritis, which has resulted in unprecedented improvement in outcomes for many patients. These agents have been found to be effective in reducing clinical signs and symptoms, limiting radiological damage, and improving quality of life and functionality, and have also been found to have an acceptable safety profile. Despite this, drug adherence is unknown, which has huge health-care and health-economic implications. Local and national guidelines exist for the use of biologics; however, variation in their use is widespread. Although this may in part reflect differences in prescribing behavior, patient preference plays a key role. In this review we will explore the factors that contribute to patient preference for, and adherence to, biologic therapy for rheumatoid arthritis, with emphasis on the subcutaneous preparation of abatacept, a T-cell costimulatory molecule blocker. Overall, subcutaneous administration is preferred by patients, and this may well improve drug adherence. abstract_id: PUBMED:30924291 Medication adherence and cost-related medication non-adherence in patients with rheumatoid arthritis: A cross-sectional study. Aim: First, to assess the clinical characteristics and medication adherence to oral rheumatoid arthritis (RA) medications in patients with RA. Second, to examine adherence determinants with a focus on the effect of medication out-of-pocket (OOP) costs on medication adherence to oral RA medications. Lastly, to examine cost-related medication non-adherence (CRN) in patients with RA. Methods: A cross-sectional study of patients with RA was conducted at rheumatology outpatient clinics in Shiraz, Iran. The data collection survey consisted of 5 sections including demographic questions, disease-related questions, the Compliance Questionnaire Rheumatology (CQR), CRN questions and an open-ended question. SPSS version 24 was used for analysis. Results: A total of 308 completed surveys were collected. Adherence to oral RA medications was 40.3%. Just under 20% of participants were biologic disease-modifying antirheumatic drug (bDMARD) users, and these bDMARD users were 0.82 times as likely to be adherent to their oral RA medications as non-bDMARD users (P < 0.05). There was no statistically significant association between OOP costs and adherence to oral RA medications (P > 0.05). However, 28.7% of participants reported not refilling, delaying refills, skipping doses or taking smaller doses due to cost. In the findings of the open-ended question, medication costs and affordability were the most commonly mentioned barriers to medication adherence. Conclusion: Non-adherence to oral RA medications was prevalent among Iranian patients with RA, and OOP costs were a barrier to medication adherence. abstract_id: PUBMED:27747494 Characteristics Associated with Biologic Monotherapy Use in Biologic-Naive Patients with Rheumatoid Arthritis in a US Registry Population. Introduction: The aim of this study was to describe factors associated with initiating a biologic as monotherapy vs in combination with a conventional disease-modifying antirheumatic drug (DMARD) in biologic-naive patients with rheumatoid arthritis (RA) enrolled in the Corrona registry. Methods: First biologic initiations were classified as monotherapy (Bio MT) or combination therapy (Bio CMB). Baseline demographic and clinical characteristics were evaluated.
Odds ratios (OR) based on mixed effects regression models estimated the association between covariates and use of monotherapy. Median odds ratios (MOR) based on estimated physician random effects quantified variation in individual physician use of monotherapy. Results: Between October 2001 and April 2012, 3,923 previously biologic-naive patients initiated biologic therapy, of whom 19.1% initiated as monotherapy. Baseline characteristics of patients initiating Bio MT and Bio CMB were similar for age, sex, duration of RA, and clinical disease activity index. Significantly higher proportions of Bio CMB initiators had prior conventional DMARD (97.23% vs 85.60%; P < 0.01) and methotrexate (MTX) use (91.68% vs 71.87%; P < 0.01) compared with Bio MT initiators. Variation in individual physician use of monotherapy [MOR 1.89; 95% confidence interval (CI), 1.66-2.23] and use of biologics approved by the United States Food and Drug Administration for monotherapy (OR 1.47; 95% CI, 1.20-1.81) significantly influenced the odds of initiating Bio MT. Patient history of hepatic disease, neutropenia, and malignancy was associated with increased odds of being prescribed Bio MT. Conclusion: In addition to regulatory approval for monotherapy and specific pre-existing comorbidities, significant variation in physician use of monotherapy was associated with increased likelihood of initiating Bio MT, independent of patient factors. abstract_id: PUBMED:26545293 Survey of rheumatologists on the use of the Philippine Guidelines on the Screening for Tuberculosis prior to use of Biologic Agents. Background: The use of biologic agents has become an important option in treating patients with rheumatoid arthritis. However, these drugs have been associated with an increased risk of tuberculosis (TB) reactivation. Local guidelines for TB screening prior to the use of biologic agents were developed to address this issue. Aim: This study is a survey describing the compliance of Filipino rheumatologists with these guidelines. Method: Eighty-seven rheumatologists in the Philippines were given the questionnaire, and responses from 61 rheumatologists were included in the analysis. Results: All respondents agreed that patients should be screened prior to giving biologic agents. Local guidelines recommend screening with tuberculin skin test (TST) and chest radiograph. However, cut-off values considered for a positive TST and timing of initiation of biologic agents after starting TB prophylaxis and treatment varied among respondents. In addition, screening of close household contacts was only performed by 41 (69.5%) respondents. There were 11 respondents who reported 16 patients developing TB during or after receiving biologic agents, despite adherence to the guidelines. Conclusion: This survey describes the compliance rate of Filipino rheumatologists in applying current local recommendations for TB screening prior to initiating biologic agents. The incidence of new TB cases despite the current guidelines emphasizes the importance of compliance and the need to revise the guidelines based on updated existing literature. abstract_id: PUBMED:30206553 The use of biologic therapies in uveitis. Purpose: Non-infectious uveitis has long been controlled with corticosteroids, which carry many side effects and provide poor control in some cases. The purpose of this paper was to assess the different biologic agents (in this case infliximab and adalimumab) and to compare their efficacy in the treatment of uveitis.
Results: Adalimumab has been proven very successful in replacing or aiding corticosteroid therapy in different autoimmune-mediated uveitis (juvenile idiopathic arthritis, rheumatoid arthritis, sarcoidosis), whereas infliximab has been used intravenously and recently intravitreally with very promising results in controlling Behcet's-related uveitis. Conclusion: Biologic Response Modifiers represent the future of therapy in immune-mediated uveitis. Abbreviations AU = Anterior Uveitis, BCVA = Best Corrected Visual Acuity, BRM = Biologic Response Modifiers, CME = Cystoid Macular Oedema, CRP = C-Reactive Protein, ESR = Erythrocyte Sedimentation Rate, HSV = Herpes Simplex Virus, ICAM = Intercellular Adhesion Molecules, IMT = Immunomodulatory Therapy, JIA = Juvenile Idiopathic Arthritis, MMP = Matrix Metalloproteinases, MTX = Methotrexate, RA = Rheumatoid Arthritis, TB = Tuberculosis, VCAM = Vascular Adhesion Molecules. abstract_id: PUBMED:37776499 First multi-center retrospective study assessed the compliance with and persistence of biological therapies in Bulgarian population with rheumatoid arthritis. Rheumatoid arthritis is an inflammatory joint disease that causes progressive joint damage, leading to severe disability. Early diagnosis, optimal therapy, and strict adherence to the prescribed medication are key factors that allow for halting disease progression and preserving the patient's quality of life. The objective of this study was to estimate the compliance with and persistence of biologic disease-modifying anti-rheumatic drugs (bDMARDs) among the Bulgarian population with RA. This retrospective observational cohort study included 179 patients, who were tracked over a 36-month period. During baseline and subsequent follow-up visits (at months 6, 12, 24, and 36), we monitored the disease activity, side effects, medication tolerability and effectiveness, compliance, and persistence with the prescribed biologic agent. The compliance with bDMARDs among Bulgarian patients with RA was 85.5% in the first year, 76.0% in the second year, and 63.7% in the third year. The infliximab cohort showed the lowest compliance rate (50%), with the other bDMARD subgroups showing similar results (64-70%) during the period of observation. The median therapy duration across all patient cohorts was 61.9 months (IQR 55.7-67.6). Our study did not establish any significant impact on treatment adherence of gender, age, disease duration, concomitant treatment with methotrexate, type of biologic agent, or previous exposure to biological agents. The compliance with and persistence of the prescribed bDMARD among the Bulgarian population with RA are unsatisfactory. Therapy interruption and nonadherence to recommended therapy are associated with disease progression and patient disability. The consequences include not only financial burdens but also psychosocial and physical impacts. abstract_id: PUBMED:25227187 Economics of non-adherence to biologic therapies in rheumatoid arthritis. Adherence to biologic therapies among patients with rheumatoid arthritis is sub-optimal, with the proportion of adherent patients reported to be as low as 11%. We found few studies evaluating economic outcomes, including health care costs, associated with non-adherence with biologic therapies. Findings suggest that while higher pharmacy costs drive total health care costs among adherent patients, non-adherent patients incur greater health care utilization including inpatient, outpatient, and laboratory services.
Finally, economic factors are important determinants of adherence to biologics in patients with rheumatoid arthritis. Evidence to date has shown that higher out-of-pocket payments have a negative association with adherence to biologics. Furthermore, cost-related non-adherence is a highly prevalent problem in rheumatoid arthritis. Given the high costs of biologics and continued expansion of use in rheumatoid arthritis, there is a need for more research to understand the economic implications of adherence to these therapies. Answer: Yes, patient out-of-pocket (OOP) payments do matter when it comes to compliance with biologic therapies for rheumatoid arthritis (RA). Studies have shown that higher OOP costs are associated with lower medication compliance. For instance, an inception cohort of RA patients with pharmacy claims for etanercept or adalimumab showed that adherence significantly decreased with increased weekly OOP costs. Patients whose weekly cost exceeded $50 were more likely to discontinue therapy than those with lower costs (PUBMED:18821651). Additionally, a retrospective database analysis of pharmacy claims data revealed that out-of-pocket expenditures for biologic DMARDs increased over time, from $36 per prescription to $128, which corresponds to an increase from 2.5% to 7% of the prescription cost. Despite this increase in OOP burden, the use of biologic DMARDs did not appear to be limited (PUBMED:28465766). A cross-sectional study of patients with RA in Iran found that non-adherence to oral RA medications was prevalent and that OOP costs were a barrier to medication adherence, with 28.7% of participants reporting cost-related non-adherence behaviors such as not refilling prescriptions or taking smaller doses due to cost (PUBMED:30924291). Furthermore, economic factors are important determinants of adherence to biologics in patients with RA, as higher OOP payments have a negative association with adherence to biologics. Cost-related non-adherence is a highly prevalent problem in RA, and given the high costs of biologics and their expanding use, there is a need for more research to understand the economic implications of adherence to these therapies (PUBMED:25227187). In summary, patient out-of-pocket payments are a significant factor affecting compliance with biologic therapies for rheumatoid arthritis, with higher costs leading to lower adherence and increased likelihood of therapy discontinuation.
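As a concrete illustration of the adherence arithmetic in PUBMED:18821651, the sketch below computes a medication possession ratio (MPR) and applies the study's reported linear coefficient (-0.0035 MPR units per dollar of weekly OOP cost) to reproduce its figure of roughly one therapy-week lost per $5.50 increase in weekly OOP; the function names and the linear extrapolation itself are illustrative assumptions, not part of the source analysis.

```python
def mpr(days_supplied: float, followup_days: float = 365.0) -> float:
    """Medication possession ratio: fraction of follow-up days covered by days supply."""
    return min(days_supplied / followup_days, 1.0)

# Reported linear effect of weekly out-of-pocket (OOP) cost on MPR
# (PUBMED:18821651): coeff = -0.0035 MPR units per $1 of weekly OOP.
COEFF_PER_DOLLAR = -0.0035

def weeks_of_therapy_lost(oop_increase_per_week: float) -> float:
    """Approximate therapy-weeks lost over a 52-week follow-up for a given
    increase in weekly OOP cost, under a purely linear extrapolation."""
    delta_mpr = COEFF_PER_DOLLAR * oop_increase_per_week
    return -delta_mpr * 52

print(round(mpr(days_supplied=190), 2))       # 0.52, the cohort's mean MPR
print(round(weeks_of_therapy_lost(5.50), 1))  # ~1.0 week, matching the abstract
```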
Instruction: Changes in RBE of 14-MeV (d + T) neutrons for V79 cells irradiated in air and in a phantom: is RBE enhanced near the surface? Abstracts: abstract_id: PUBMED:1230122 IV. Cell-biological experiments on the relative biological effectiveness (RBE) of fast neutrons at different phantom depths (author's transl) Primary cell cultures were obtained from carcinomas of the collum uteri and from human embryos (10 weeks), irradiated with 60Co gamma rays and fast neutrons (6.2 MeV), respectively, at doses ranging from 50 to 300 rad at phantom depths of 3 and 12 cm at 37 degrees C, and the number of mitotically surviving cells was determined. From the survival curves, the values of D37, Do and Dq were determined. The studies have shown that embryonic cells have higher Dq values than the tumor cells under study, while there were no distinct differences in primary radiation sensitivity Do between normal and tumor cells when biological variability is considered. The relative biological effectiveness (RBE) of the 6.2 MeV neutrons proved clearly dependent on the level of the single dose or, equivalently, on the percent survival rate. The minima of the RBE values were obtained when referring to Do (2.2-2.7). From the quotient RBE (tumor cells)/RBE (embryonic cells), which ranged from 0.88 to 1.23 depending on the reference system, the phantom depth, and the cell cultures used, it can be concluded that the stronger biological effectiveness of fast neutrons is not necessarily an additional therapeutic advantage; rather, the 'anoxic gain factor' and the ensuing increased killing of hypoxic tumor cells have to be considered the main advantage of neutron therapy. At a phantom depth of 12 cm, a slight increase of the RBE values can be registered in comparison with irradiation at a depth of 3 cm if the RBE values are formed from the pure neutron dose without reference to the gamma-ray component. When the RBE values are formed from the neutron dose including the corresponding gamma-ray component, no dependence of the RBE on the phantom depths tested here is detectable. abstract_id: PUBMED:10804994 Neutron-energy-dependent cell survival and oncogenic transformation. Both cell lethality and neoplastic transformation were assessed for C3H10T1/2 cells exposed to neutrons with energies from 0.040 to 13.7 MeV. Monoenergetic neutrons with energies from 0.23 to 13.7 MeV and two neutron energy spectra with average energies of 0.040 and 0.070 MeV were produced with a Van de Graaff accelerator at the Radiological Research Accelerator Facility (RARAF) in the Center for Radiological Research of Columbia University. For determination of relative biological effectiveness (RBE), cells were exposed to 250 kVp X rays. With exposures to 250 kVp X rays, both cell survival and radiation-induced oncogenic transformation were curvilinear. Irradiation of cells with neutrons at all energies resulted in linear responses as a function of dose for both biological endpoints. Results indicate a complex relationship between RBEm and neutron energy. For both survival and transformation, RBEm was greatest for cells exposed to 0.35 MeV neutrons. RBEm was significantly less at energies above or below 0.35 MeV. These results are consistent with microdosimetric expectation. These results are also compatible with current assessments of neutron radiation weighting factors for radiation protection purposes.
Based on calculations of dose-averaged LET, 0.35 MeV neutrons have the greatest LET and therefore would be expected to be more biologically effective than neutrons of greater or lesser energies. abstract_id: PUBMED:27381730 Cholangiocarcinoma-derived exosomes inhibit the antitumor activity of cytokine-induced killer cells by down-regulating the secretion of tumor necrosis factor-α and perforin. Objective: The aim of our study is to observe the impact of cholangiocarcinoma-derived exosomes on the antitumor activities of cytokine-induced killer (CIK) cells and to elucidate the underlying mechanism. Methods: Tumor-derived exosomes (TEXs), which are derived from RBE cells (human cholangiocarcinoma line), were collected by ultracentrifugation. CIK cells induced from peripheral blood were stimulated by TEXs. Fluorescence-activated cell sorting (FACS) was performed to determine the phenotypes of TEX-CIK and N-CIK (normal CIK) cells. The concentrations of tumor necrosis factor-α (TNF-α) and perforin in the culture medium supernatant were examined by using an enzyme-linked immunosorbent assay (ELISA) kit. A CCK-8 kit was used to evaluate the cytotoxic activity of the CIK cells against the RBE cell line. Results: The concentrations of TNF-α and perforin of the group TEX-CIK were 138.61 pg/ml and 2.41 ng/ml, respectively, lower than those of the group N-CIK (194.08 pg/ml, P<0.01, and 3.39 ng/ml, P<0.05). The killing rate of the group TEX-CIK was 33.35%, lower than that of the group N-CIK (47.35%; P<0.01). The population of CD3(+), CD8(+), NK (CD56(+)), and CD3(+)CD56(+) cells decreased in the TEX-CIK group ((63.2±6.8)%, (2.5±1.0)%, (0.53±0.49)%, (0.45±0.42)%) compared with the N-CIK group ((90.3±7.3)%, (65.7±3.3)%, (4.2±1.2)%, (15.2±2.7)%), P<0.01. Conclusions: Our results suggest that RBE cell-derived exosomes inhibit the antitumor activity of CIK cells by down-regulating the population of CD3(+), CD8(+), NK (CD56(+)), and CD3(+)CD56(+) cells and the secretion of TNF-α and perforin. TEX may play an important role in cholangiocarcinoma immune escape. abstract_id: PUBMED:34610483 MicroRNA-146b-5p suppresses cholangiocarcinoma cells by targeting TRAF6 and modulating p53 translocation. Background: In view of the poor prognosis and high mortality of cholangiocarcinoma, there is a need for new therapeutic strategies. This study aims to reveal the biological function of miR-146b-5p in cholangiocarcinoma cells and its possible mechanism. Methods: The expression level of and prognostic information on miR-146b-5p in cholangiocarcinoma were obtained from the TCGA database. The effects of miR-146b-5p on the proliferation and viability of the cholangiocarcinoma cell line HUCCT-1 were examined by EdU and MTT assays, and the apoptosis of HUCCT-1 cells transfected with miR-146b-5p mimic, mimic control, inhibitor, or inhibitor control was detected by flow cytometry analysis. Western blotting was performed to evaluate the effect of miR-146b-5p on its target substrate and the expression of p53 in whole-cell and mitochondrial protein fractions. Results: Our findings revealed that miR-146b-5p expression in patients with CHOL was lower than in the normal group (p<0.001). MiR-146b-5p expression was down-regulated in human cholangiocarcinoma HUCCT-1 and RBE cells compared to normal control HIBEC and other cancer cells. The miR-146b-5p mimic could inhibit HUCCT-1 cell proliferation (p<0.05) and promote HUCCT-1 cell apoptosis significantly (p<0.05).
Western blotting showed that the miR-146b-5p mimic directly targeted the TRAF6 3'UTR region and up-regulated the expression of p53 in mitochondria, while the miR-146b-5p inhibitor down-regulated the level of p53 in mitochondria. Conclusion: MiR-146b-5p acts as a cholangiocarcinoma suppressor, inhibiting cell proliferation and promoting cell apoptosis by targeting TRAF6, possibly via modulating p53 translocation to mitochondria. abstract_id: PUBMED:22591401 Impaired degradation followed by enhanced recycling of epidermal growth factor receptor caused by hypo-phosphorylation of tyrosine 1045 in RBE cells. Background: Since cholangiocarcinoma has a poor prognosis, several epidermal growth factor receptor (EGFR)-targeted therapies with antibodies or small-molecule inhibitors have been proposed. However, their effect remains limited. The present study sought to understand the molecular genetic characteristics of cholangiocarcinoma related to EGFR, with emphasis on its degradation and recycling. Methods: We evaluated EGFR expression and colocalization by immunoblotting and immunofluorescence, cell surface EGFR expression by fluorescence-activated cell sorting (FACS), and EGFR ubiquitination and protein binding by immunoprecipitation in the human cholangiocarcinoma RBE and immortalized cholangiocyte MMNK-1 cell lines. Monensin treatment and Rab11a depletion by siRNA were adopted for inhibition of EGFR recycling. Results: Upon stimulation with EGF, ligand-induced EGFR degradation was impaired and the expression of phospho-tyrosine 1068 and phospho-p44/42 MAPK was sustained in RBE cells as compared with MMNK-1 cells. In RBE cells, the process of EGFR sorting for lysosomal degradation was blocked at the early endosome stage, and non-degraded EGFR was recycled to the cell surface. A disrupted association between EGFR and the E3 ubiquitin ligase c-Cbl, as well as hypo-phosphorylation of EGFR at tyrosine 1045 (Tyr1045), were also observed in RBE cells. Conclusion: In RBE cells, up-regulation of EGFR Tyr1045 phosphorylation is a potentially useful molecular alteration in EGFR-targeted therapy. The combination of molecular-targeted therapy determined by the characteristics of individual EGFR phosphorylation events and EGFR recycling inhibition shows promise in future treatments of cholangiocarcinoma. abstract_id: PUBMED:22699055 Effect of hepatitis C virus core gene transfection on NFAT1 expression in human intrahepatic cholangiocarcinoma cells. Objective: To explore whether hepatitis C virus core protein (HCV C) regulates the expression of NFAT1 to participate in the progression and malignant biological behavior of intrahepatic cholangiocarcinoma cells. Methods: The recombinant plasmid pEGFP-N(3)-HCV C and the empty vector pEGFP-N(3) were cotransfected with enhanced green fluorescent protein (EGFP) into RBE cells using liposomes. Real-time PCR and Western blotting were used to examine the expression of NFAT1 mRNA and protein in the transfected RBE cells. MTT assay was used to evaluate the changes in cell proliferation, and the cell cycle changes were analyzed by flow cytometry. Results: HCV C transfection significantly enhanced the expression of NFAT1 mRNA and protein in RBE cells (P<0.05) and promoted the progression of the cell cycle into G(2)/M phase to accelerate cell proliferation.
Conclusion: Transfection with the HCV C gene up-regulates NFAT1 expression and promotes the cell cycle progression and proliferation of intrahepatic cholangiocarcinoma cells, suggesting the involvement of HCV C in the progression of intrahepatic cholangiocarcinoma. abstract_id: PUBMED:26825606 Hepatocyte nuclear factor 6 inhibits the growth and metastasis of cholangiocarcinoma cells by regulating miR-122. Purpose: Hepatocyte nuclear factor 6 (HNF6) is a liver-enriched transcription factor that is highly expressed in mature bile duct epithelial cells. This study sought to investigate the role of HNF6, particularly the molecular mechanisms by which HNF6 is involved in the growth and metastasis of cholangiocarcinoma (CCA) cells. Methods: The expression of HNF6, miR-122 and key molecules was examined by Western blot analysis and real-time RT-PCR. Stable transfectants, HCCC-HNF(low) and RBE-HNF(high), were generated from human CCA HCCC-9810 and RBE cells, respectively. The regulatory effect of HNF6 on miR-122 was evaluated by luciferase reporter assay. Cell proliferation, cycle distribution, migration and invasion were analyzed. The xenograft model was used to assess the effects of HNF6 overexpression on tumorigenesis, growth, metastasis and therapeutic potentials. Results: Human CCA tissues and cells expressed lower levels of HNF6, which positively correlated with miR-122. HNF6 regulated the expression of miR-122 by stimulating its promoter. HNF6 overexpression inhibited cell proliferation by inducing cell cycle arrest at G1 phase through regulating miR-122, cyclin G1 and insulin-like growth factor-1 receptor. HNF6 inhibited the migration and invasion of CCA cells by regulating matrix metalloproteinase-2 and metalloproteinase-9, reversion-inducing-cysteine-rich protein with kazal motifs, E-cadherin and N-cadherin. Co-transfection of anti-miR-122 abrogated the effects of HNF6. HNF6 overexpression inhibited the ability of cells to form tumors and to metastasize to the lungs of mice, and the growth of established tumors. Conclusions: The results indicate that HNF6 may serve as a tumor suppressor by regulating miR-122, and its overexpression may represent a mechanism-based therapy for CCA. abstract_id: PUBMED:15962508 Rat brain endothelial cell lines for the study of blood-brain barrier permeability and transport functions. (1) In vitro models of the BBB have been developed from cocultures of bovine, porcine, rodent or human brain capillary endothelial cells with rodent or human astrocytes. Since most in vivo BBB studies have been performed with small laboratory animals, especially rats, it is important to establish a rat brain endothelial (RBE) cell culture system that will allow correlations between in vitro and in vivo results. The present review will constitute a brief description of the best-characterized RBE cell lines (RBE4, GP8/3.9, GPNT, RBEC1, TR-BBBs and rBCEC4 cell lines) and will summarize their recent and important contribution to our current knowledge of the BBB transport functions and permeability to blood-borne solutes, drugs, and cells. (2) In most cases, primary cultures of RBE cells were transduced with an immortalizing gene (SV40 or polyoma virus large T-antigen or adenovirus E1A), either by transfection of plasmid DNA or by infection using retroviral vectors. In one case, however, the conditionally immortalized TR-BBB cell line was derived from primary cultures of brain endothelial cells of SV40-T-expressing transgenic rats.
(3) All cell lines appear to have an endothelial morphology. The absence of foci formation would mean that the cells are not transformed. The endothelial origin is shown by the expression of Factor VIII-related antigen. Immortalized RBE cells express all the enzymes and transporters that are considered specific for the blood-brain barrier endothelium, with similar characteristics to those expected from in vivo analyses, but at a significantly lower level. Some RBE cell lines, such as RBE4, rBCEC4, and TR-BBB cells, are responsive to astroglial factors. None of the immortalized RBE cell lines appear to generate the necessary restrictive paracellular barrier properties that would allow their use in transendothelial permeability screening. (4) RBE cell lines have been used to demonstrate that transporters such as organic cation transporter/carnitine transporter, serotonin transporter, and the ATA2 system A isoform are expressed in rat brain endothelium. When the transporter is shown to be expressed with the same properties in the immortalized RBE cells as in vivo, regulation studies may be initiated even if the transporter is down-regulated. Pharmacological applications have been proposed with well-characterized transporters such as monocarboxylic acid transporter-1, large neutral amino acid transporter-1, nucleoside carrier systems, and P-glycoprotein. RBE cell monolayers have also been used to investigate the mechanism of the transendothelial transport of large molecules, such as immunoliposomes or nanoparticles, potentially useful as drug delivery vectors to the brain. (5) RBE4 and GP8 cell lines have been extensively used to demonstrate that intercellular adhesion molecule-1 (ICAM-1) engagement in brain endothelial cells triggers multiple signal transduction pathways. Using functional assays, it was established that ICAM-1 signaling is intimately and actively involved in facilitating lymphocyte infiltration. (6) Several RBE cell lines have been described, which constitute tentative in vitro models of the rat BBB. The major limitation of these models generally appears to be due to their relatively high paracellular permeability to small molecules, thus limiting their use for permeability studies. The strategies developed for the production of these RBE cell lines will enable the characterization of still more efficient permeability models, as well as the immortalization of human brain endothelial cells. abstract_id: PUBMED:11223873 Telomerase activity is repressed during differentiation along the hepatocytic and biliary epithelial lineages: verification on immortal cell lines from the same origin. Recent investigations indicate that telomerase activity regulates the life span of cells by compensating for telomere shortening during DNA replication. In addition, as differentiation progresses, telomerase activity is reduced in several different cell lineages. These findings lend support to the theory that more immature cells have greater remaining proliferative capacity and longer life span. However, it has not been directly demonstrated that differentiation along a hepatocytic or bile ductal lineage is accompanied by reduction of telomerase activity. In this study, we present direct evidence that telomerase activity is reduced during hepatocytic and biliary epithelial differentiation by using our unique cell lines including a stem-like cell line, ETK-1. When hepatocytic differentiation was induced in ETK-1 by 5-azacytidine, telomerase activity decreased significantly.
Similarly, when we compared the telomerase activity of the SSP-25 and RBE cell lines, which share the same origin but represent different maturation stages of cholangiocarcinoma, more mature cells were found to possess significantly lower activity. These results indicate that the generally accepted relationship between telomerase activity and differentiation stage also applies in the hepatocytic and biliary epithelial lineages. abstract_id: PUBMED:25004948 Assessment of biological effectiveness of boron neutron capture therapy in primary and metastatic melanoma cell lines. Purpose: In order to optimize the effectiveness of Boron Neutron Capture Therapy (BNCT), Relative Biological Effectiveness (RBE) and Compound Biological Effectiveness (CBE) were determined in two human melanoma cell lines, M8 and Mel-J cells, using the amino acid p-boronophenylalanine (BPA) as boron carrier. Materials And Methods: The effects of BNCT on the primary amelanotic cell line M8 and on the metastatic pigmented melanoma cell line Mel-J were studied using a colony formation assay. The RBE values were determined using both a gamma-ray source and the neutron beam from the Nuclear Reactor of the National Atomic Energy Commission (RA-3). For the determination of the RBE, cells were irradiated with increasing doses from both sources, between 1 and 8 Gy; and for the determination of CBE factors, the cells were pre-incubated with BPA before irradiation. Afterwards, the cell surviving fraction (SF) was determined for each treatment. Results: Marked differences were observed between both cell lines. Mel-J cells were more radioresistant than the M8 cell line. The clonogenic assays showed that for an SF of 1%, the RBE values were 1.3 for M8 cells and 1.5 for Mel-J cells. Similarly, the CBE values for a 1% SF were 2.1 for M8 and 3 for Mel-J cell lines. For the endpoint of 0.1% SF, the RBE values obtained were 1.2 for M8 and 1.4 for Mel-J cells. Finally, CBE values calculated for a 0.1% SF were 2 and 2.6 for M8 and Mel-J cell lines, respectively. In order to estimate the uptake of the non-radioactive isotope Boron 10 ((10)B), a neutron-induced autoradiographic technique was performed, showing discrepancies in (10)B uptake between both cell lines. Conclusions: These in vitro results are the first effectiveness factors determined for human melanoma at the RA-3 nuclear reactor and show that BNCT dosimetry planning for patients could be successfully performed using these new factors. Answer: The provided abstracts do not contain specific information about the relative biological effectiveness (RBE) of 14 MeV d-T (deuterium-tritium) neutrons for V79 cells irradiated in air and in a phantom, nor do they discuss whether the RBE is enhanced near the surface. The abstracts focus on various studies related to RBE in different contexts, such as the RBE of fast neutrons at different phantom depths for carcinoma and embryonic cells (PUBMED:1230122), the energy-dependent RBE for cell survival and oncogenic transformation in C3H10T1/2 cells (PUBMED:10804994), and the biological effectiveness of boron neutron capture therapy in melanoma cell lines (PUBMED:25004948). However, none of these abstracts specifically address the RBE of 14 MeV d-T neutrons for V79 cells or the effect of irradiation conditions (in air vs. in a phantom) on RBE near the surface.
To answer the question about the changes in RBE of 14 MeV d-T neutrons for V79 cells irradiated in air and in a phantom, and whether the RBE is enhanced near the surface, one would need to refer to studies that specifically investigate these parameters. Such studies would likely involve experimental setups where V79 cells are exposed to 14 MeV d-T neutron irradiation under controlled conditions both in air and within a phantom material, with measurements taken at various depths to determine the RBE at different distances from the surface.
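For readers checking the RBE arithmetic running through these abstracts (PUBMED:1230122, PUBMED:10804994, PUBMED:25004948), the standard definitions are sketched below. The multi-target survival model is the conventional source of the D0, Dq, and D37 parameters cited in PUBMED:1230122, though the abstracts do not state which model the authors actually fitted, so this is offered only as background.

```latex
% RBE at a chosen iso-effect level (e.g., equal surviving fraction),
% with 60Co gamma rays or 250 kVp X rays as the reference radiation:
\mathrm{RBE} \;=\; \left.\frac{D_{\text{reference}}}{D_{\text{test}}}\right|_{\text{iso-effect}}

% Multi-target survival model behind D_0 (final slope), D_q (quasi-threshold),
% and D_37 (dose giving 37% survival):
S(D) \;=\; 1 - \bigl(1 - e^{-D/D_0}\bigr)^{n},
\qquad D_q = D_0 \ln n,
\qquad S(D_{37}) = 0.37
```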
Instruction: Does transrectal ultrasound probe configuration really matter? Abstracts: abstract_id: PUBMED:33572287 Transrectal Ultrasound and Photoacoustic Imaging Probe for Diagnosis of Prostate Cancer. A combined transrectal ultrasound and photoacoustic (TRUS-PA) imaging probe was developed for the clear visualization of morphological changes and microvasculature distribution in the prostate, as this is required for accurate diagnosis and biopsy. The probe consisted of a miniaturized 128-element 7 MHz convex array transducer with 134.5° field-of-view (FOV), a bifurcated optical fiber bundle, and two optical lenses. The design goal was to make the size of the TRUS-PA probe similar to that of general TRUS probes (i.e., about 20 mm), for the convenience of the patients. A new flexible printed circuit board (FPCB), acoustic structure, and optical lens were developed to meet the probe size requirement, as well as to realize a high-performance TRUS-PA probe. In visual assessment, the PA signals obtained with the optical lens were 2.98 times higher than those without the lens. Moreover, the in vivo experiment with the xenograft BALB/c (Albino, Immunodeficient Inbred Strain) mouse model showed that the TRUS-PA probe was able to acquire the entire PA image of the mouse thigh behind the porcine intestine at about 25 mm depth. From the ex vivo and in vivo experimental results, it can be concluded that the developed TRUS-PA probe is capable of improving PA image quality, even though the TRUS-PA probe has a cross-section size and an FOV comparable to those of general TRUS probes. abstract_id: PUBMED:33225864 Design and experimental study of a novel 7-DOF manipulator for transrectal ultrasound probe. Traditional hand-held ultrasound probes have some limitations in prostate biopsy. Improving the localization and accuracy of the ultrasound probe will increase the detection rate of prostate cancer while biopsy techniques remain unchanged. This paper presents the design of a manipulator for a transrectal ultrasound probe, which assists doctors in performing prostate biopsy and improves the efficiency and accuracy of the biopsy procedure. The ultrasound probe manipulator includes a position adjustment module that can lock four joints at the same time; this reduces operating time and improves the stability of the mechanism. The attitude adjustment module is designed around a double-parallelogram remote-center-of-motion (RCM) mechanism, so the ultrasound probe remains centered and radial motion is prevented. The self-weight balancing design lets doctors operate the ultrasound probe without bearing its weight. Analysis of the manipulator in MATLAB shows that the workspace of the mechanism meets the biopsy requirements, and simulations of the centering effect at different feeding distances show that the probe's centering is stable. Finally, the centering and joint interlocking tests of the physical prototype were completed. In this paper, a 7-DOF manipulator for a transrectal ultrasound probe is designed; the work covers kinematic analysis, workspace analysis, simulation of centering effects, development of a physical prototype, and related experiments. The results show that the surgical demand workspace is located inside the reachable workspace of the mechanism and the joint locking of the manipulator is reliable. abstract_id: PUBMED:37475848 Value of contrast-enhanced ultrasound in deep angiomyxoma using a biplane transrectal probe: A case report.
Background: Deep angiomyxoma (DAM) is a very rare tumor type. Magnetic resonance imaging (MRI) is considered the best imaging modality for diagnosing DAM. Computed tomography (CT) is used mainly to assess the invasion range of DAM. The value of ultrasonography in the diagnosis of DAM is still controversial. Through a literature review, we summarized the current state of ultrasonic examination for DAM and reported for the first time the contrast-enhanced ultrasound (CEUS) features of DAM seen using a biplane transrectal probe. Case Summary: A 37-year-old woman presented with a sacrococcygeal mass that had gradually increased in size over the previous 6 months. MRI and CT examinations failed to allow a definite diagnosis to be made. Transperineal core needle biopsy (CNB) guided by transrectal ultrasound and CEUS was suggested after a multidisciplinary discussion. Grayscale ultrasound of the lesion showed a layered appearance with alternating hyperechoic and hypoechoic patterns. Transrectal CEUS showed a laminated distribution of the contrast agent that was consistent with the layered appearance of the tumor on grayscale ultrasound. We performed transperineal CNB of the enhanced area inside the tumor under transrectal CEUS guidance and finally made a definitive diagnosis of DAM through histopathology. The patient underwent laparoscopic-assisted transabdominal surgery combined with transperineal surgery for large pelvic tumor resection and pelvic floor peritoneal reconstruction. No recurrence or metastasis was found at the nine-month follow-up. Conclusion: Transrectal CEUS can show the layered perfusion characteristics of the contrast agent, guiding subsequent transperineal CNB of the enhanced area within the DAM. abstract_id: PUBMED:24795525 Endocavity Ultrasound Probe Manipulators. We developed two structurally similar manipulators, with 3 and 4 degrees of freedom (DoF), for medical endocavity ultrasound probes. These robots allow scanning with ultrasound for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial for evaluation of safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure. abstract_id: PUBMED:19286200 Does transrectal ultrasound probe configuration really matter? End fire versus side fire probe prostate cancer detection rates. Purpose: We compared prostate cancer detection rates for the 2 most commonly used transrectal ultrasound prostate biopsy probes, end fire and side fire, to determine whether the probe configuration affects detection rates. Materials And Methods: We evaluated 2,674 patients who underwent initial prostate biopsy between 2000 and 2008 with respect to prostate specific antigen, biopsy technique and pathological findings.
Patients were divided into 1,124 in whom biopsies were performed with an end fire probe and 1,550 in whom biopsies were performed with a side fire probe. Results: There was a significant difference in the overall cancer detection rate in the end vs side fire arms (45.8% vs 38.5%, p <0.001). In the subsets of patients with prostate specific antigen greater than 4 to 10 ng/ml and greater than 10 ng/ml, a significant difference persisted (46.4% vs 38.9% and 61.7% vs 49.1%, p <0.004 and <0.015, respectively). There was also a significant difference in detection rates between probes in those who underwent 8 to 19 biopsy cores (p <0.009). Biopsies of greater than 20 cores failed to attain statistical significance (p >0.105). We also found that prostate volume, patient age, prostate specific antigen and hypoechoic findings were independent variables for predicting cancer detection on multivariate analysis (p <0.001). Conclusions: The type of probe significantly affects the overall prostate cancer detection rate, particularly in patients with prostate specific antigen greater than 4 ng/ml and/or nonsaturation (8 to 19 cores) prostate biopsy. This may be because the end fire probe allows better mechanical sampling of the lateral and apical regions of the peripheral zone, where cancer is most likely to reside. We set the stage for a randomized, controlled trial to confirm our observations. abstract_id: PUBMED:33293977 Application of transrectal ultrasound in guiding interstitial brachytherapy for advanced cervical cancer. Purpose: To investigate the role of transrectal ultrasound guidance in interstitial brachytherapy for cervical cancer. Material And Methods: Forty-eight patients who underwent interstitial brachytherapy treatment for cervical cancer between January 2017 and January 2018 were enrolled in the study. The distances between each inserted needle and the lesion were measured at seven sites by ultrasound (D1-D7) and compared to the corresponding distances (M1-M7) when visualised with nuclear magnetic resonance imaging (MRI). Measurements were paired on the basis of the observation sites, e.g. D1 and M1, D2 and M2. The statistical differences, intraclass correlation coefficients (ICCs), and linear relationships for the paired measurements were calculated. Results: No significant differences were found between the paired M and D measurements, with all ICCs showing high levels of concordance (0.81-0.93). Conclusions: Transrectal ultrasound showed strong agreement with MRI results in determining the position of the inserted needles. Transrectal ultrasound is a useful tool for guided interstitial brachytherapy and is appropriate for widespread use in the treatment of locally advanced cervical cancer. abstract_id: PUBMED:26392375 Implanted brachytherapy seed movement reflecting transrectal ultrasound probe-induced prostate deformation. Purpose: Compression of the prostate during transrectal ultrasound-guided permanent prostate brachytherapy is not accounted for during treatment planning. Dosimetry effects are expected to be small but have not been reported. The study aims to characterize the seed movement and prostate deformation due to probe pressure and to estimate the effects on dosimetry. Methods And Materials: C-arm fluoroscopy imaging was performed to reconstruct the implanted seed distributions (compressed and relaxed prostate) for 10 patients immediately after implantation.
The compressed prostate was delineated on ultrasound and registered to the fluoroscopy-derived seed distribution via manual seed localization. Thin-plate spline mapping, generated with implanted seeds as control points, was used to characterize the deformation field and to infer the prostate contour in the absence of probe compression. Differences in TG-43 dosimetry for the compressed prostate and that on probe removal were calculated. Results: Systematic seed movement patterns were observed on probe removal. Elastic decompression was characterized by expansion in the anterior-posterior direction and contraction in the superior-inferior and lateral directions up to 4 mm. Bilateral shearing in the anterior direction was up to 6 mm, resulting in contraction of the 145 Gy prescription isodose line by 2 mm with potential consequences for the posterior-lateral margin. The average whole prostate D90 increased by 2% of prescription dose (6% max; p < 0.01). Conclusions: The current investigation presents a novel study on ultrasound probe-induced deformation. Seed movements were characterized, and the associated dosimetry effects were nonnegligible, contrary to common expectation. abstract_id: PUBMED:22578920 Prospective randomized multicenter study comparing prostate cancer detection rates of end-fire and side-fire transrectal ultrasound probe configuration. Objective: To prospectively test the hypothesis that end-fire transrectal ultrasound prostate biopsy probes have greater cancer detection rates than side-fire probes. Retrospective studies have suggested that such probes might have greater cancer detection rates. Methods: The present prospective randomized multicenter trial aimed to compare the prostate cancer detection rates of the end-fire versus side-fire probe configuration during transrectal ultrasound-guided 12-core prostate biopsy. Patients were randomized according to age, prostate-specific antigen level and prostate volume. An interim analysis was planned after the inclusion of 300 patients. Results: At the interim analysis after the inclusion of 297 patients, no differences were found in the mean prostate-specific antigen level (P = .412), mean age (P = .519), mean prostate volume (P = .730), and positive digital rectal examination findings (P = .295). The prostate cancer detection rate did not differ between the end-fire and side-fire probe (34.3% vs 34.4%, P = .972). On multivariate analysis, suspicious digital rectal examination findings (relative risk 8.185, P < .001), prostate-specific antigen level (relative risk 1.051, P = .041), and prostate volume (relative risk 0.973, P < .001), but not probe configuration (relative risk 0.942, P = .831), were independent predictive factors for the detection of prostate cancer. The interim analysis committee suggested that, because no difference of 5 absolute percentage points was reached after 300 patients, no additional recruitment was necessary. Therefore, the study was terminated early. Conclusion: The results of the present study have shown that the transrectal ultrasound probe configuration does not affect the prostate cancer detection rate during transrectal ultrasound-guided prostate biopsy. abstract_id: PUBMED:23088974 Geometric evaluation of systematic transrectal ultrasound guided prostate biopsy. Purpose: Transrectal ultrasound guided prostate biopsy results rely on physician ability to target the gland according to the biopsy schema.
However, to our knowledge it is unknown how accurately the freehand, transrectal ultrasound guided biopsy cores are placed in the prostate and how the geometric distribution of biopsy cores may affect the prostate cancer detection rate. Materials And Methods: To determine the geometric distribution of cores, we developed a biopsy simulation system with pelvic mock-ups and an optical tracking system. Mock-ups were biopsied in a freehand manner by 5 urologists and by our transrectal ultrasound robot, which can support and move the transrectal ultrasound probe. We compared 1) targeting errors, 2) the accuracy and precision of repeat biopsies, and 3) the estimated significant prostate cancer (0.5 cm³ or greater) detection rate using a probability based model. Results: Urologists biopsied cores in clustered patterns and undersampled a significant portion of the prostate. The robot closely followed the predefined biopsy schema. The mean targeting error of the urologists and the robot was 9.0 and 1.0 mm, respectively. Robotic assistance significantly decreased repeat biopsy errors with improved accuracy and precision. The mean significant prostate cancer detection rate of the urologists and the robot was 36% and 43%, respectively (p <0.0001). Conclusions: Systematic biopsy with freehand transrectal ultrasound guidance does not closely follow the sextant schema and may result in suboptimal sampling and cancer detection. Repeat freehand biopsy of the same target is challenging. Robotic assistance with optimized biopsy schemas can potentially improve targeting, precision and accuracy. A clinical trial is needed to confirm the additional benefits of robotic assistance. abstract_id: PUBMED:37238152 Design of a Pediatric Rectal Ultrasound Probe Intended for Ultra-High Frequency Ultrasound Diagnostics. It has been shown that ultra-high frequency (UHF) ultrasound applied to the external bowel wall can delineate the histo-anatomic layers in detail and distinguish normal bowel from aganglionosis. This would potentially reduce or lessen the need for biopsies that are currently mandatory for the diagnosis of Hirschsprung's disease. However, to our knowledge, no suitable rectal probes for such a use are on the market. The aim was to define the specifications of a UHF transrectal ultrasound probe (50 MHz center frequency) suitable for use in infants. Probe requirements according to patient anatomy, clinicians' requests, and biomedical engineering UHF prerequisites were collected within an expert group. Suitable probes on the market and in clinical use were reviewed. The requirements were transferred into the sketching of potential UHF ultrasound transrectal probes followed by their 3D prototype printing. Two prototypes were created and tested by five pediatric surgeons. The larger, straight probe with an 8 mm head and shaft was preferred, as it facilitated stability and ease of anal insertion and could accommodate the UHF technique with 128 piezoelectric elements in a linear array. We hereby present the procedure and considerations behind the development of a proposed new UHF transrectal pediatric probe. Such a device can open new possibilities for the diagnostics of pediatric anorectal conditions. Answer: Yes, transrectal ultrasound (TRUS) probe configuration does matter in the context of prostate cancer detection rates.
A study comparing the two most commonly used TRUS prostate biopsy probes, end fire and side fire, found a significant difference in overall cancer detection rates between the two, with the end fire probe showing higher detection rates (45.8% vs 38.5%, p <0.001) (PUBMED:19286200). This may be due to the end fire probe's better mechanical sampling of the lateral and apical regions of the peripheral zone, where cancer is most likely to reside. However, a prospective randomized multicenter study did not find a difference in prostate cancer detection rates between end-fire and side-fire probe configurations during TRUS-guided 12-core prostate biopsy (PUBMED:22578920). The study concluded that probe configuration does not affect the prostate cancer detection rate, suggesting that other factors such as suspicious digital rectal examination findings, prostate-specific antigen level, and prostate volume are more predictive of cancer detection. Additionally, the geometric distribution of biopsy cores, which is influenced by the probe configuration and the physician's ability to target the gland according to the biopsy schema, can affect the prostate cancer detection rate. A study using a biopsy simulation system found that systematic biopsy with freehand TRUS guidance does not closely follow the sextant schema and may result in suboptimal sampling and cancer detection (PUBMED:23088974). Robotic assistance with optimized biopsy schemas can potentially improve targeting, precision, and accuracy, indicating that the method of guidance and the configuration of the probe can be significant factors in the effectiveness of prostate cancer detection.
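As a quick sanity check on the arithmetic behind that comparison, the short sketch below re-derives the end-fire versus side-fire contrast from the group sizes and detection rates reported in PUBMED:19286200. The cancer counts are reconstructed from the published percentages rather than taken from the authors' raw data, so this is an illustrative approximation, not a reanalysis.

```python
# Re-derive the end-fire vs side-fire comparison reported in PUBMED:19286200.
# Counts are reconstructed from the published rates (45.8% of 1,124 end-fire
# biopsies and 38.5% of 1,550 side-fire biopsies), so they are approximate.
from scipy.stats import chi2_contingency

end_fire_n, side_fire_n = 1124, 1550
end_fire_pos = round(0.458 * end_fire_n)    # ~515 cancers detected
side_fire_pos = round(0.385 * side_fire_n)  # ~597 cancers detected

table = [
    [end_fire_pos, end_fire_n - end_fire_pos],
    [side_fire_pos, side_fire_n - side_fire_pos],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"end-fire rate:  {end_fire_pos / end_fire_n:.1%}")
print(f"side-fire rate: {side_fire_pos / side_fire_n:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.5f}")
```

Run as-is, this reproduces a p-value well below 0.001, consistent with the significance level the retrospective paper reports; the randomized trial (PUBMED:22578920) remains the stronger evidence that the difference disappears under randomization.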
Instruction: Is previous cesarean section a risk for incidental cystotomy at the time of hysterectomy? Abstracts: abstract_id: PUBMED:16325612 Is previous cesarean section a risk for incidental cystotomy at the time of hysterectomy? A case-controlled study. Objective: The purpose of this study was to determine if previous cesarean section is an independent risk factor for incidental cystotomy at the time of hysterectomy. Study Design: This is a case-controlled study that evaluated all cases of incidental cystotomy at the time of hysterectomy between January 1998 and December 2001. Five thousand and ninety-two hysterectomies were performed in the time period mentioned above, and 51 cases of incidental cystotomy were identified. Each case of incidental cystotomy was then matched to 3 controls with similar patient characteristics, medical histories, and surgical histories, as well as the absence of incidental cystotomy at the time of hysterectomy. Results: Overall, 5092 hysterectomies were performed during the study period (total abdominal hysterectomy [TAH] 3140 [61.7%], total vaginal hysterectomy [TVH] 1519 [29.8%], laparoscopically-assisted vaginal hysterectomy [LAVH] 433 [8.5%]). Fifty-one cases of incidental cystotomy were identified (TAH: 24 [47.1%], TVH: 19 [37.3%], LAVH: 8 [15.7%]). The overall incidence of cystotomy was 1.0%. When considering TAH, there were 24/3141 (0.76%) cases of incidental cystotomy, with 8 (33%) of these patients with a history of previous cesarean section. During TVH, we encountered 19/1519 (1.3%) cases of incidental cystotomy, with 4 (21%) of these women having undergone a previous cesarean. Finally, during LAVH, there were 8/433 (1.8%) cases of incidental cystotomy. Five (62.5%) of these patients had a previous history of cesarean section. In comparison, 19/72 (26.4%) TAH controls had a previous history of cesarean. Four out of 57 (7.0%) TVH controls had a history of cesarean section. Finally, 2/24 (8.3%) LAVH controls had a history of previous cesarean. Conclusion: Previous cesarean section is indeed a significant risk factor for damage to the lower urinary tract at the time of hysterectomy (odds ratio [OR] 2.04; 95%CI 1.2-3.5). When analyzed separately, the OR of incidental cystotomy at the time of TAH, TVH, and LAVH in a woman with a history of previous cesarean was 1.26, 3.00, and 7.50, respectively. Only the value for LAVH was statistically significant (P = .005; 95%CI 1.8-31.4). abstract_id: PUBMED:22453160 Incidental cystotomy at the time of a hysterectomy. Objectives: To evaluate risk factors for incidental cystotomy at the time of a hysterectomy. Methods: All hysterectomies performed between January 1, 2000 and May 31, 2004 were reviewed. Demographic and operative data were abstracted from medical records. Cases were patients with cystotomies while controls were those without bladder injury. Categorical variables were analyzed with the χ² or Fisher exact test (where applicable) while the Student t test was used for continuous data. Logistic regression was used for multivariate analysis. Results: During the study period, 1424 hysterectomies were performed (50% abdominally, 45% vaginally, and 5% laparoscopically assisted vaginal). Thirty-four (2.4%) cystotomies occurred. Risk factors for incidental cystotomy included prior cesarean delivery (adjusted OR: 2.86, 95% CI: 1.39-5.92), pelvic adhesions (adjusted OR: 2.43, 95% CI: 1.11-5.31), and vaginal hysterectomy (adjusted OR: 2.63, 95% CI: 1.18-5.87).
Conclusions: Prior cesarean delivery, pelvic adhesive disease, and vaginal hysterectomy are independent risk factors for incidental cystotomy at the time of a hysterectomy. abstract_id: PUBMED:37727111 The impact of a previous cesarean section on the risk of perioperative and postoperative complications during vaginal hysterectomy. Objective: To investigate whether a previous cesarean section increases the risk of perioperative and postoperative complications during vaginal hysterectomy. Methods: A retrospective cohort study of women who had undergone a vaginal hysterectomy for benign indications between 2014 and 2019 was conducted, comparing patients with or without a previous cesarean section. Perioperative and postoperative complications during vaginal hysterectomy were assessed according to the Clavien-Dindo classification system within 30 days of surgery. Duration of surgery, estimated blood loss, and postoperative hospitalization days were also recorded. A two-sided P value of less than 0.05 was considered significant. Results: A total of 185 women were included: 25 (13.5%) patients had undergone a previous cesarean section (study group) and 160 (86.5%) had no history of cesarean section (comparison group). We found no significant differences in demographic and clinical characteristics as well as postoperative complications and interventions, duration of surgery, estimated blood loss, and postoperative hospitalization days (P > 0.05). However, patients who underwent two or more cesarean sections had a significantly (P = 0.01) higher rate and grade of complications during vaginal hysterectomy, compared with women with only one previous cesarean section. All women who underwent two or more cesarean sections had mild complications during vaginal hysterectomy (40% grade I and 60% grade II, P = 0.01). Conclusion: Vaginal hysterectomy is a safe procedure with few severe complications, regardless of a previous cesarean section. More than one previous cesarean section may increase the risk of minor complications during a vaginal hysterectomy. Patients who underwent a previous cesarean section could be reassured that they do not face an increased risk of complications during a vaginal hysterectomy. abstract_id: PUBMED:36057560 Outcomes and risk factors for failed trial of labor after cesarean delivery (TOLAC) in women with one previous cesarean section: a Chinese population-based study. Objective: To evaluate the outcomes and risk factors for trial of labor after cesarean delivery (TOLAC) failure in patients in China. Methods: Consecutive patients who had a previous cesarean delivery (CD) and attempted TOLAC were included from 2014 to 2020. Patients who successfully delivered were classified into the TOLAC success group. Patients who attempted TOLAC but had a repeat CD due to medical issues were classified into the TOLAC failure group. Multiple logistic regression analyses were performed to examine the risk factors for TOLAC failure. Results: In total, 720 women who had a previous CD and attempted TOLAC were identified and included. The success rate of TOLAC was 84.2% (606/720). Seven patients were diagnosed with uterine rupture, none of whom underwent hysterectomy. Multiple logistic regression analysis showed that the induction of labor (OR = 2.843, 95% CI: 1.571-5.145, P < 0.001) was positively associated with TOLAC failure, but the thickness of the lower uterine segment (LUS) (OR = 0.215, 95% CI: 0.103-0.448, P < 0.001) was negatively associated with TOLAC failure.
Conclusions: This study suggested that TOLAC was effective in decreasing CD rates in the Chinese population. The induction of labor was positively associated with TOLAC failure, but the thickness of the LUS was negatively associated with TOLAC failure. Our findings need to be confirmed in larger samples with patients of different ethnicities. abstract_id: PUBMED:24732915 Urologic considerations of placenta accreta: a contemporary tertiary care institutional experience. Background: As the incidence of cesarean delivery has increased, we are experiencing a higher incidence of subsequent placenta accreta and the associated complications, including urologic complications. Methods: This is a retrospective review of all patients delivered from 2000 to 2011 with a histologically proven diagnosis of placenta accreta. Data were analyzed for baseline maternal characteristics, intraoperative and postoperative outcomes and complications. Results: 83 patients were included in the analysis. The depth of placenta accreta invasion varied in the cohort, with 48, 25 and 27% being classified as placenta accreta, placenta increta and placenta percreta, respectively. 88% of patients had had a previous cesarean delivery, and 58% had more than one prior operative delivery. Cystotomy was encountered in 27% of patients and ureteral injury occurred in 4%. Degree of placenta accreta invasion, number of prior cesarean deliveries and intraoperative blood loss were associated with a higher likelihood of urologic injury. Conclusions: Urologic injuries are among the most frequently encountered intraoperative complications of placenta accreta. Surgeons involved in these cases need to be aware of this risk and maintain a high level of surveillance intraoperatively. abstract_id: PUBMED:24293845 A Case Series of Uterine Rupture: Lessons to be Learned for Future Clinical Practice. Objective: In this article, we try to discuss risk factors and diagnostic difficulties for uterine rupture. Methods: Case series of 12 cases of uterine rupture observed in the Norfolk and Norwich University Hospital in the UK, with an average yearly birth rate of 6,000 deliveries, over a 6-year period. Results: In the present case series, there was no maternal mortality, and uterine rupture was a rare occurrence (12 in 36,000 births). Uterine rupture is associated with clinically significant uterine bleeding, fetal distress, expulsion or protrusion of the fetus, placenta or both into the abdominal cavity, and the need for prompt cesarean delivery and uterine repair or hysterectomy. The risk factors for rupture include previous cesarean sections, multiparity, malpresentation and obstructed labor, uterine anomalies, and use of prostaglandins for induction of labor. Previous cesarean section is, however, the most commonly associated risk factor. The most consistent early indicator of uterine rupture is the onset of a prolonged, persistent, and profound fetal bradycardia. Conclusion: In this case series, we suggest that the signs and symptoms of uterine rupture are typically nonspecific, which makes diagnosis difficult. Delay in definitive therapy causes significant fetal morbidity. The inconsistent signs and the short window for definitive treatment make uterine rupture a challenging event. For the best outcome, vaginal birth after previous cesarean section needs to be managed in an appropriately staffed and equipped unit capable of immediate cesarean delivery and advanced neonatal support.
abstract_id: PUBMED:20226406 Laparoscopic hysterectomy in the presence of previous caesarean section: a review of one hundred forty-one cases in the Sydney West Advanced Pelvic Surgery Unit. Objective: To examine whether laparoscopic hysterectomy is safe in the presence of previous caesarean section (CS). Design: Canadian Task Force Classification II-2. Setting: Laparoscopic hysterectomies performed for nonmalignant conditions by 7 gynecologic surgeons in public and private hospitals in Western Sydney. Patients: Data were collected from January 2001 through December 2007, involving 574 patients, of which 141 patients had 1 or more previous CS. Intervention: Laparoscopic hysterectomy. Measurements: Conversions to laparotomy and major intraoperative and postoperative complications (within 6 weeks of surgery) were recorded and compared between cohorts of patients with and without previous caesarean sections. Main Results: Of the 574 laparoscopic hysterectomies identified, 141 (24.6%) patients had at least 1 previous CS. Most women with previous CS had only 1 CS (51.8%), whereas 13.5% had 3 or more CS. The overall major complication rate among patients undergoing laparoscopic hysterectomy was 10.1%. The most common complications were hemorrhage (7.3% of patients) and inadvertent cystotomy (2.1%). The rate of major complications varied between the CS and non-CS groups. Among the non-CS group, the complication rate was 8.8%, whereas the complication rate among the CS group was 14.2%. The rate of inadvertent cystotomy in the group with no previous CS was 5 in 433 patients (1.2%). The rate of bladder complications showed an increase with the number of previous CS: 2.5% of patients with 1 or 2 previous CS and 21.1% of patients with 3 or more previous CS. The rate of inadvertent cystotomy in patients with 3 or more CS was 18 times that of patients with no CS (95% CI 5.1, 66.0). Twenty-four (5.5%) patients without previous CS and 15 (10.6%) patients with previous CS required conversion to laparotomy because of dense bladder or bowel adhesions. Conclusion: Laparoscopic hysterectomy in the setting of previous CS is recommended because long-term sequelae are rare. There are higher rates of major complications in patients undergoing laparoscopic hysterectomy with previous CS; the higher the number of previous CS, the higher the rate of complications. The most significant increase is seen in patients with more than 2 previous CS. abstract_id: PUBMED:32753310 Etiopathogenesis and risk factors for placental accreta spectrum disorders. Placenta accreta spectrum (PAS) disorders, comprising placenta accreta, increta, and percreta, are associated with serious maternal morbidity and mortality in both the developed and the developing world. The incidence of PAS has increased in recent years, and the rising rates of cesarean section, placenta accreta in previous pregnancies, and other uterine surgeries, including myomectomies and repeated endometrial curettage, are implicated in its etiopathogenesis. The absolute risk of PAS increases with the number of previous cesarean sections. The PAS remains undiagnosed in one-half to two-thirds of cases, thus increasing maternal morbidity and mortality. Understanding the etiopathogenesis and risk factors of this condition allows early diagnosis and planning of delivery, and would thereby help improve maternal and fetal outcomes. abstract_id: PUBMED:28508030 Incidental leiomyosarcoma found at the time of cesarean hysterectomy for morbidly adherent placenta.
Background: Incidental leiomyosarcoma (LMS) is a rare diagnosis in pregnancy or in the puerperium. To our knowledge, this is the first case reported in the literature of incidental LMS after cesarean hysterectomy for morbidly adherent placenta. Case: We present a case of a cesarean hysterectomy performed for a suspected morbidly adherent placenta in a patient with three prior cesarean deliveries, an anterior placenta previa and a fundal fibroid. Subsequent pathology identified a LMS on final specimen. The patient declined bilateral oophorectomy and removal of her remaining cervix. No chemotherapy or radiation was given for her presumed stage IB disease. Conclusion: An incidental finding of a LMS is infrequent; the risk of recurrence is > 50% even if the sarcoma is removed in its entirety. abstract_id: PUBMED:32021465 Perioperative Complications of Hysterectomy After a Previous Cesarean Section: A Systematic Review and Meta-Analysis. Background: With increasing rates of cesarean sections (CS), the number of hysterectomies performed among women with a previous CS is on the rise. Objective: To quantify the association between a previous CS and the odds of complications following a hysterectomy performed later in life. Search Strategy: A comprehensive search was performed using major electronic databases, ie, MEDLINE, Scopus, ISI Web of Science, from their inception to April 2019. Selection Criteria: Analytical studies, irrespective of language or publication status, were included. Data Collection And Analysis: Outcomes were extracted in duplicate. The methodological quality of the included studies was independently evaluated by two review authors. A three-level meta-analysis was applied for outcomes with dependent effect sizes. Main Results: Twenty-six studies were included involving 54,815 women. The odds of the following complications were increased in women with a previous CS: urinary tract injury (pooled unadjusted odds ratio (OR)=3.15, 95% CI=2.01-4.94, 15 studies, 33,902 women, and pooled adjusted OR=2.21, 95% CI=1.46-3.34, 3 studies, 31,038 women), gastrointestinal tract injury (pooled unadjusted OR=1.73, 95% CI=1.19-2.53; 7 studies, 30,050, and pooled adjusted OR=1.83, 95% CI=1.11-3.03, 1 study, 25,354 women), postoperative infections (pooled unadjusted OR=1.44, 95% CI=1.22-1.71, 6 studies, 37,832 women), wound complications (pooled unadjusted OR=2.24, 95% CI=1.94-2.57, 9 studies, 37,559 women), reoperation (pooled unadjusted OR=1.46, 95% CI=1.19-1.78, 2 studies, 9,899 women), and blood transfusion (pooled unadjusted OR=1.35, 95% CI=1.03-1.76, 7 studies, 13,430 women). Conclusion: Previous CS increases the risks of various complications following hysterectomy. This information reminds gynecologists to be aware of the associations between previous CS and potential complications among women undergoing hysterectomy. Prospero Registration Number: CRD42018085061. Answer: Yes, previous cesarean section is a risk factor for incidental cystotomy at the time of hysterectomy. Studies have shown that women with a history of previous cesarean section have a significantly increased risk of damage to the lower urinary tract during hysterectomy.
Specifically, the odds ratio (OR) for incidental cystotomy during total abdominal hysterectomy (TAH), total vaginal hysterectomy (TVH), and laparoscopically-assisted vaginal hysterectomy (LAVH) in women with a history of previous cesarean was found to be 1.26, 3.00, and 7.50, respectively, with the value for LAVH being statistically significant (P = .005; 95%CI 1.8-31.4) (PUBMED:16325612). Another study confirmed that prior cesarean delivery is an independent risk factor for incidental cystotomy, with an adjusted OR of 2.86 (95% CI: 1.39-5.92) (PUBMED:22453160). Furthermore, the rate of major complications, including inadvertent cystotomy, was higher among patients with previous cesarean sections undergoing laparoscopic hysterectomy, with the rate increasing with the number of previous cesarean sections (PUBMED:20226406). These findings suggest that a history of cesarean section should be considered when evaluating the risk of incidental cystotomy during hysterectomy procedures.
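For readers unfamiliar with how figures such as "adjusted OR 2.86 (95% CI 1.39-5.92)" arise, the sketch below shows the standard unadjusted odds-ratio calculation with a Woolf (log-based) 95% confidence interval from a 2x2 table. The counts are hypothetical and chosen purely for illustration; the adjusted ORs quoted above come from multivariate logistic regression, which this crude calculation does not reproduce.

```python
# Unadjusted odds ratio with a Woolf (log-based) 95% CI from a 2x2 table.
# The counts below are hypothetical, for illustration only; they are not
# the raw data behind the adjusted ORs quoted in the answer above.
import math

# rows: prior cesarean yes/no; columns: cystotomy yes/no (hypothetical)
a, b = 12, 88    # prior CS:    cystotomy / no cystotomy
c, d = 22, 458   # no prior CS: cystotomy / no cystotomy

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

An OR above 1 whose confidence interval excludes 1, as in the studies cited here, is what marks prior cesarean as a statistically significant risk factor.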
Instruction: Osteo-articular tuberculosis and postpartum: a casual association? Abstracts: abstract_id: PUBMED:15961193 Osteo-articular tuberculosis and postpartum: a casual association? Introduction: Most cases of active tuberculosis in France are due to a recurrence of latent tuberculosis. It seems that immunorestitution during the postpartum can contribute to the return of latent tuberculosis. Exegesis: We report three observations of Mycobacterium tuberculosis osteo-articular infections (two Pott's diseases and one sterno-clavicular arthritis) occurring during the postpartum of women not infected by HIV. Two patients needed surgical treatment. The response to standard antituberculosis treatment was favourable and all patients were cured. Conclusion: One must consider osteo-articular tuberculosis when a patient suffers from unexplained bone pain during the postpartum. We must bear in mind the relationship between tuberculosis and the postpartum period, as well as the necessity to treat both mother and child. Additional epidemiological studies should be conducted. It appears necessary to strengthen tuberculosis screening measures in France, particularly for latent forms. abstract_id: PUBMED:37690276 Diagnosis and treatment of osteo-articular tuberculosis of the foot and ankle (a five case series). Introduction: Osteo-articular tuberculosis is a rare manifestation of this disease, often posing diagnostic challenges that necessitate additional diagnostic imaging modalities such as radiography, CT, and MRI. This article presents a series of five cases involving tuberculosis affecting the bones of the foot and ankle, diagnosed at various stages. The patients received appropriate anti-tuberculosis medical treatment following national protocols, along with surgical interventions when necessary. Case Studies: In this series, we describe the clinical characteristics and management of five cases of foot and ankle bone tuberculosis. These cases were diagnosed at different stages, and all patients received standard anti-tuberculosis medical therapy according to national treatment guidelines. Surgical interventions were performed when deemed necessary to optimize patient outcomes. Discussion: The diagnosis of bone tuberculosis should be considered in any clinical scenario that presents with uncertain features, persistent symptoms, or resistance to conventional treatment approaches. It is crucial to employ a multidisciplinary approach involving medical and surgical management to effectively address this challenging disease. However, it is important to note that surgical intervention cannot replace the necessity of proper medical treatment. Conclusion: Tuberculosis involving the bones of the foot and ankle remains an infrequent occurrence. However, considering the endemic context, prompt therapeutic interventions are essential to prevent significant osteoarticular damage. Early diagnosis, adherence to established treatment protocols, and a comprehensive approach encompassing both medical and surgical modalities are crucial for successful management of this rare entity. abstract_id: PUBMED:18893395 Climatic boost and osteo-articular tuberculosis.
N/A abstract_id: PUBMED:491909 Osteo-articular tuberculosis in African (author's transl) General, clinical and therapeutic features of osteo-articular tuberculosis in Africans, excluding vertebral localizations, are compiled from 81 records. They are: -- a frequency lower than in expatriated Africans, which indicates a special physical resistance when they live in their natural environment; -- frequently an easy diagnosis because of advanced infected foci with associated lesions; -- a surgical indication (curettage, resection, arthrodesis) as frequent as in vertebral localizations. abstract_id: PUBMED:28764233 Multifocal Tubercular Dactylitis: A Rare Presentation of Skeletal Tuberculosis in an Adult. Tubercular dactylitis is an uncommon form of osteo-articular tuberculosis seen in children. Multifocal involvement, simultaneously involving the hands and feet, is extremely uncommon. Here we report an adult patient with tubercular dactylitis involving multiple digits of both hands and the second digit of the right foot in the absence of any risk factors such as immunodeficiency or any debilitating condition. The patient was successfully treated with anti-tubercular drugs for six months. Mycobacterium tuberculosis infection of bones and joints can present in an unusual way, but early diagnosis and treatment carries a good prognosis. abstract_id: PUBMED:13130240 Tuberculosis of the knee: MRI findings in two pediatric cases Osteo-articular tuberculosis is rare in infants. The MRI findings reported for adolescents and young adults mainly relate to spinal involvement. Two cases of osteo-articular tuberculosis of the knee in infants are presented. Both had been correctly vaccinated. Skin test and chest radiography were normal. The course was insidious in one case. Osseous, medullary, cartilaginous and soft tissue abnormalities revealed by MRI were suspicious for tuberculosis. The diagnosis was confirmed by histology in both cases and by bacteriology in one case. The aim of this study is to report the MRI features of osteo-articular tuberculosis in pediatric patients. abstract_id: PUBMED:737957 Osteo-articular tuberculosis: its presentation in coloured races. Four hundred and ninety-nine cases of osteo-articular tuberculosis have been analysed with special emphasis on the "unusual" presentation. Sclerotic bone reaction in association with active tuberculous osteitis is not at all uncommon and may occur in any affected bone. Maintenance of disc spaces with tuberculous spondylitis is not uncommon. Periosteal reactions, often very gross, occur commonly in multicystic tuberculosis as well as in isolated lesions. Multiple bone involvement occurred in 8.2% of this series. Tomography delineates the total extent of involvement, especially in central involvement of vertebral bodies. Healed tuberculosis may give rise to block vertebrae indistinguishable from the congenital type except for the absence of waisting at the level of fusion. abstract_id: PUBMED:18144765 Aetiopathogenesis of osteo-articular tuberculosis. N/A abstract_id: PUBMED:2086659 Clinicosocial aspect of osteo-articular tuberculosis. The study constituted a clinicosocial analysis of 194 cases of osteo-articular tuberculosis. The disease was common in the 1st and 2nd decades of life but not rare in old age, with a male preponderance. The majority of the patients were from rural areas and belonged to lower economic classes. Lesions were usually solitary (96.4%) and situated in the weight-bearing bones and joints (88.66%). Spinal tuberculosis was commonest (48.97%).
The onset of disease was insidious in 94.8% of cases. Chronic pain, swelling, impaired movement, deformities, sinuses and cold abscesses were found to be the salient features of the disease. Negligence and secondary infection were common. abstract_id: PUBMED:13046032 Modern treatment of osteo-articular tuberculosis N/A Answer: The association between osteo-articular tuberculosis and the postpartum period is not merely casual but may be related to immunological changes that occur during this time. The report by PUBMED:15961193 suggests that immunorestitution during the postpartum period can contribute to the reactivation of latent tuberculosis. This is supported by observations of Mycobacterium tuberculosis osteo-articular infections, such as Pott's disease and sterno-clavicular arthritis, occurring during the postpartum period in women who were not infected by HIV. The patients in these observations responded favorably to standard tuberculosis treatment and were cured, indicating that osteo-articular tuberculosis should be considered when a patient presents with persistent osseous pain during the postpartum period. The necessity to treat both mother and child and to conduct additional epidemiological studies is emphasized, along with the need to improve measures for tracking tuberculosis, particularly latent forms, in France. The abstracts do not provide direct evidence of a causal relationship between the postpartum period and osteo-articular tuberculosis, but they do highlight the importance of considering this form of tuberculosis in postpartum women presenting with bone pain. The abstracts also underscore the need for proper diagnosis and treatment, which may include both medical therapy and surgical interventions when necessary (PUBMED:37690276). The rarity of the disease and the challenges in diagnosis, especially in cases with unusual presentations or in populations with certain risk factors, are also noted (PUBMED:28764233, PUBMED:491909, PUBMED:737957, PUBMED:2086659). In conclusion, while the abstracts do not establish a definitive causal link, they suggest that there is an association between the postpartum period and the reactivation of osteo-articular tuberculosis, warranting consideration of this diagnosis in affected individuals.
Instruction: Are people healthier if their partners are more optimistic? Abstracts: abstract_id: PUBMED:24840138 Are people healthier if their partners are more optimistic? The dyadic effect of optimism on health among older adults. Objective: Optimism has been linked with an array of positive health outcomes at the individual level. However, researchers have not examined how a spouse's optimism might impact an individual's health. We hypothesized that being optimistic (and having an optimistic spouse) would both be associated with better health. Methods: Participants were 3940 adults (1970 couples) from the Health and Retirement Study, a nationally representative panel study of American adults over the age of 50. Participants were tracked for four years and outcomes included: physical functioning, self-rated health, and number of chronic illnesses. We analyzed the dyadic data using the actor-partner interdependence model. Results: After controlling for several psychological and demographic factors, a person's own optimism and their spouse's optimism predicted better self-rated health and physical functioning (bs = .08-.25, ps < .01). More optimistic people also reported better physical functioning (b = -.11, p < .01) and fewer chronic illnesses (b = -.01, p < .05) over time. Further, having an optimistic spouse uniquely predicted better physical functioning (b = -.09, p < .01) and fewer chronic illnesses (b = -.01, p < .05) over time. The strength of the relationship between optimism and health did not diminish over time. Conclusions: Being optimistic and having an optimistic spouse were both associated with better health. Examining partner effects is important because such analyses reveal the unique role that spouses play in promoting health. These findings may have important implications for future health interventions. abstract_id: PUBMED:24836718 Optimistic bias, sexual assault, and fear. A survey of 431 adults documents optimistic bias regarding people's perceived risk of sexual victimization. The findings extend optimistic bias to crime victimization and contribute to the literature by considering a motivational factor, fear, as a predictor of optimistic bias. The study also yielded significant relationships between optimistic bias and demographic variables, including age, gender, and family structure. abstract_id: PUBMED:12794201 Avoiding risky sex partners: perception of partners' risks v partners' self reported risks. Background: Key strategies advocated for lowering personal risk of sexual exposure to STD/HIV include having fewer partners and avoiding risky partners. However, few studies have systematically examined how well people can actually discern their sex partners' risk behaviours. Methods: We conducted face to face interviews with 151 heterosexual patients with gonorrhoea or chlamydial infection and 189 of their sex partners. Interviews examined the patients' perceptions of their sex partners' sociodemographic characteristics and risk behaviours. Patients' perceptions of partners were then sociometrically compared for agreement with partner self reports, using the kappa statistic for discrete variables and concordance correlation for continuous variables. Results: Agreement was highest for perceived partner age, race/ethnicity, and duration of sexual partnership; and lowest for knowledge of partner's work in commercial sex, number of other sex partners, and for perceived quality of communication within the partnership.
Index patients commonly underestimated or overestimated partners' risk characteristics. Reported condom use was infrequent and inconsistent within partnerships. Conclusion: Among people with gonorrhoea or chlamydial infection, patients' perceptions of partners' risk behaviours often disagreed with the partners' self reports. Formative research should guide development and evaluation of interventions to enhance sexual health communication within partnerships and within social networks, as a potential harm reduction strategy to foster healthier partnerships. abstract_id: PUBMED:33147740 Optimistic Bias, Food Safety Cognition, and Consumer Behavior of College Students in Taiwan and Mainland China. The purpose of this paper is to investigate how optimistic bias, consumption cognition, news attention, information credibility, and social trust affect the purchase intention of food consumption. Data used in this study came from a questionnaire survey conducted among college students in Taipei and Beijing. Respondents in the two cities returned 258 and 268 questionnaires, respectively. Samples were analyzed through structural equation modelling (SEM) to test the model. Results showed that Taiwanese college students did not have optimistic bias but Chinese students did. The models showed that both Taiwanese and Chinese students' consumption cognition significantly influenced their purchase intention, and news attention significantly influenced only Chinese students' purchase intention. Model comparison analysis suggested significant differences between the models for Taiwan and mainland China. The results revealed that optimistic bias can differ across social contexts, as the Taiwan and mainland China models in this study were indeed different. This study also confirmed that people had optimistic bias on food safety issues, based on which recommendations were made to increase public awareness of food safety as well as to improve the government's certification system. abstract_id: PUBMED:36569788 Optimistic bias, risky behavior, and social norms among Indian college students during COVID-19. Using survey data of college students in India, we investigate whether COVID-19 optimistic bias among individuals increases risky behavior. We also explore whether participants' optimistic bias differs depending on their degree of closeness with others. We found that the presence of friends instead of neighbors/strangers makes participants with high COVID-19 optimistic bias inclined to take more risks. Besides, it has been found that preventive behavioral norms followed by peers minimize risky behavior among participants with high optimistic bias. Our findings offer important implications for policymakers to minimize the transmission of the disease among college students. abstract_id: PUBMED:27330193 What Constitutes Intermarriage for Multiracial People in Britain? Intermarriage is of great interest to analysts because a group's tendency to partner across ethnic boundaries is usually seen as a key indicator of the social distance between groups in a multiethnic society. Theories of intermarriage as a key indicator of integration are, however, typically premised upon the union of white and nonwhite individuals, and we know very little about what happens in the unions of multiracial people, who are the children of intermarried couples. What constitutes intermarriage for multiracial people?
Do multiracial individuals think that ethnic or racial ancestries are a defining aspect of their relationships with their partners? In this article, I argue that there are no conventions for how we characterize endogamous or exogamous relationships for multiracial people. I then draw on examples of how multiracial people and their partners in Britain regard their relationships with their partners and the significance of their and their partners' ethnic and racial backgrounds. I argue that partners' specific ancestries do not necessarily predict the ways in which multiracial individuals regard their partners' ethnic and racial backgrounds as constituting difference or commonality within their relationships. abstract_id: PUBMED:28614670 HIV Testing and Positivity Patterns of Partners of HIV-Diagnosed People in Partner Services Programs, United States, 2013-2014. Objective: Human immunodeficiency virus (HIV) partner services are an integral part of comprehensive HIV prevention programs. We examined the patterns of HIV testing and positivity among partners of HIV-diagnosed people who participated in partner services programs in CDC-funded state and local health departments. Methods: We analyzed data on 21 484 partners submitted in 2013-2014 by 55 health departments. We conducted descriptive and multivariate analyses to examine patterns of HIV testing and positivity by demographic characteristics and geographic region. Results: Of 21 484 partners, 16 275 (75.8%) were tested for HIV; 4503 of 12 886 (34.9%) partners with test results were identified as newly HIV-positive. Compared with partners aged 13-24, partners aged 35-44 were less likely to be tested for HIV (adjusted odds ratio [aOR] = 0.86; 95% confidence interval [CI], 0.78-0.95) and more likely to be HIV-positive (aOR = 1.35; 95% CI, 1.20-1.52). Partners who were male (aOR = 0.89; 95% CI, 0.81-0.97) and non-Hispanic black (aOR = 0.68; 95% CI, 0.63-0.74) were less likely to be tested but more likely to be HIV-positive (male aOR = 1.81; 95% CI, 1.64-2.01; non-Hispanic black aOR = 1.52; 95% CI, 1.38-1.66) than partners who were female and non-Hispanic white, respectively. Partners in the South were more likely than partners in the Midwest to be tested for HIV (aOR = 1.56; 95% CI, 1.35-1.80) and to be HIV-positive (aOR = 2.18; 95% CI, 1.81-2.65). Conclusions: Partner services programs implemented by CDC-funded health departments are successful in providing HIV testing services and identifying previously undiagnosed HIV infections among partners of HIV-diagnosed people. Demographic and regional differences suggest the need to tailor these programs to address unique needs of the target populations. abstract_id: PUBMED:38327504 Artificial intelligence adoption in extended HR ecosystems: enablers and barriers. An abductive case research. Artificial intelligence (AI) has disrupted modern workplaces like never before and has induced digital workstyles. These technological advancements are generating significant interest among HR leaders to embrace AI in human resource management (HRM). Researchers and practitioners are keen to investigate the adoption of AI in HRM and the resultant human-machine collaboration. This study investigates HRM specific factors that enable and inhibit the adoption of AI in extended HR ecosystems and adopts a qualitative case research design with an abductive approach. It studies three well-known Indian companies at different stages of AI adoption in HR functions. 
This research investigates key enablers such as optimistic and collaborative employees, strong digital leadership, reliable HR data, specialized HR partners, and well-rounded AI ethics. The study also examines barriers to adoption: the inability to have a timely pulse check of employees' emotions, ineffective collaboration of HR employees with digital experts as well as external HR partners, and not embracing AI ethics. This study contributes to the theory by providing a model for AI adoption and proposes additions to the unified theory of acceptance and use of technology in the context of AI adoption in HR ecosystems. The study also contributes to the best-in-class industry HR practices and digital policy formulation to reimagine workplaces, promote harmonious human-AI collaboration, and make workplaces future-ready in the wake of massive digital disruptions. abstract_id: PUBMED:37694969 Healthier Food Purchase and Its Determinants in an Urban Resettlement Colony of Delhi. Dietary risk, one of the major risk factors for the increasing burden of non-communicable diseases, is influenced by household food choices and purchases. A community-based cross-sectional study was conducted in 250 randomly selected households of an urban resettlement colony in Delhi to estimate the proportion of households purchasing different healthier food options during the last purchasing occasion and to identify its key determinants. Purchase of healthier options in staple items like wheat flour with fiber (100%), plant-based oils (97.9%), unpolished pulses (96.2%), and toned milk (94.5%) was high. Affordability and health considerations in food purchases were identified as key determinants. abstract_id: PUBMED:36324439 The proportion of HIV disclosure to sexual partners among people diagnosed with HIV in China: A systematic review and meta-analysis. Background: Sexual behavior is one of the main routes of HIV/AIDS spread. HIV disclosure to sexual partners has been confirmed to be an important strategy for HIV/AIDS prevention and control. We conducted a systematic review and meta-analysis to pool proportions and characteristics of HIV disclosure to sexual partners among people diagnosed with HIV in China. Methods: We searched eight databases and extracted the data on HIV disclosure to partners. Heterogeneity of the data was tested with I². Publication bias was analyzed subjectively and objectively through the funnel plot and Egger's regression test. Subgroup analyses were performed to explore the variation in the proportion by sexual partnership type (unclassified, regular, casual sexual partners), whether participants were men who have sex with men (MSM), and timing of diagnosis. The sources of heterogeneity were analyzed. Sensitivity analysis was carried out to evaluate the stability of the results. Results: Out of 3,698 studies, 44 were included in the review; 11 focused on MSM. The pooled proportion of HIV disclosure to sexual partners was 65% (95% CI: 56%-75%; 34 studies). Sub-group analyses indicated the proportions of HIV disclosure to regular, casual and unclassified sexual partners were 63% (95% CI: 45%-81%; 31 studies), 20% (95% CI: 8%-33%; nine studies), and 66% (95% CI: 59%-73%; 14 studies), respectively. Fifty-seven percent (95% CI: 45%-69%; three studies) disclosed on the day of diagnosis, 62% (95% CI: 42%-82%; four studies) disclosed within 1 month, and 39% (95% CI: 2%-77%; four studies) disclosed 1 month later.
Among MSM, the proportions of disclosure to regular male partners, regular female sexual partners, spouses, and casual partners were 47% (95% CI: 29%-65%; six studies), 49% (95% CI: 33%-65%; three studies), 48% (95% CI: 18%-78%; seven studies), and 34% (95% CI: 19%-49%; four studies), respectively. Conclusions: The prevalence of disclosure to sexual partners among people diagnosed with HIV in China still needs improvement, and it varies by partner type, key population, and time since diagnosis. HIV disclosure strategies and procedures need to be more detailed and tailored to the weak points of current disclosure practice, so as to ultimately prevent HIV transmission through sexual contact. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022291631, identifier: CRD42022291631. Answer: Yes, people are healthier if their partners are more optimistic. A study involving 3940 adults (1970 couples) from the Health and Retirement Study found that both a person's own optimism and their spouse's optimism predicted better self-rated health and physical functioning. More optimistic individuals reported better physical functioning and fewer chronic illnesses over time. Additionally, having an optimistic spouse uniquely predicted better physical functioning and fewer chronic illnesses over time, even after controlling for several psychological and demographic factors. The relationship between optimism and health did not diminish over time, suggesting that the positive effects of optimism on health are consistent (PUBMED:24840138).
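The actor-partner interdependence model mentioned in this answer separates an "actor" effect (one's own optimism on one's own health) from a "partner" effect (the spouse's optimism on one's health). The sketch below approximates that idea with an ordinary regression on simulated dyads, using couple-clustered standard errors; it is a simplified stand-in under synthetic data, not the exact multilevel/SEM specification used in PUBMED:24840138.

```python
# Simplified actor-partner sketch on simulated couples: regress each person's
# health on their own optimism (actor effect) and their spouse's optimism
# (partner effect), clustering standard errors by couple. All data synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_couples = 500
own = rng.normal(size=2 * n_couples)                  # each person's optimism
partner = own.reshape(n_couples, 2)[:, ::-1].ravel()  # their spouse's optimism
couple_id = np.repeat(np.arange(n_couples), 2)

# health depends on both the actor's and the partner's optimism, plus noise
health = 0.25 * own + 0.10 * partner + rng.normal(size=2 * n_couples)

X = sm.add_constant(np.column_stack([own, partner]))
fit = sm.OLS(health, X).fit(cov_type="cluster", cov_kwds={"groups": couple_id})
print(fit.params)  # approx [0, 0.25, 0.10]: intercept, actor, partner effects
print(fit.bse)     # couple-clustered standard errors
```

The clustered standard errors acknowledge that the two members of a couple are not independent observations, which is the core statistical concern the APIM addresses.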
Instruction: Does local injury to the endometrium before IVF cycle really affect treatment outcome? Abstracts: abstract_id: PUBMED:22943664 Does local injury to the endometrium before IVF cycle really affect treatment outcome? Results of a randomized placebo controlled trial. Aim: To evaluate the effect of local injury to the endometrium during spontaneous menstrual cycles before in vitro fertilization (IVF) treatment on implantation and pregnancy rates in women with recurrent implantation failure (RIF). Methods: In a prospective randomized controlled trial (RCT), a total of 36 patients with RIF undergoing IVF were randomized to two groups. In 18 patients, endometrial biopsies were performed using a pipelle curette on days 9-12 and 21-24 of the menstrual cycle preceding IVF treatment. In 18 control patients, a cervical pipelle was performed. Results: The implantation (2.08% versus 11.11%; p = 0.1), clinical pregnancy (0% versus 31.25%; p < 0.05) and live birth (0% versus 25%; p = 0.1) rates were lower in the experimental group compared with controls. Conclusion: Our RCT did not find any benefit from local injury to the endometrium in women with a high number of RIFs. Further studies are warranted to better define the target population of patients who may benefit from this procedure. abstract_id: PUBMED:20607003 Does local endometrial injury in the nontransfer cycle improve the IVF-ET outcome in the subsequent cycle in patients with previous unsuccessful IVF? A randomized controlled pilot study. Background: Management of repeated implantation failure despite transfer of good-quality embryos still remains a dilemma for ART specialists. Scraping of the endometrium in the nontransfer cycle has been shown to improve the pregnancy rate in the subsequent IVF/ET cycle in recent studies. Aim: The objective of this randomized controlled trial (RCT) was to determine whether endometrial injury caused by Pipelle sampling in the nontransfer cycle could improve the probability of pregnancy in the subsequent IVF cycle in patients who had previous failed IVF outcome. Setting: Tertiary assisted conception center. Design: Randomized controlled study. Materials And Methods: 100 eligible patients with previous failed IVF despite transfer of good-quality embryos were randomly allocated to the intervention and control groups. In the intervention group, Pipelle endometrial sampling was done twice: once in the follicular phase and again in the luteal phase in the cycle preceding the embryo transfer cycle. Outcome Measure: The primary outcome measure was live birth rate. The secondary outcome measures were implantation and clinical pregnancy rates. Results: The live birth rate was significantly higher in the intervention group compared to the control group (22.4% vs 9.8%, P = 0.04). The clinical pregnancy rate in the intervention group was 32.7%, while that in the control group was 13.7%, which was also statistically significant (P = 0.01). The implantation rate was significantly higher in the intervention group as compared to controls (13.07% vs 7.1%, P = 0.04). Conclusions: Endometrial injury in the nontransfer cycle improves the live birth, clinical pregnancy and implantation rates in the subsequent IVF-ET cycle in patients with previous unsuccessful IVF cycles. abstract_id: PUBMED:19352180 Local injury to the endometrium: its effect on implantation. Purpose Of Review: To review the effect of local injury to the endometrium on the implantation rate in IVF-embryo transfer. Recent Findings: In 2003, Barash et al.
reported that endometrial sampling of IVF patients using a biopsy catheter substantially increases their chances to conceive in the following IVF-embryo transfer cycle. Such a favorable influence of local injury to the endometrium was later confirmed by Raziel et al. Our previous studies demonstrated that removal of polyps or thickened endometrium 2 weeks before embryo transfer significantly improves the incidence of successful pregnancies in patients undergoing IVF. In 2008, our study suggested that the gene-expression profiles of endometria from patients with different pregnancy outcomes are different. Summary: Local injury to the endometria of IVF patients in a controlled ovarian hyperstimulation cycle may increase the incidence of embryo implantation. abstract_id: PUBMED:36287634 The effect of endometrial scratching on reproductive outcomes in infertile women undergoing IVF treatment cycles. This study was a Randomised Controlled Trial aiming to evaluate the effect of Endometrial Scratching (ES) on fertility rate. Participants were primary infertile women undergoing IVF treatment. ES for the intervention group was done using endometrial aspiration in the luteal phase of the cycle before embryo transfer. In both groups, 2-3 8-celled embryos were transferred after endometrial preparation by Oestrogen and Progesterone. There were no significant differences between the two groups in terms of age, BMI and endometrial thickness (ET). No significant differences were found between intervention and control groups in chemical pregnancy rate (p = 0.410), clinical pregnancy (p = 0.822), the number of abortions (p = 0.282) and the implantation rate (p = 0.777). Local ES had no significant effects in improving the IVF success rate and reducing the embryo abortion rate. Impact statement: What is already known on this subject? Endometrial scratching (ES) is a local injury to the endometrium that was assumed to affect implantation in IVF and IUI cycles positively. However, various studies have shown conflicting results on this matter. What do the results of this study add? Local ES had no significant effects on improving the IVF success rate and reducing the embryo abortion rate in patients with the first IVF cycle. What are the implications of these findings for clinical practice and/or further research? Larger clinical trials can measure the usefulness of ES with higher powers. However, this study, along with other clinical trials, can help evaluate the ES effect in future meta-analyses. abstract_id: PUBMED:28511086 Local endometrial injury in women with failed IVF undergoing a repeat cycle: A randomized controlled trial. Objective: To evaluate the effectiveness of local endometrial injury in women undergoing in vitro fertilization (IVF) with at least one previous unsuccessful attempt. Study Design: Randomized controlled trial. Recruited women were randomized into two groups. In group A (pipelle group), women underwent pipelle biopsy twice in the luteal phase in the cycle prior to IVF. In group B (control), women did not undergo any intervention prior to IVF. The primary outcome was clinical pregnancy rate. The secondary outcomes included live birth, miscarriage, multiple pregnancy and preterm delivery rates. Results: One hundred and eleven women were included in the study with 55 in the pipelle group and 56 in the control arm. The baseline clinical characteristics were similar in both groups. The clinical pregnancy rates were not significantly different between pipelle and control group (34.09% vs.
27.65%; Odds ratio, OR 1.35, 95% confidence interval, CI 0.55-3.30). The live birth (31.81% vs. 25.53%; OR 1.36, 95% CI 0.55-3.39), multiple pregnancy (33.33% vs. 61.54%; OR 0.31, 95% CI 0.07-1.47), miscarriage (6.66% vs. 7.69%; OR 0.86, 95% CI 0.05-15.23) and preterm delivery rates (35.71% vs. 66.66%; OR 0.28, 95% CI 0.05-1.4) were also not significantly different between the two groups. Conclusion: The current study did not find any improvement in IVF success rates following endometrial injury in women undergoing IVF after a previous failed attempt. abstract_id: PUBMED:25246928 Efficacy of the local endometrial injury in patients who had previous failed IVF-ICSI outcome. Background: The latest studies reported that local endometrial injury is a useful method to improve the success of IVF-ICSI outcome. Objective: To assess whether local endometrial injury performed by Pipelle in the spontaneous cycle could improve implantation rate, cleavage rate, and pregnancy outcome in the subsequent IVF-ICSI cycle in patients who had recurrent IVF failure. Materials And Methods: An endometrial biopsy was performed on day 21 in 41 patients as the intervention group in this retrospective cross-sectional study. The control group contained 42 women. Results: Implantation rate was 22.5% and 10.5% in the intervention and control group, respectively, and this difference was found to be statistically significant (p=0.001). Pregnancy rate was 43.9% in the intervention group and this parameter was significantly lower in the control group (21.4%) (p=0.03). Conclusion: Local endometrial injury in the nontransfer cycle increases the implantation rate and pregnancy rate in the subsequent IVF-ICSI cycle in patients who had previous failed IVF-ICSI outcome. abstract_id: PUBMED:27294218 The effect of endometrial injury on first cycle IVF/ICSI outcome: A randomized controlled trial. Background: Implantation remains a limiting step in IVF/ICSI. Endometrial injury is a promising procedure aiming at improving the implantation and pregnancy rates after IVF/ICSI. Objective: The aim of this study was to evaluate the effect of endometrial injury induced in the preceding cycle on IVF/ICSI outcome. Materials And Methods: Four hundred patients undergoing their first IVF/ICSI cycle in two IVF units in Minia, Egypt were randomly selected to undergo either endometrial injury in the luteal phase of the preceding cycle (intervention group) or no treatment (control group). Primary outcomes were the implantation and live birth rates, while the secondary outcomes were clinical pregnancy, miscarriage and multiple pregnancy rates, and pain and bleeding during and after the procedure. Results: Implantation and live birth rates were significantly higher in the intervention compared with the control group (22.4% vs. 18.7%, p=0.02 and 67% vs. 28%, p=0.03), respectively. There was also a significant reduction in miscarriage rate in the intervention group (4.8% vs. 19.7%, respectively, p<0.001). Conclusion: Endometrial injury in the preceding cycle improves the implantation rate and live birth rate and reduces the miscarriage rate per clinical pregnancy in patients undergoing their first IVF/ICSI cycle. abstract_id: PUBMED:22885017 Local endometrial injury and IVF outcome: a systematic review and meta-analysis. A systematic review was conducted of the influence of local endometrial injury (LEI) on the outcome of the subsequent IVF cycle.
MEDLINE, EMBASE, the Cochrane Library, National Research Register, ISI Conference Proceedings, ISRCTN Register and Meta-register were searched for randomized controlled trials to October 2011. The review included all trials comparing the outcome of IVF treatment in patients who had LEI in the cycle preceding their IVF treatment with controls in which endometrial injury was not performed. The main outcome measures were clinical pregnancy and live birth rates. In total, 901 participants were included in two randomized (n=193) and six non-randomized controlled studies (n=708). The quality of the studies was variable. Meta-analysis showed that clinical pregnancy rate was significantly improved after LEI in both the randomized (relative risk, RR, 2.63, 95% CI 1.39-4.96, P=0.003) and non-randomized studies (RR 1.95, 95% CI 1.61-2.35, P<0.00001). The improvement did not reach statistical significance in the one randomized study which reported the live birth rate (RR 2.29, 95% CI 0.86-6.11). Robust randomized trials comparing a standardized protocol of LEI before IVF treatment with no intervention in a well-defined patient population are needed. abstract_id: PUBMED:17681303 Local injury to the endometrium in controlled ovarian hyperstimulation cycles improves implantation rates. Objective: To explore the possibility that local injury to the endometrium in controlled ovarian hyperstimulation cycle improves the incidence of embryo implantation and to analyze the gene expression profile in the endometria of pregnant and nonpregnant patients in in vitro fertilization/embryo transfer (IVF-ET). Design: Prospective study. Setting: A clinical assisted reproductive center of a university hospital. Patient(s): Women undergoing fresh IVF-ET cycles (n = 121), treated with a long protocol for controlled ovarian hyperstimulation, whose endometrium was diagnosed by B-ultrasound showing irregular echo. Intervention(s): Local injury to the endometrium of 60 patients in controlled ovarian hyperstimulation cycle, who were randomly selected from a total of 121 patients. Seven endometrial biopsy samples from day 10 were analyzed by Affymetrix U133 plus 2.0 gene chip. Main Outcome Measure(s): Outcomes of IVF-ET and gene expression assayed by gene chip technology. Result(s): Transfer of the same number of embryos (135 in the experimental and control patients, respectively) resulted in rates of implantation (33.33% vs. 17.78%), clinical pregnancy (48.33% vs. 27.86%), and ongoing or live births per ET (41.67% vs. 22.96%) that were higher in the experimental group compared with controls. Statistically significant differences of the expression level of 218 genes (41 up-regulated and 177 down-regulated) were detected in the endometrial biopsy samples from clinical pregnant patients and nonpregnant patients. Conclusion(s): The results suggested local injury to the endometrium during a COH cycle improved the rates of embryo implantation, clinical pregnancy and live birth in ART. We also demonstrated a statistically significant difference in the messenger RNA (mRNA) expression profiles in the endometrium of pregnant and nonpregnant patients. Further studies on the genes identified herein will assist in predicting implantation competence. abstract_id: PUBMED:12798877 Local injury to the endometrium doubles the incidence of successful pregnancies in patients undergoing in vitro fertilization. Objective: Exploration of the possibility that local injury of the endometrium increases the incidence of implantation.
Design: Prospective study. Setting: Clinical IVF unit. Patient(s): A group of 134 patients, defined as good responders to hormonal stimulation, who failed to conceive during one or more cycles of IVF and embryo transfer (ET). Intervention(s): The IVF treatment and ET were preceded by repeated endometrial biopsies, in a randomly selected 45 of a total of 134 patients. Main Outcome Measures: Outcome of IVF-ET treatments. Result(s): Transfer of a similar number of embryos (3.4 +/- 1.0 and 3.1 +/- 0.9 in the experimental and control patients, respectively) resulted in rates of implantation (27.7% vs. 14.2%, P =.00011), clinical pregnancy (66.7% vs. 30.3%, P =.00009), and live births per ET (48.9% vs. 22.5%, P =.016) that were more than twofold higher in the experimental group as compared to controls. Conclusion(s): These results suggest that IVF treatment that is preceded by endometrial biopsy doubles the chance for a take-home baby. Answer: The effect of local injury to the endometrium before an IVF cycle on treatment outcomes has been the subject of various studies, with conflicting results. Some studies have found that local endometrial injury can improve outcomes for patients with previous unsuccessful IVF attempts. For instance, a randomized controlled trial found that endometrial injury in the nontransfer cycle improved live birth rate, clinical pregnancy, and implantation rates in the subsequent IVF-ET cycle in patients with previous unsuccessful IVF cycles (PUBMED:20607003). Another study suggested that local injury to the endometrium during a controlled ovarian hyperstimulation cycle may increase the incidence of embryo implantation (PUBMED:19352180). Similarly, a study reported that local endometrial injury in the nontransfer cycle increases the implantation rate and pregnancy rate in the subsequent IVF-ICSI cycle in patients who had previous failed IVF-ICSI outcome (PUBMED:25246928), and another study found that endometrial injury in the preceding cycle improves the implantation rate and live birth rate and reduces the miscarriage rate per clinical pregnancy in patients undergoing their first IVF/ICSI cycle (PUBMED:27294218). However, other studies have not found a benefit from local injury to the endometrium. One randomized controlled trial did not find any benefit from local injury to the endometrium in women with a high number of recurrent implantation failures (PUBMED:22943664). Another study concluded that local endometrial injury did not improve IVF success rates in women undergoing IVF after a previous failed attempt (PUBMED:28511086). Additionally, a randomized controlled trial found no significant effects of local endometrial scratching on improving the IVF success rate and reducing the embryo abortion rate (PUBMED:36287634). A systematic review and meta-analysis indicated that clinical pregnancy rate was significantly improved after local endometrial injury in both randomized and non-randomized studies, but the improvement in live birth rate did not reach statistical significance in the one randomized study that reported it (PUBMED:22885017). In summary, the evidence on the effectiveness of local injury to the endometrium before an IVF cycle is mixed. While some studies suggest a positive effect on implantation and pregnancy rates, others do not find a significant benefit.
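The relative-risk arithmetic behind figures such as the RR 2.29 (95% CI 0.86-6.11) quoted in the meta-analysis above can be reproduced from a 2x2 table. A minimal Python sketch, assuming an 11/49 versus 5/51 split for the 100-patient trial (these assumed arm sizes reproduce the reported 22.4% and 9.8% live birth rates; the abstract does not state the exact allocation):

import math

a, n1 = 11, 49  # live births / total, intervention arm (assumed split)
b, n2 = 5, 51   # live births / total, control arm (assumed split)

rr = (a / n1) / (b / n2)
# Standard error of log(RR), then a 95% Wald confidence interval
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 2.29, 95% CI 0.86-6.11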
Instruction: The appropriateness of red blood cell use and the extent of overtransfusion: right decision? Abstracts: abstract_id: PUBMED:29724430 ACR Appropriateness Criteria® Hematuria-Child. Hematuria is the presence of red blood cells in the urine, either visible to the eye (macroscopic hematuria) or as viewed under the microscope (microscopic hematuria). The clinical evaluation of children and adolescents with any form of hematuria begins with a meticulous history and thorough evaluation of the urine. The need for imaging evaluation depends on the clinical scenario in which hematuria presents, including the suspected etiology. Ultrasound and CT are the most common imaging methods used to assess hematuria in children, although other imaging modalities may be appropriate in certain instances. This review focuses on the following clinical variations of childhood hematuria: isolated hematuria (nonpainful, nontraumatic, and microscopic versus macroscopic), painful hematuria (ie, suspected nephrolithiasis or urolithiasis), and renal trauma with hematuria (microscopic versus macroscopic). The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:33958109 ACR Appropriateness Criteria® Radiologic Management of Lower Gastrointestinal Tract Bleeding: 2021 Update. Diverticulosis remains the commonest cause for acute lower gastrointestinal tract bleeding (GIB). Conservative management is initially sufficient for most patients, followed by elective diagnostic tests. However, if acute lower GIB persists, it can be investigated with colonoscopy, CT angiography (CTA), or red blood cell (RBC) scan. Colonoscopy can identify the site and cause of bleeding and provide effective treatment. CTA is a noninvasive diagnostic tool that is better tolerated by patients, can identify actively bleeding site or a potential bleeding lesion in vast majority of patients. RBC scan can identify intermittent bleeding, and with single-photon emission computed tomography, can more accurately localize it to a small segment of bowel. If patients are hemodynamically unstable, CTA and transcatheter arteriography/embolization can be performed. Colonoscopy can also be considered in these patients if rapid bowel preparation is feasible. Transcatheter arteriography has a low rate of major complications; however, targeted transcatheter embolization is only feasible if extravasation is seen, which is more likely in hemodynamically unstable patients. If bleeding site has been previously localized but the intervention by colonoscopy and transcatheter embolization have failed to achieve hemostasis, surgery may be required. Among patients with obscure (nonlocalized) recurrent bleeding, capsule endoscopy and CT enterography can be considered to identify culprit mucosal lesion(s). 
The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. abstract_id: PUBMED:21470238 The appropriateness of red blood cell use and the extent of overtransfusion: right decision? Right amount? Background: Shrinkage of the donor pool coupled with an increasing demand for blood presents a major challenge to maintaining an adequate blood supply. Consequently, it has become even more important to reduce inappropriate blood use, including decisions about when and how much blood to prescribe. This study aimed to ascertain the levels of inappropriate practice and factors associated with it. Study Design And Methods: The medical records of a randomly selected sample of hospital patients in Northern Ireland who received a red blood cell transfusion during 2005 (n = 1474) were reviewed, and inappropriate transfusion and overtransfusion criteria were applied. Logistic regression models were used to identify factors associated with inappropriate practice and overtransfusion. Results: In this study 23% of transfusions were considered inappropriate, occurring most commonly where the lowest hemoglobin (Hb) threshold for transfusion applied. Younger patients, those undergoing surgery, and those with lower comorbidity and higher Hb values were most likely to have an inappropriate transfusion. Among patients appropriately transfused, 19% were overtransfused. Females and those of lower weight (<65 kg) were most likely to be overtransfused. Conclusion: While the choice of criteria used to judge decisions will influence the absolute level of inappropriate or overtransfusion reported, our findings suggest that a significant minority of clinicians are either unaware of or are reluctant to accept lower transfusion thresholds. To further improve transfusion practice, we suggest that barriers to the implementation of recommended transfusion thresholds should be examined and guidance on an appropriate posttransfusion Hb level developed. abstract_id: PUBMED:36767066 Evaluation of Six Years of Appropriateness Level of Blood Transfusion in a Pediatric Ward. Background: Blood transfusion can be considered as a life-saving treatment and is a primary health management topic. This study aims to assess the appropriateness of blood transfusion performed in a large tertiary hospital in Italy. Methods: A multispecialist team composed of hematologists, public health experts and pediatricians analyzed blood transfusions performed between 2018 and 2022 in the pediatric wards, comparing their appropriateness with the available NHS guidelines. Patients' characteristics, clinical features and blood component data were collected and analyzed.
Results: Considering 147 blood transfusions performed in 2018-2022, only eight (5.4%) were performed according to guidelines, while 98 (66.7%) were driven by clinicians' expertise, especially for anemia in genetic syndromes (n = 30; 20.5%) and autoimmune diseases (n = 20; 13.6%). Thirty-nine (26.5%) transfusions could be considered inappropriate, while two (1.4%) blood packs were never transfused after being requested. Conclusions: This analysis is one of the first performed to assess the appropriateness of blood component transfusions by comparing their compliance with NHS guidelines. The importance of this analysis can be explained first from the clinical point of view and second from the economic one. abstract_id: PUBMED:33368693 Effect of patient blood management system and feedback programme on appropriateness of transfusion: An experience of Asia's first Bloodless Medicine Center on a hospital basis. Background: Patient blood management (PBM) programmes minimise red blood cell (RBC) transfusion and improve patient outcomes worldwide. This study evaluated the effect of a multidisciplinary, collaborative PBM programme on the appropriateness of RBC transfusion in medical and surgical departments at a hospital level. Methods/materials: In 2018, the revised PBM programme was launched at the Korea University Anam Hospital, a tertiary hospital with 1048 hospital beds and the first Asian institution where a new computer PBM programme was implemented. Monthly RBC usage and adequacy were analysed from January 2018 to December 2019. The trend of adequacy over time was assessed. Results: A total of 2 201 021 patients were hospitalised and visited an outpatient clinic. The number of RBC units transfused per 10 000 patients decreased from 139.8 for 2018 to 137.3 for 2019. The proportion of patients with Hb <7 g/dL receiving RBC transfusion increased significantly: 29.1%, 34.5%, 40.4% and 40.6% for periods 1, 2, 3 and 4, respectively (p < 0.001). The appropriateness of RBC transfusion significantly increased for medical (35.2%, 41.5%, 49.6% and 74.3% for periods 1, 2, 3 and 4, respectively [p < 0.001]) and surgical (37.8%, 33.3%, 45.5% and 71.1% for periods 1, 2, 3 and 4, respectively [p < 0.001]) departments. Conclusion: Implementation of a PBM programme through a multidisciplinary clinical community approach increased the appropriateness of RBC transfusion in medical and surgical departments. Therefore, expanding publicity and PBM education to health care providers is important to maintain the appropriateness of blood transfusion. abstract_id: PUBMED:31110894 Clinical Decision Support for Pediatric Blood Product Prescriptions. Since the beginning of the 20th century, blood products have been used to effectively treat life-threatening conditions. Over time, we have come to appreciate the many benefits along with significant risks inherent to blood product transfusions. As such, recommendations for the safe and effective use of blood products have evolved over time. Current evidence supports the use of restrictive transfusion strategies that can avoid the risks of unnecessary transfusions. In spite of good evidence, there is a considerable amount of variability in transfusion practices across providers. Clinical decision support (CDS) is an effective tool capable of increasing adherence to evidence-based practices. CDS has been used successfully to improve adherence to transfusion guidelines.
Pediatric literature demonstrates strong evidence for the use of CDS to improve appropriateness of red blood cell and plasma transfusion utilization. Further studies in more diverse settings with more standardized reporting are needed to provide more clarity around the effectiveness of CDS in blood product prescriptions. abstract_id: PUBMED:31403929 Patient Blood Management: transfusion appropriateness in the post-operative period. Background: Within the context of Patient Blood Management (PBM) policy for the peri-operative period, the transfusion medicine unit of our institution adopted a series of strategies to support and enhance red blood cell (RBC) transfusion best practices. This study aimed to evaluate the appropriateness of RBC transfusion therapy in the post-operative period, before and after starting a multifactorial PBM policy. Materials And Methods: A 2-phase observational study was conducted on patients who underwent major surgery. The study was designed as follows: 3 months of preliminary audit, followed by multifactorial PBM policy, and a final audit. The policy comprised seminars, teaching lessons, periodic consultations and the insertion of Points of Care. RBC transfusion appropriateness was evaluated in both audits. Results: The preliminary audit, performed on 168 patients, showed that 37.7% of the patients were appropriately transfused. The final audit, performed on 205 patients, indicated a significant increase of RBC transfusion appropriateness to 65.4%. Discussion: In our experience, our multifactorial PBM policy improved the RBC transfusion appropriateness in the post-operative period. We believe that our multifactorial PBM policy, which comprises the insertion of Points of Care, supported the healthcare workers in the transfusion decision-making process. This enhancement of transfusion appropriateness implies clinical and managerial advantages, such as reduced transfusion-related risks, optimisation of health care resources, and reduction in costs. abstract_id: PUBMED:28288056 Ottawa Criteria for Appropriate Transfusions in Hepatectomy: Using the RAND/UCLA Appropriateness Method. Objective: Create practice guidelines for the appropriate use of red blood cell transfusions in hepatectomy. Background: Hepatectomy is associated with a high prevalence of transfusions. A transfusion can be life-saving, but can be associated with important adverse effects. Given the prevalence, the potential for benefit and harm, and the difficulty in conducting clinical trials, transfusion in hepatectomy is well-suited for a study of appropriateness. Methods: Using the RAND/UCLA appropriateness method, an international, multidisciplinary expert panel in hepatobiliary surgery, anesthesia, transfusion medicine, and critical care rated a series of 468 perioperative scenarios for transfusion appropriateness. Scenarios were rated individually, and again during an in-person, group-moderated session. Median scores and level of agreement were calculated to classify each scenario as appropriate, inappropriate, or uncertain. Results: Approximately 47.4% of scenarios were rated as appropriate for transfusion, 28.2% were inappropriate, and 24.4% were uncertain.
The key recommendations for intraoperative transfusion were (i) it is never inappropriate to transfuse for significant bleeding or ST segment changes; (ii) it is never inappropriate to transfuse for an intraoperative hemoglobin ≤75 g/L; and (iii) in the absence of significant bleeding or ST changes, transfusion for hemoglobin of ≥95 g/L is inappropriate, and transfusion for hemoglobin of ≥85 g/L requires strong justification. The key recommendations for postoperative transfusions were: (i) in a stable, asymptomatic patient, an appropriate transfusion trigger is 70 g/L (without coronary artery disease) or 80 g/L (with coronary artery disease) and (ii) it is appropriate to transfuse any patient for a hemoglobin of ≤75 g/L either immediately post-operative, or with a significant decrease from the previous day (>15 g/L). Conclusions: Based on best available evidence and expert opinion, criteria for appropriate perioperative red blood cell transfusions in hepatectomy were determined. abstract_id: PUBMED:36339516 Factors influencing clinicians' willingness to use an AI-based clinical decision support system. Background: Given the opportunities created by artificial intelligence (AI) based decision support systems in healthcare, the vital question is whether clinicians are willing to use this technology as an integral part of clinical workflow. Purpose: This study leverages validated questions to formulate an online survey and consequently explore cognitive human factors influencing clinicians' intention to use an AI-based Blood Utilization Calculator (BUC), an AI system embedded in the electronic health record that delivers data-driven personalized recommendations for the number of packed red blood cells to transfuse for a given patient. Method: A purposeful sampling strategy was used to exclusively include BUC users who are clinicians in a university hospital in Wisconsin. We recruited 119 BUC users who completed the entire survey. We leveraged structural equation modeling to capture the direct and indirect effects of "AI Perception" and "Expectancy" on clinicians' Intention to use the technology when mediated by "Perceived Risk". Results: The findings indicate a significant negative relationship concerning the direct impact of AI's perception on BUC Risk (β = -0.23, p < 0.001). Similarly, Expectancy had a significant negative effect on Risk (β = -0.49, p < 0.001). We also noted a significant negative impact of Risk on the Intent to use BUC (β = -0.34, p < 0.001). Regarding the indirect effect of Expectancy on the Intent to Use BUC, the findings show a significant positive impact mediated by Risk (β = 0.17, p = 0.004). The study noted a significant positive and indirect effect of AI Perception on the Intent to Use BUC when mediated by risk (β = 0.08, p = 0.027). Overall, this study demonstrated the influences of expectancy, perceived risk, and perception of AI on clinicians' intent to use BUC (an AI system). AI developers need to emphasize the benefits of AI technology, ensure ease of use (effort expectancy), clarify the system's potential (performance expectancy), and minimize the risk perceptions by improving the overall design. Conclusion: Identifying the factors that determine clinicians' intent to use AI-based decision support systems can help improve technology adoption and use in the healthcare domain. Enhanced and safe adoption of AI can uplift the overall care process and help standardize clinical decisions and procedures.
Improved adoption of AI in healthcare will help clinicians manage their everyday clinical workload and make critical decisions. abstract_id: PUBMED:31808902 Clinical decision support and improved blood use in patient blood management. Despite many years of published medical society guidelines for red blood cell (RBC) transfusion therapy, along with clinical trials that provide Level 1 evidence that restrictive transfusion practices can be used safely and are equivalent to transfusions given more liberally, annualized blood transfusion activity did not begin to decline in the United States until 2010. Adoption of electronic medical records has subsequently allowed implementation of clinical decision support (CDS): best practice alerts that can be initiated to improve the use of blood components. We describe our own institutional experience using a targeted CDS to promote restrictive blood transfusion practice and to improve RBC use. A 42% reduction in RBC transfusions was demonstrated at our institution from a baseline in 2008 through 2015, and the rate remained stable through 2018. Although the data cannot be used to infer causality, this decreased RBC use was accompanied by improved clinical outcomes. Answer: The appropriateness of red blood cell (RBC) use and the extent of overtransfusion are critical considerations in patient blood management. A study in Northern Ireland found that 23% of transfusions were considered inappropriate, often occurring where the lowest hemoglobin threshold for transfusion applied. Younger patients, those undergoing surgery, and those with lower comorbidity and higher hemoglobin values were most likely to receive an inappropriate transfusion. Among appropriately transfused patients, 19% were overtransfused, with females and those of lower weight being most at risk. The study suggests that some clinicians may be unaware of or reluctant to accept lower transfusion thresholds, indicating a need for improved guidance on appropriate posttransfusion hemoglobin levels (PUBMED:21470238). In pediatric settings, a study in Italy found that only 5.4% of blood transfusions were performed according to guidelines, while 66.7% were driven by clinicians' expertise, and 26.5% could be considered inappropriate. This highlights the importance of adhering to guidelines to ensure clinical and economic efficiency (PUBMED:36767066). The implementation of patient blood management (PBM) programs has been shown to increase the appropriateness of RBC transfusion. For example, a study at Korea University Anam Hospital reported significant increases in the appropriateness of RBC transfusion in both medical and surgical departments after the introduction of a PBM program (PUBMED:33368693). Clinical decision support (CDS) systems have been effective in improving adherence to transfusion guidelines and reducing variability in transfusion practices. Pediatric literature supports the use of CDS to enhance the appropriateness of RBC and plasma transfusion utilization (PUBMED:31110894). In the context of PBM, a multifactorial policy including education, consultations, and the use of point-of-care tools led to a significant increase in RBC transfusion appropriateness in the post-operative period at one institution (PUBMED:31403929). Overall, these studies underscore the importance of evidence-based guidelines, clinician education, and decision support tools in improving the appropriateness of RBC transfusion and reducing the incidence of overtransfusion.
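As an illustration of how a restrictive transfusion threshold can be encoded as a decision-support rule, here is a toy Python sketch based on the postoperative Ottawa criteria quoted above (PUBMED:28288056). The function and field names are hypothetical; this is not the published algorithm or any institution's production alert logic.

def postop_transfusion_flag(hb_g_per_l, has_cad,
                            drop_since_yesterday_g_per_l=0.0,
                            immediately_postop=False):
    # Appropriate for any patient at Hb <= 75 g/L immediately post-op,
    # or with a fall of more than 15 g/L from the previous day.
    if hb_g_per_l <= 75 and (immediately_postop or drop_since_yesterday_g_per_l > 15):
        return "appropriate"
    # Stable, asymptomatic trigger: 70 g/L without CAD, 80 g/L with CAD.
    trigger = 80 if has_cad else 70
    return "appropriate" if hb_g_per_l <= trigger else "review: above restrictive trigger"

print(postop_transfusion_flag(68, has_cad=False))                          # appropriate
print(postop_transfusion_flag(85, has_cad=True))                           # review
print(postop_transfusion_flag(74, has_cad=True, immediately_postop=True))  # appropriate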
Instruction: Is there an increased rate of anencephaly in twins? Abstracts: abstract_id: PUBMED:16231303 Is there an increased rate of anencephaly in twins? Background: The Israeli Ministry of Health reported an increased rate of twin pregnancies among all cases locally diagnosed as having open neural tube defects. The current study aimed to evaluate whether the etiology of this phenomenon could be attributed either to the twinning or to the mode of conception. Methods: Women admitted to our hospital between January 1997 and July 2004 for termination of pregnancy because of severe fetal abnormality were enrolled in this retrospective case series study. They were further subdivided according to mode of conception (spontaneous, in vitro fertilization (IVF) or intracytoplasmic sperm injection (IVF-ICSI) pregnancies). Results: Three hundred and eighty consecutive pregnancies, of which 340 (89%) were singletons, participated in our study. Anencephaly was diagnosed in 26 cases: 19 singletons and 7 twins. In the entire twin population, they were all dichorionic twins and only one co-twin was affected. Five of the twins were conceived by IVF-ICSI. All the anencephalic IVF-ICSI twins had normal karyotypes. All IVF-ICSI study women had taken folic acid 400 mcg/day 3 months before conception and throughout the first trimester of pregnancy. In order to find out the cause of the high rate of anencephaly found in IVF-ICSI pregnancies (33.3%), either the twinning or the IVF-ICSI process, a logistic regression analysis was used. A significant correlation was found only between anencephaly and twinning (p = 0.001, CI = 1.86-12.63), with a risk ratio of 4.85. Conclusions: Our case series data suggest a comparatively higher rate of anencephaly in IVF-ICSI pregnancy secondary to twinning and not because of the assisted reproductive technology. It is suggested that larger epidemiologic studies be conducted to validate our preliminary results. abstract_id: PUBMED:3912421 Abnormalities of the neural tube in twins. We have studied neural tube malformations in twins in order to investigate the role of genetic and environmental factors. 12 pairs of twins in which one child had a neural tube defect were studied in Brittany, which is a Celtic country. We found no evidential agreement about the role each factor played. On the other hand there was an excess of twins in the siblings of those with neural tube defects, especially in the siblings of the mothers. There were more dizygotic twin mothers. Analysing the literature has made it possible for us to find a level of agreement of 7.5% for monozygotic twins and 4.6% for dizygotic twins. This last figure corresponds to the recurrence rate found after one case. The aetiological theories are reviewed. Among factors bringing about neural tube defects would seem to be the microenvironment of the uterus and the delay between ovulation and fertilization and implantation of the fertilized egg. Nutrition of the embryo and possible vitamin deficiencies could explain this interaction between the mother and the fetus. If there is a genetic factor, it is more likely to be maternal than fetal. abstract_id: PUBMED:1821510 Birth defects in twins: study in a Spanish population. The risk for specific defects among twins compared to singletons was studied using data collected by the Spanish Collaborative Study of Congenital Malformations (ECEMC). A total of 136 twins had a major and/or minor congenital defect.
The overall rate of congenital defects in twins (2.37%) did not deviate significantly from the rate in singletons (2.21%). Like-sex (LS) and male-male (MM) twin pairs had a slightly higher rate of birth defects than unlike-sex (US) and female-female (FF) pairs, respectively. Defects of the central nervous system, cardiovascular system and genitourinary system were significantly more frequent in LS twins than in singletons, with relative risks of 2.8, 2.5 and 1.6, respectively. No significantly increased risk was found among US twins. Among defects of the central nervous system, the rates of anencephaly, encephalocele and hydrocephaly were significantly higher in total and LS twins; however, no significantly increased risk for spina bifida was observed when compared to singletons. MM twins were also 1.9 times more likely to have hypospadias, but the risk among males of male-female (FM) pairs was decreased. abstract_id: PUBMED:565612 Recurrence rates in sibships and concordance rates in twins for anencephaly. It is suggested that concordance rates in twins for anencephaly are higher than can be explained by the high recurrence rates within sibships, and the hypothesized higher rate of anencephaly in MZ pairs. The evidence is stronger in the case of same-sexed pairs, but points in the same direction for opposite-sexed pairs. abstract_id: PUBMED:1130454 Discordant severe cranial defects in monozygous twins. Three cases of monozygous male twins having exencephaly, anencephaly, and acephaly, with their cotwins being normal, are reported. Monozygosity of the twins was demonstrated by pathologic examination and in one case corroborated by blood group antigen testing. A constellation of abnormalities including adrenal hypoplasia in one of a pair of monozygous male twins with chorionic vascular anastomoses, together with the unusual racial background in each case, are in support of the possibility that environmental factors may play an etiologic role. abstract_id: PUBMED:2609905 Congenital anomalies in twins in Northern Ireland. II: Neural tube defects, 1974-1979. In a large population-based study in Northern Ireland during the period 1974-1979, the rate of anencephalus in twins (9.1/10,000) was found to be less than that in singletons (24.3/10,000). This finding is in contrast with most other studies and the possibility of underascertainment of twin cases is considered, but it is concluded that chance is the likeliest explanation. The rate of spina bifida in twins (36.4/10,000) was similar to that in singletons (31.9/10,000). All of the twins with anencephalus were female and from pairs of like sex. Rates of spina bifida in twins from pairs of the two sex types were similar but, unusually, there was a male preponderance. As in previous studies, the great majority of twins with NTDs had unaffected cotwins. abstract_id: PUBMED:9099654 Asymmetric conjoined twins. Despite recent advances in diagnosis, particularly organ-imaging, and therapeutic options, the management of conjoined twins is still very challenging. We report conjoined twins attached "end-on" at the lumbo-sacral level and describe the anatomical findings, methods of investigation, and management. abstract_id: PUBMED:779911 Anencephalus, Spina Bifida, twins, and teratoma.
The twin pairs with spina bifida and/or anencephalus collected from the literature by Rogers and Weatherall (1976) form the basis for an argument that the apparent rarity of dizygous twins concordant for these malformations may be due to the breakdown of the interamniotic partition and a subsequent fetus-fetus interaction. It is suggested that this may lead to complete or partial destruction of one twin. When cells survive they may form teratoma or patches of anomalous skin cover. The hypothesis that monozygous twins concordant for these defects may form double monsters is re-stated. The present hypothesis predicts that the incidence of pineal and intraspinal teratoma will vary in time and place with anencephalus and spina bifida, and that the scalp type hairs found over or around spina bifida may prove, in male infants, to have female chromosomes. abstract_id: PUBMED:7189785 Concordance rates in twins for anencephaly. New estimates are offered of the concordance rates in twins for anencephaly. In MZ pairs, the percentage which are concordant is about 7%. This is significantly higher (about three to four times) than the value expected on the assumption that concordance is accounted for by raised recurrence rates within sibships. In DZ pairs, the data are less numerous and the percentage concordant (about 2% to 5%) does not so far throw much suspicion on this hypothesis in regard to them. abstract_id: PUBMED:2045744 Congenital malformations in twins in England and Wales. Study Objective: The aim was to compare congenital malformation rates in twin births with those in singleton births. Design: The study was an analysis of malformation rates in singleton and twin births using data from the Office of Population Censuses and Survey's Congenital Malformation Notification Scheme. Setting: This was a national survey of births in England and Wales in 1979-1980 and 1982-1985. Participants: The data comprised 95,510 reported malformations in 3.7 million singleton births, and 1925 reported malformations in 76,000 twin births. Measurements And Main Results: Twin malformation ratios were calculated using maternal age specific singleton rates as standard. In comparison with singleton births, twins have significantly higher reported frequencies of indeterminate sex and pseudohermaphroditism; anencephaly; patent ductus arteriosus; exomphalos; hydrocephalus; anomalies of the umbilical vessels; atresia or stenosis of the large intestine or anus; and tracheo-oesophageal fistula, atresia or stenosis. Twins also have significant reported deficits of polydactyly and syndactyly; congenital dislocation of the hip; anomalies of the tongue, branchial cleft and auricular sinus; post-anal dimple; and Down's syndrome. Conclusions: Several major malformations were significantly more common in twins than in singletons. The excess of indeterminate sex and pseudohermaphroditism has not been described before and may be analogous to freemartinism in cattle. Most of the conditions less common in twins are minor, and the reported deficits may be due to underascertainment of the less serious conditions in twins. Down's syndrome is an exception, and the deficit may well be real. Answer: The data from the provided abstracts suggest that there is an increased rate of anencephaly in twins compared to singletons.
Specifically, a study in an Israeli population found a higher rate of anencephaly in IVF-ICSI twin pregnancies, suggesting that the increased rate was related to twinning rather than the assisted reproductive technology itself (PUBMED:16231303). Another study in a Spanish population found that defects of the central nervous system, including anencephaly, were significantly more frequent in like-sex twins than in singletons (PUBMED:1821510). Additionally, a study in Northern Ireland reported a lower rate of anencephaly in twins compared to singletons, but this was considered possibly due to underascertainment or chance (PUBMED:2609905). Other studies have also indicated a genetic component, with a higher concordance rate for anencephaly in monozygotic twins compared to dizygotic twins, suggesting a possible genetic factor that is more likely maternal than fetal (PUBMED:3912421, PUBMED:7189785). However, the evidence is not entirely consistent across all studies, and some have reported discordant cases of severe cranial defects in monozygotic twins (PUBMED:1130454). Overall, while there is evidence to suggest an increased rate of anencephaly in twins, the exact cause of this increase is not fully understood and may involve a combination of genetic and environmental factors. It is recommended that larger epidemiologic studies be conducted to validate these preliminary results and to further investigate the etiology of the increased rate of anencephaly in twins (PUBMED:16231303).
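The crude twin-versus-singleton comparison in the Israeli series can be recomputed from the counts quoted in the abstract (anencephaly in 7 of 40 twin pregnancies versus 19 of 340 singleton pregnancies). A minimal Python sketch; note that this unadjusted risk ratio differs from the reported 4.85, which came from an adjusted logistic regression:

import math

a, n_twin = 7, 40      # anencephaly cases / twin pregnancies (from the abstract)
b, n_single = 19, 340  # anencephaly cases / singleton pregnancies

rr = (a / n_twin) / (b / n_single)
se = math.sqrt(1 / a - 1 / n_twin + 1 / b - 1 / n_single)  # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"crude RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")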
Instruction: Is neonatal toxic erythema a risk factor in the development of allergy in childhood? Abstracts: abstract_id: PUBMED:9586295 Is neonatal toxic erythema a risk factor in the development of allergy in childhood? Objective: In relatively few cases have perinatal factors been included as risk factors for allergy development. Delivery has not been considered as a possible influential factor in allergy development. To identify risk factors in allergy development, we have included erythema toxicum neonatorum (ETN). Patients And Methods: We have prospectively studied 356 newborns who were followed for a period of two years. Characteristics of the delivery, such as the pregnancy, instrumental delivery, circular cord, ETN, number of vesicles, day of presentation, season of birth, maternal and cord blood IgE levels and cord blood eosinophils, and the development of allergies during the two-year follow-up period were included. Results: ETN was seen in 25.3% of the children. The histopathology study of vesicles showed eosinophils. There was a significant difference between males and females (61.9% versus 38.1%, respectively). Cord blood IgE levels were not related to ETN, except in situations of allergy from 0.9 IU in cord blood or from 20 IU at six months of age (p < 0.05). Conclusions: ETN is related to delivery characteristics: instrumental delivery, cord circulars, amniotic alteration, or a fall in arterial pH < 7.24. In 84.2% of allergy manifestations during the first two years of life, ETN or a low pH was seen at birth, with atopic dermatitis being the manifestation most frequently preceded by ETN (85.7%). abstract_id: PUBMED:26288485 Oxcarbazepine induced toxic epidermal necrolysis - a rare case report. Carbamazepine is well known to cause Stevens-Johnson syndrome and toxic epidermal necrolysis (TEN). Oxcarbazepine, a 10-keto analog of carbamazepine, is an anticholinergic, anticonvulsant and mood stabilizing drug, used primarily in the treatment of epilepsy. Its efficacy is similar to carbamazepine, but allergic reactions and enzyme induction are lower. We describe a case of oxcarbazepine-induced TEN presenting with an erythematous, ulcerative maculopapular rash. abstract_id: PUBMED:21339416 Progression of toxic epidermal necrolysis after tanning bed exposure. Background: In addition to recreational tanning bed use, UV radiation exposures are sometimes sought to self-treat skin conditions. The ability of tanning bed radiation exposure to trigger toxic epidermal necrolysis has not been reported. Observations: A young woman attempted to treat a self-limiting drug hypersensitivity reaction via tanning bed radiation exposure, which resulted in a systemic toxic epidermal necrolysis-like reaction. Studies with cultured keratinocytes and an epithelial cell line reveal that UV-A radiation can synergize with other stimuli such as phorbol esters or interleukin 1 to produce large amounts of tumor necrosis factor, providing a potential mechanism for this exaggerated reaction. Conclusion: In addition to inducing photodamage and skin cancer, tanning bed radiation exposure can trigger a toxic epidermal necrolysis-like reaction, possibly via the exaggerated production of keratinocyte cytokines such as tumor necrosis factor. abstract_id: PUBMED:37605396 Stevens-Johnson Syndrome and Toxic Epidermal Necrolysis: An Overview of Diagnosis, Therapy Options and Prognosis of Patients.
Both Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are generally medication-induced pathological conditions that mostly affect the epidermis and mucous membranes. Nearly 1 to 2 patients per 1,000,000 population are affected annually with SJS and TEN, and sometimes these maladies can cause serious life-threatening events. The reported death rates for SJS range from 1 to 5%, and 25 to 35% for TEN. The mortality risk may even be higher among elderly patients, especially in those who are affected by a significant amount of epidermal detachment. More than 50% of TEN patients who survive the illness may experience long-term lower quality of life and reduced life expectancy. The clinical and histopathological conditions of SJS and TEN are characterized by mucocutaneous discomfort, haemorrhagic erosions, erythema, and occasionally severe epidermal separation that can turn into ulcerative patches and dermal necrosis. The relative difference between SJS and TEN is the degree of ulcerative skin detachment, making them two extremes of a spectrum of severe cutaneous adverse drug-induced reactions (cADRs). In the majority of cases, serious drug-related hypersensitivities are considered the main cause of SJS & TEN; however, herpes simplex virus and Mycoplasma pneumoniae infections may also produce similar clinical conditions. The aetiology of a lesser number of cases and their underlying causative factors remain unknown. Among the drugs with a 'greater likelihood' of causing TEN & SJS are carbamazepine (CBZ), trimethoprim-sulfamethoxazole, phenytoin, aminopenicillins, allopurinol, cephalosporins, sulphonamides, antibiotics, quinolones, phenobarbital, and NSAIDs of the oxicam variety. There is also a strong genetic link between the occurrence of SJS and TEN in the Han Chinese population. Such genetic association is based on the human leukocyte antigen (HLA-B*1502) and the co-administration of carbamazepine. The diagnosis of SJS is made mostly on the gross observations of clinical symptoms, and confirmed by the histopathological examination of dermal biopsies of the patients. The differential diagnoses consist of the exclusion of Pemphigus vulgaris, bullous pemphigoid, linear IgA dermatosis, paraneoplastic pemphigus, disseminated fixed bullous drug eruption, acute generalized exanthematous pustulosis (AGEP), and staphylococcal scalded skin syndrome (SSSS). The management of SJS & TEN is rather difficult and complicated, and there is sometimes a high risk of mortality in seriously afflicted patients. Urgent medical attention is needed for early diagnosis, estimation of the SCORTEN prognosis, identification and discontinuation of the causative agent, as well as high-dose injectable Ig therapeutic interventions along with specialized supportive care. Historical aspects, aetiology, mechanisms, and incidences of SJS and TEN are discussed. An update on the genetic occurrence of these medication-related hypersensitive ailments as well as different therapy options and management of patients is also provided.
It has been established that sensitizing factors (aggravated allergological anamnesis, food allergens, late toxicosis of pregnancy) and high content of estrogen hormones in the newborn are implicated in the genesis of the generalized TE pattern. The study of the catamnesis of 84 children of the first three years of life with a history of TE of the newborn revealed that the children appeared highly susceptible to respiratory diseases and showed early formation of the allergic manifestations. It is recommended that neonates with marked TE should be attributed to the second health group and be administered the treatment and prophylactic measures beginning from the stay at the maternity home. abstract_id: PUBMED:10663024 Stevens-Johnson syndrome with transition to toxic epidermal necrolysis after carbamazepine administration, heroin and alcohol abuse. A 28-year-old patient developed a severe bullous exanthem and enanthem combined with hepatitis, fever and blood count abnormalities after taking carbamazepine and consumption of heroin and alcohol. After discontinuing carbamazepine, prednisolone was given over a five day period accompanied by intravenous fluid and electrolyte substitution and local therapy, which led to improvement. Severe bullous skin reactions nowadays are classified into erythema exsudativum multiforme majus (EEMM), Stevens-Johnson syndrome (SJS), overlap Stevens-Johnson syndrome-toxic epidermal necrolysis (SJS/TEN), TEN with maculae and TEN on large erythema, and they are most often caused by antibiotics and anticonvulsant drugs. Heroin and alcohol abuse alters host immunity which subsequently may increase susceptibility to allergic reactions. There is a high (40%) mortality rate for TEN, and patients with organ involvement are at increased risk. abstract_id: PUBMED:17166597 Prevalence and risk factors for allergic rhinitis in primary school children. Objective: Allergic rhinitis is a common chronic illness of childhood. The aim of the study was to evaluate the prevalence and risk factors of allergic rhinitis in 6-12-year-old schoolchildren in Istanbul. Methods: A total of 2500 children aged between 6 and 12 years in six randomly selected primary schools of Istanbul were surveyed by using the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire between April and May 2004. Results: Of them, 2387 (1185 M/1202 F) questionnaires were appropriately completed by the parents, with an overall response of 95.4%. The prevalence of physician-diagnosed allergic rhinitis was 7.9% (n=189). A family history of atopy (aOR=1.30, 95% CI=1.00-1.68), frequent respiratory tract infection (aOR=1.36, 95% CI=1.08-1.70) and sinusitis (aOR=2.29, 95% CI=1.64-3.19), antibiotic use in the first year of life (aOR=1.26, 95% CI=1.01-1.57), cat at home in the first year of life (aOR=2.21, 95% CI=1.36-3.61), dampness at home (aOR=1.31, 95% CI=1.04-1.65) and perianal redness (aOR=1.26, 95% CI=1.01-1.57) were significant for increased risk for allergic rhinitis. Frequent consumption of fruits and vegetables was inversely, and frequent consumption of lollipops and candies was positively, associated with allergic rhinitis symptoms. Conclusion: Our study reconfirmed that family history of atopy, frequent respiratory tract infections, antibiotics given in the first year of life, cat at home in the first year of life, dampness at home, perianal redness and dietary habits are important independent risk factors for AR.
Researchers worldwide should focus on these factors and try to develop policies for early intervention and for primary and secondary prevention of allergic diseases. abstract_id: PUBMED:32796633 Osimertinib-Associated Toxic Epidermal Necrolysis in a Lung Cancer Patient Harboring an EGFR Mutation-A Case Report and a Review of the Literature. Toxic epidermal necrolysis (TEN) and Stevens-Johnson syndrome (SJS) are life-threatening dermatologic adverse events in the same category, caused by a delayed-type drug hypersensitivity reaction. Although skin toxicity is common during treatment with epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs), osimertinib-associated TEN is quite rare-thus far, only one report has been published from China. We report a case of an 80-year-old Japanese woman with lung adenocarcinoma harboring an EGFR-sensitizing mutation who was treated with osimertinib as the first-line treatment. Forty-six days after osimertinib induction, diffuse erythematous rash rapidly spread over the patient's trunk along with vesicles and purpuric macules; furthermore, she developed targetoid erythema on the face. Despite osimertinib discontinuation and corticosteroid treatment, diffuse erythema with Nikolsky's sign, general epidermal detachment, erosion and loose blisters developed over her entire body including the face. Based on her symptoms, TEN was diagnosed and thus, intravenous immunoglobulin was immediately administered for 4 days. The treatment ameliorated TEN-associated skin toxicity and caused epithelialization. Reports on osimertinib-associated SJS/TEN are scarce and only one report each on SJS and TEN from China is available. This is the first report of osimertinib-associated TEN from Japan. Cases of EGFR-TKI-associated SJS/TEN have been reported predominantly from Asian countries, suggesting ethnicity and genetic linkage play a role in the underlying mechanism. abstract_id: PUBMED:24827945 Prevalence of allergic rhinitis and risk factors in 6- to 7-year-old children in İstanbul, Turkey. The aim of this study was to evaluate the prevalence of allergic rhinitis and its relationship with various risk factors in 6-7-year-old children living in İstanbul. A total of 11,483 children aged 6-7 years in 75 primary schools from all districts of İstanbul were surveyed. Prevalence of symptoms of allergic rhinitis was assessed using a translated version of the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire. Of them, 9,875 (50.7% M, 49.3% F) questionnaires were appropriately completed by the parents. The prevalence rates of lifetime, current and physician-diagnosed allergic rhinitis were 44.3%, 29.2% and 8.1%, respectively. There was nearly a two-fold variation in the prevalence rates of rhinitis between the districts of İstanbul. Frequent paracetamol and antibiotic use in the first year of life, history of frequent upper respiratory tract infections, adenotonsillectomy, breastfeeding less than six months, dog at home or perianal redness in the first year of life, and frequent trucks passing near the home were independent risk factors. abstract_id: PUBMED:9537008 The etiology of prolonged erythema after chemical peel. Background: As the number and methods of skin resurfacing procedures are increasing, a small number of patients develop a prolonged inflammation during the postoperative period. Objective: We attempted to correlate risk factors for the development of prolonged postpeel erythema (PPPE) and inflammation.
A treatment regimen will be described to eliminate permanent skin changes. Methods: A retrospective chart review is presented to define and correlate risk factors for the development of PPPE, and a treatment protocol is described. The setting is a large multisurgeon aesthetic center. Two hundred thirty-six consecutive chemical phenol peels on 196 patients over a 2-year period were reviewed. Results: Eleven percent of patients developed PPPE. Allergy to tape was the only factor significantly correlated with PPPE. All patients had complete resolution of skin changes with appropriate treatment. Conclusion: A small population of patients undergoing skin resurfacing procedures will develop prolonged erythema. No major risk factor could be correlated with its development. A treatment plan was devised and was successful in all cases. Answer: Yes, neonatal toxic erythema (ETN) has been identified as a risk factor in the development of allergy in childhood. A study that prospectively followed 356 newborns for a period of two years found that ETN was related to delivery characteristics and was observed in 25.3% of the children. The study also noted that in 84.2% of allergy manifestations during the first two years of life, ETN or a low pH was seen at birth, with atopic dermatitis being the condition most associated with ETN (85.7%) (PUBMED:9586295). Additionally, another study examining the etiology of toxic erythema in newborn infants and its effects on the health status of young children found that children with a history of toxic erythema of the newborn appeared highly susceptible to respiratory diseases and showed early development of allergic manifestations (PUBMED:2762049). These findings suggest that neonatal toxic erythema may indeed be a risk factor for the development of allergies in childhood.
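The allergic rhinitis studies above report their risk estimates as adjusted odds ratios (aOR) with 95% confidence intervals. As a minimal, illustrative sketch of where such numbers come from, the Python snippet below computes a crude (unadjusted) odds ratio and its Wald 95% confidence interval from a hypothetical 2x2 table; the counts are invented for demonstration, and the published aORs come from multivariable logistic models that additionally adjust for covariates.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Crude odds ratio with a Wald 95% confidence interval from a 2x2 table."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of ln(OR) is the square root of the sum of reciprocal cell counts.
    se_log_or = math.sqrt(1/exposed_cases + 1/exposed_controls
                          + 1/unexposed_cases + 1/unexposed_controls)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: children with vs. without a cat at home in the first
# year of life, cross-tabulated against physician-diagnosed allergic rhinitis.
or_, (lo, hi) = odds_ratio_ci(exposed_cases=20, exposed_controls=80,
                              unexposed_cases=169, unexposed_controls=2118)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

The output is read the same way as the abstracts' figures: an interval that excludes 1, such as the aOR of 2.21 (95% CI 1.36-3.61) for a cat at home in PUBMED:17166597, indicates a statistically significant association.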
Instruction: Undersized annuloplasty for functional mitral regurgitation: is it responsible for clinically relevant mitral stenosis during exercise? Abstracts: abstract_id: PUBMED:22953670 Undersized annuloplasty for functional mitral regurgitation: is it responsible for clinically relevant mitral stenosis during exercise? Background And Aim Of The Study: The study aim was to assess if an undersized mitral annuloplasty for functional mitral regurgitation (FMR) in dilated cardiomyopathy can cause clinically relevant mitral stenosis during exercise. Methods: Both rest and stress echocardiography were performed in 12 patients who underwent an undersized ring annuloplasty for FMR in dilated cardiomyopathy. The mean ring size was 27 ± 1.3 mm. All patients were in NYHA functional classes I-II, were in stable sinus rhythm, and had no significant residual mitral regurgitation (grade ≤ 2/4). Results: At peak exercise (mean 81 ± 12 W), the main cardiac performance indices were significantly improved, including systolic blood pressure (121 ± 5.6 versus 169 ± 14 mmHg, p < 0.001), stroke volume (63 ± 15 versus 77 ± 14 ml, p < 0.001), left ventricular ejection fraction (43 ± 9% versus 47 ± 9%, p = 0.001), and systolic right ventricular function (pulsed tissue Doppler index peak systolic velocity: 8.6 ± 1.7 versus 11.1 ± 3.2 cm/s, p = 0.004). A mild increase in planimetric mitral valve area was observed at peak exercise (2.12 ± 0.4 versus 2.17 ± 0.3 cm², p = 0.05). Although the transmitral mean gradient increased from 3.2 ± 1.2 to 6.3 ± 2.3 mmHg (p < 0.0001), the systolic pulmonary artery pressure did not change significantly (27 ± 2.8 versus 30.1 ± 6.4 mmHg, p = 0.3), thus revealing preserved cardiac adaptation to exercise. Conclusion: In these preliminary data, postoperative clinically relevant mitral stenosis was not observed in patients who underwent mitral repair for FMR. Stress echocardiography represents a valuable tool to assess an appropriate cardiac response to exercise and to detect significant exercise-induced pulmonary hypertension after undersized annuloplasty ring surgery. abstract_id: PUBMED:26228598 Restrictive Mitral Annuloplasty Does Not Limit Exercise Capacity. Background: Restrictive mitral annuloplasty is the preferred method of treating secondary mitral regurgitation. The use of small annuloplasty rings to reduce the high recurrence rates may result in mitral stenosis. Methods: Thirty-six patients who underwent restrictive mitral annuloplasty with a Carpentier-Edwards Classic size 26 ring underwent exercise echocardiography and ergospirometry. Resting catecholamines and N-terminal pro brain natriuretic peptide (NT-proBNP) levels were measured. Results: At the time of the study, the median time from operation was 16.6 months (interquartile range, 8.5 to 43.3 months). Left ventricular end-systolic volume index (LVESVI) was 67 mL/m² (interquartile range, 25 to 92 mL/m²), and ejection fraction (EF) was 38.8% (interquartile range, 28.3% to 59.0%). Mitral gradients were higher at the leaflet tips than at the annular level. Continuous wave (CW) Doppler gradients at rest were 3.4 mmHg (interquartile range, 2.4 to 4.9 mmHg) mean and 9.5 mmHg (interquartile range, 7.0 to 14.7 mmHg) maximal. On exertion, they increased to 6.8 mmHg (interquartile range, 5.4 to 8.8 mmHg) (p = 0.001) and 19.7 mmHg (interquartile range, 12.8 to 23.3 mmHg) (p = 0.001), respectively.
Maximal VO2 was 18.2 mL/kg/min (interquartile range, 16.3 to 21.5 mL/kg/min), VE/VCO2 slope was 31.1 (interquartile range, 26 to 34). Epinephrine level was 0.024 ng/mL (interquartile range, 0.0098 to 0.043 ng/mL), norepinephrine was 0.61 ng/mL (interquartile range, 0.41 to 0.95 ng/mL), and NT-proBNP was 303 pg/mL (interquartile range, 155 to 553 pg/mL). Maximal VO2 negatively correlated with resting norepinephrine level (r = -0.50, p = 0.003). VE/VCO2 slope positively correlated with NT-proBNP (r = 0.36, p = 0.004) and epinephrine (r = 0.36, p = 0.04) levels and with LV volumes (r = 0.51, p = 0.006) and was negatively correlated with LVEF (r = -0.52, p = 0.004). Neither maximal VO2 nor VE/VCO2 slope correlated with the highest mean (r = 0.24, p = 0.2, and r = -0.20, p = 0.3, respectively) and maximal (r = 0.13, p = 0.5, r = -0.20, p = 0.3, respectively) mitral gradients on exertion. Conclusions: Restrictive mitral annuloplasty for secondary mitral regurgitation does result in a degree of mitral stenosis; however, primary heart disease seems more important for patients' exercise performance than the mitral stenosis resulting from using an undersized ring. abstract_id: PUBMED:24199762 Restrictive mitral valve annuloplasty versus mitral valve replacement for functional ischemic mitral regurgitation: an exercise echocardiographic study. Objective: Mitral valve annuloplasty and mitral valve replacement are common strategies for the management of functional ischemic mitral regurgitation with ischemic cardiomyopathy. However, mitral valve annuloplasty may create some degree of functional mitral stenosis. The purpose of this study was to compare the mitral valve hemodynamics in patients with functional ischemic mitral regurgitation undergoing mitral valve annuloplasty or mitral valve replacement, using exercise echocardiography. Methods: We performed resting and exercise echocardiography in 70 patients matched for indexed effective orifice area, systolic pulmonary arterial pressure, and left ventricular ejection fraction after mitral valve annuloplasty or mitral valve replacement with coronary artery bypass grafting. Results: There was no significant difference between the 2 groups regarding baseline demographic and clinical data. Exercise systolic pulmonary arterial pressure was higher in the mitral valve annuloplasty group compared with the mitral valve replacement group (from 36.3 ± 8.1 mm Hg to 55 ± 12 mm Hg, vs mitral valve replacement: 33 ± 6 mm Hg to 42 ± 6.2 mm Hg, P = .0001). Exercise-induced improvement in effective orifice area and indexed effective orifice area was better in the mitral valve replacement group (mitral valve replacement: +0.23 ± 0.04 vs mitral valve annuloplasty: -0.1 ± 0.09 cm², P = .001, for effective orifice area; mitral valve replacement: +0.14 ± 0.03 vs mitral valve annuloplasty: -0.04 ± 0.07 cm²/m², P = .03, for indexed effective orifice area). Exercise indexed effective orifice area was correlated with exercise systolic pulmonary arterial pressure (r = -0.45; P = .01). In a multivariable analysis, mitral valve annuloplasty, postoperative indexed effective orifice area, and resting mitral peak gradients were independent predictors of elevated systolic pulmonary arterial pressure during exercise. Conclusions: In patients with functional ischemic mitral regurgitation, mitral valve annuloplasty may cause functional mitral stenosis, especially during exercise.
Mitral valve annuloplasty was associated with poor exercise mitral hemodynamic performance, lack of mitral valve opening reserve, and markedly elevated postoperative exercise systolic pulmonary arterial pressure compared with mitral valve replacement. abstract_id: PUBMED:24332186 Functional impact of transmitral gradients at rest and during exercise after restrictive annuloplasty for ischemic mitral regurgitation. Objectives: Restrictive mitral valve annuloplasty combined with coronary artery bypass grafting is the treatment of choice for ischemic mitral regurgitation. Postoperative functional mitral stenosis and its potential impact on functional capacity remain the object of debate. The aim of this study was to assess functional and hemodynamic outcome at rest and during exercise in a population with ischemic mitral regurgitation after a standardized restrictive mitral valve annuloplasty. Methods: A total of 23 patients with ischemic mitral regurgitation who were previously treated with coronary artery bypass grafting and restrictive mitral valve annuloplasty underwent a semi-supine (bicycle) exercise test with Doppler echocardiography and ergospirometry. The surgical technique was identical in all patients, using a complete semi-rigid ring downsized by 2 sizes after measuring the height of the anterior mitral leaflet, to achieve a coaptation length of at least 8 mm. Results: At a mean follow-up of 28 ± 15 months, mean transmitral gradients at rest and maximal exercise were 4.4 ± 1.8 mm Hg and 8.2 ± 4.2 mm Hg, respectively (P < .001). Transmitral gradients did not correlate with exercise capacity (maximal oxygen uptake) or pulmonary artery pressures. Patients with a resting mean gradient of 5 mm Hg or greater (n = 9) reached a significantly higher maximal oxygen uptake; however, they had a better ejection fraction and cardiac output at rest and reached a higher cardiac output at peak exercise. Conclusions: Transmitral gradients after restrictive mitral valve annuloplasty for ischemic mitral regurgitation did not correlate with functional capacity as measured by maximal oxygen uptake during semi-supine bicycle testing. Functional capacity and transmitral gradients are determined not only by the severity of mitral stenosis but also by hemodynamic factors, such as ejection fraction and cardiac output. Transmitral gradients should be interpreted with respect to patient hemodynamics and not necessarily be considered as detrimental for functional capacity. abstract_id: PUBMED:27225485 Interrupted commissural band annuloplasty prevents mitral stenosis. Background: Mitral annuloplasty is an important component of the treatment of degenerative mitral valve disease. However, postoperative echocardiography reveals elevated mitral gradients in some patients. We developed a technique that we termed interrupted commissural band annuloplasty (iCBA), which does not shorten either the anterior or posterior annulus and is not associated with the development of a mitral gradient. We compared the echocardiographic characteristics of patients treated using this method versus Cosgrove ring (COS) placement, both at rest and during exercise. Methods: iCBA features placement of three sutures in the commissures using two bands and shortens the commissural annular length by 60%. We used this method to treat 63 patients and placed Cosgrove bands in 58. Of all patients, 48 who underwent iCBA and 34 with COSs passed the exercise echocardiographic test.
Results: The maximal transmitral pressures at rest in the iCBA and Cosgrove groups were 8.04 ± 0.74 and 11.30 ± 0.88 mmHg (P = 0.0029), respectively, and the mean transmitral pressures at rest were 2.46 ± 0.74 and 3.61 ± 0.32 mmHg (P = 0.0037), respectively. The maximal transmitral pressures during exercise were 11.79 ± 0.97 and 18.37 ± 1.16 mmHg (P < 0.0001), and the mean transmitral pressures during exercise were 4.95 ± 0.45 and 7.76 ± 0.53 mmHg (P < 0.0001). Conclusions: iCBA prevents postoperative mitral stenosis both at rest and, importantly, during exercise. abstract_id: PUBMED:29114917 Pulmonary arterial pressure detects functional mitral stenosis after annuloplasty for primary mitral regurgitation: An exercise stress echocardiographic study. Introduction: Restrictive mitral valve annuloplasty (RMA) is the treatment of choice for degenerative mitral regurgitation (MR), but postoperative functional mitral stenosis remains a matter of debate. In this study, we sought to determine the impact of mitral stenosis on the functional capacity of patients. Methods: In a cross-sectional study, 32 patients with degenerative MR who underwent RMA using a complete ring were evaluated. All participants performed a treadmill exercise test and underwent echocardiographic examinations before and after exercise. Results: The patients' mean age was 50.1 ± 12.5 years. After a mean follow-up of 14.1 ± 5.9 months (6-32 months), the proportions of patients with a mitral valve peak gradient >7.5 mm Hg, a mitral valve mean gradient >3 mm Hg, and a pulmonary arterial pressure (PAP) ≥25 mm Hg at rest were 50%, 40.6%, and 62.5%, respectively. Thirteen patients (40.6%) had an incomplete treadmill exercise test. All hemodynamic parameters were higher at peak exercise compared with resting levels (all P < .05). The PAP at rest and at peak exercise, as well as the peak transmitral gradient at peak exercise, were higher in patients with an incomplete exercise test compared with a complete test (all P < .05). The PAP at rest (a sensitivity and a specificity of 84.6% and 52.6%, respectively; area under the curve [AUC] = .755) and at peak exercise (a sensitivity and a specificity of 100% and 47.4%, respectively; AUC = .755) discriminated an incomplete exercise test. Conclusion: RMA for degenerative MR was associated with functional stenosis, and the PAP at rest and at peak exercise discriminated low exercise capacity. abstract_id: PUBMED:31773184 Percutaneous mitral valve repair in recurrent severe mitral valve regurgitation after mitral annuloplasty : MitraClip-in-the-ring as a complementary strategy. Background: Patients with reduced left ventricular (LV) function undergoing coronary artery bypass graft surgery or/and aortic valve replacement occasionally show severe mitral valve (MV) regurgitation and thus also undergo surgical mitral annuloplasty. Over time, further deterioration of LV function and additional ischemic events cause recurrence of severe MV regurgitation due to the Carpentier IIIb morphology of the MV that is not adequately addressed by the previously implanted annuloplasty ring. Methods: Seven patients (Society of Thoracic Surgeons score: 7.5 ± 1.5%) with Carpentier type-IIIb recurrent severe MV regurgitation, having undergone prior cardiothoracic surgery (median: 40 months) including mitral annuloplasty, were treated with the MitraClip device.
Results: MitraClip implantation resulted in significantly reduced MV regurgitation and improved New York Heart Association functional state, translating into an increased exercise capability and improved cardiac biomarkers. The morphology of the MV was adequately addressed without causing relevant MV stenosis, while the MV annulus area remained unaltered. The procedure was safe, with a 30-day mortality rate of 0%. Conclusion: MitraClip-in-the-ring is feasible and in principle safe for treating Carpentier type IIIb severe MV regurgitation after surgical MV repair using mitral annuloplasty. MitraClip-in-the-ring resulted in immediate amelioration of clinical symptoms and increased physical exercise capacity. abstract_id: PUBMED:25660924 Undersized and overstretched: mitral mechanics after restrictive annuloplasty. N/A abstract_id: PUBMED:21251720 Impact of increased transmitral gradients after undersized annuloplasty for chronic ischemic mitral regurgitation. Background: Recent studies have demonstrated that undersized ring mitral annuloplasty (URMA) for chronic ischemic mitral regurgitation (CIMR) can induce iatrogenic mitral stenosis. The impact of this functional mitral stenosis on clinical and echocardiographic results is not well established. Methods: 125 consecutive URMA procedures for CIMR were dichotomized according to postoperative mean trans-mitral gradient (Δp) into Group A (61 patients, >5 mm Hg) and Group B (64 patients, ≤5 mm Hg). Echocardiographic, clinical and functional outcomes were prospectively recorded and compared. Results: There were no hospital deaths. Intensive-care and hospital length of stay were comparable in the 2 groups (p=N.S.). Actuarial survival at 23 months was 73.2 ± 8.0%, without inter-group differences (log-rank p=0.627); actuarial freedom from congestive heart failure was 71.4 ± 5.6% and freedom from hospitalization was 59.8 ± 7.7%, without inter-group differences (p=0.497 and 0.393, respectively); and actuarial freedom from recurrent CIMR was 62.7 ± 10.4%, without inter-group difference (p=0.259). Both groups showed progressive improvement of NYHA class (Time p=0.0001), with reduced diuretics (p=0.0001), and without inter-group differences (Group × Time p=0.894 and 0.397, respectively). Both groups showed a constant improvement of left ventricular end-systolic diameters, ejection fraction, CIMR grade, tricuspid insufficiency grading, indexed left ventricular mass, systolic pulmonary arterial pressure, and tricuspid annular plane systolic excursion (Time p=0.0001 for all), without inter-group differences (p=N.S. for all). However, left ventricular end-diastolic diameters were better remodeled in Group A (Group × Time p=0.037), together with a higher mean trans-mitral Δp and a lower coaptation depth (Group × Time p=0.0001 and 0.05, respectively). Left atrial diameter was ameliorated in Group B, but remained unchanged in Group A (p=0.168). Conclusions: URMA cures CIMR. The induction of mild mitral stenosis did not affect clinical, functional and echocardiographic outcomes. abstract_id: PUBMED:25660923 Mitral valve area during exercise after restrictive mitral valve annuloplasty: importance of diastolic anterior leaflet tethering. Background: Restrictive mitral valve annuloplasty (RMA) for secondary mitral regurgitation might cause functional mitral stenosis, yet its clinical impact and underlying pathophysiological mechanisms remain debated.
Objectives: The purpose of our study was to assess the hemodynamic and clinical impact of effective orifice area (EOA) after RMA and its relationship with diastolic anterior leaflet (AL) tethering at rest and during exercise. Methods: Consecutive RMA patients (n = 39) underwent a symptom-limited supine bicycle exercise test with Doppler echocardiography and respiratory gas analysis. EOA, transmitral flow rate, mean transmitral gradient, and systolic pulmonary arterial pressure were assessed at different stages of exercise. AL opening angles were measured at rest and peak exercise. Mortality and heart failure readmission data were collected for at least 20 months after surgery. Results: EOA and AL opening angle were 1.5 ± 0.4 cm² and 68 ± 10°, respectively, at rest (r = 0.4; p = 0.014). EOA increased significantly to 2.0 ± 0.5 cm² at peak exercise (p < 0.001), showing an improved correlation with AL opening angle (r = 0.6; p < 0.001). Indexed EOA (EOAi) at peak exercise was an independent predictor of exercise capacity (maximal oxygen uptake, p = 0.004) and was independently associated with freedom from all-cause mortality or hospital admission for heart failure (p = 0.034). Patients with exercise EOAi <0.9 cm²/m² (n = 14) compared with ≥0.9 cm²/m² (n = 25) had a significantly worse outcome (p = 0.048). In multivariate analysis, AL opening angle at peak exercise (p = 0.037) was the strongest predictor of exercise EOAi. Conclusions: In RMA patients, EOA increases during exercise despite fixed annular size. Diastolic AL tethering plays a key role in this dynamic process, with increasing AL opening during exercise being associated with higher exercise EOA. EOAi at peak exercise is a strong and independent predictor of exercise capacity and is associated with clinical outcome. Our findings stress the importance of maximizing AL opening by targeting the subvalvular apparatus in future repair algorithms for secondary mitral regurgitation. Answer: The question of whether undersized annuloplasty for functional mitral regurgitation (FMR) leads to clinically relevant mitral stenosis during exercise has been addressed in several studies. According to the available abstracts, the evidence suggests that while restrictive mitral annuloplasty can result in a degree of mitral stenosis, it does not necessarily limit exercise capacity or lead to clinically significant mitral stenosis during exercise. One study found that at peak exercise, patients who underwent undersized ring annuloplasty for FMR showed improved cardiac performance indices and only a mild increase in planimetric mitral valve area, without significant changes in systolic pulmonary artery pressure, indicating preserved cardiac adaptation to exercise (PUBMED:22953670). Another study reported that restrictive mitral annuloplasty for secondary mitral regurgitation resulted in higher mitral gradients during exercise, but primary heart disease was more important for patients' exercise performance than the mitral stenosis resulting from using an undersized ring (PUBMED:26228598). Furthermore, a study comparing mitral valve annuloplasty and mitral valve replacement found that while mitral valve annuloplasty may cause functional mitral stenosis, especially during exercise, it was associated with poor exercise mitral hemodynamic performance and elevated postoperative exercise systolic pulmonary arterial pressure compared with mitral valve replacement (PUBMED:24199762).
However, another study indicated that transmitral gradients after restrictive mitral valve annuloplasty did not correlate with functional capacity as measured by maximal oxygen uptake during semi-supine bicycle testing (PUBMED:24332186). Additionally, a technique termed interrupted commissural band annuloplasty (iCBA) was developed to prevent postoperative mitral stenosis both at rest and during exercise (PUBMED:27225485). In contrast, another study found that restrictive mitral valve annuloplasty (RMA) for degenerative mitral regurgitation was associated with functional stenosis, and that pulmonary arterial pressure at rest and at peak exercise discriminated low exercise capacity (PUBMED:29114917). In summary, while restrictive mitral annuloplasty may result in some degree of mitral stenosis, its impact on exercise capacity and its clinical relevance during exercise appear to be influenced by various factors, including the primary heart disease and the specific surgical technique used.
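Several of these studies express functional mitral stenosis through the effective orifice area indexed to body surface area (EOAi = EOA/BSA), and PUBMED:25660923 dichotomizes outcomes at a peak-exercise EOAi of 0.9 cm²/m². The sketch below illustrates that arithmetic only: the Du Bois BSA formula is standard, but the patient values are invented, and the 0.9 cm²/m² cut-off is specific to that study's exercise measurements rather than a general diagnostic criterion.

```python
def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Du Bois formula."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def indexed_eoa(eoa_cm2: float, height_cm: float, weight_kg: float) -> float:
    """Effective orifice area indexed to body surface area (cm^2/m^2)."""
    return eoa_cm2 / bsa_du_bois(height_cm, weight_kg)

# Invented example: peak-exercise EOA of 1.6 cm^2 in a 170 cm, 75 kg patient.
eoai = indexed_eoa(eoa_cm2=1.6, height_cm=170, weight_kg=75)
below_cutoff = eoai < 0.9  # study-specific cut-off from PUBMED:25660923
print(f"EOAi = {eoai:.2f} cm^2/m^2; below 0.9 cut-off: {below_cutoff}")
```

For this invented patient the BSA is about 1.86 m², so an absolute EOA of 1.6 cm² yields an EOAi of roughly 0.86 cm²/m², which would fall in the worse-outcome group of that study; the same absolute orifice area in a smaller patient would index above the cut-off, which is the point of indexing.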
Instruction: Do Substance Use, Psychosocial Adjustment, and Sexual Experiences Vary for Dating Violence Victims Based on Type of Violent Relationships? Abstracts: abstract_id: PUBMED:27866389 Do Substance Use, Psychosocial Adjustment, and Sexual Experiences Vary for Dating Violence Victims Based on Type of Violent Relationships? Background: We examined whether substance use, psychosocial adjustment, and sexual experiences vary for teen dating violence victims by the type of violence in their relationships. We compared dating youth who reported no victimization in their relationships to those who reported being victims of intimate terrorism (dating violence involving one physically violent and controlling perpetrator) and those who reported experiencing situational couple violence (physical dating violence absent the dynamics of power and control). Methods: This was a cross-sectional survey of 3745 dating youth from 10 middle and high schools in the northeastern United States, one third of whom reported physical dating violence. Results: In general, teens experiencing no dating violence reported less frequent substance use, higher psychosocial adjustment, and less sexual activity than victims of either intimate terrorism or situational couple violence. In addition, victims of intimate terrorism reported higher levels of depression, anxiety, and anger/hostility compared to situational couple violence victims; they also were more likely to report having sex, and earlier sexual initiation. Conclusions: Youth who experienced physical violence in their dating relationships, coupled with controlling behaviors from their partner/perpetrator, reported the most psychosocial adjustment issues and the earliest sexual activity. abstract_id: PUBMED:31485923 Substance Use and Disparities in Teen Dating Violence Victimization by Sexual Identity Among High School Students. Sexual minority youth (SMY) report more substance use and experience more physical and sexual dating violence victimization than heterosexual youth; however, few studies have explored the relationship between substance use and disparities in teen dating violence and victimization (TDVV) using national-level estimates, and examined if these relationships vary by sexual minority subgroups. Data from the nationally representative 2015 and 2017 national Youth Risk Behavior Surveys were used to examine differences in TDVV and substance use by sexual identity, and to determine if substance use was associated with TDVV disparities between SMY and heterosexual high school students who dated 12 months prior to the survey (n = 18,704). Sex-stratified logistic regression models generated prevalence ratios adjusted for demographic characteristics and substance use behaviors to determine if substance use mediated the relationship between sexual identity and TDVV. Compared with their heterosexual peers, SMY experienced higher rates of TDVV and were more likely to report using most types of substances, although differences were more pronounced among female students compared with male students. Disparities in TDVV were reduced for male gay and bisexual students as well as for female bisexual students once substance use was entered into the model, suggesting that there is a relationship between substance use and some of gay and bisexual students' risk for experiences of TDVV. Comprehensive efforts for violence prevention among sexual minority students may benefit from incorporating substance use prevention, given its relationship to disparities in TDVV. 
abstract_id: PUBMED:33719695 Associations of Relationship Experiences, Dating Violence, Sexual Harassment, and Assault With Alcohol Use Among Sexual and Gender Minority Adolescents. Sexual and gender minority (SGM) adolescents report higher rates of dating violence victimization compared with their heterosexual and cisgender peers. Research on dating violence often neglects diversity in sexual and gender identities and is limited to experiences in relationships. Further, given that dating violence and alcohol use are comorbid, research on experiences of dating violence could provide insights into alcohol use disparities among SGM adolescents. We aimed to map patterns of relationship experiences, sexual and physical dating violence, and sexual and physical assault and explored differences in these experiences among SGM adolescents. Further, we examined how these patterns explained alcohol use. We used a U.S. non-probability national web-based survey administered to 13-17-year-old SGM adolescents (N = 12,534). Using latent class analyses, four patterns were identified: low relationship experience, dating violence, and harassment and assault (72.0%); intermediate dating experiences, sexual harassment, and assault, with low levels of dating violence (13.1%); high dating experiences, dating violence, and sexual assault (8.6%); and high dating experiences, dating violence, and sexual harassment and assault (6.3%). Compared to lesbian and gay adolescents, bisexual adolescents reported more experiences with dating, dating violence, and sexual assault, whereas heterosexual adolescents reported fewer experiences with dating, dating violence, and sexual harassment and assault. Compared to cisgender boys, cisgender girls, transgender boys, and non-binary/assigned male at birth adolescents were more likely to experience dating violence inside and outside of relationship contexts. Experiences of dating, dating violence, and sexual harassment and assault were associated with both drinking frequency and heavy episodic drinking. Together, the findings emphasize the relevance of relationship experiences when studying dating violence and how dating violence and sexual harassment and assault might explain disparities in alcohol use. abstract_id: PUBMED:38195560 Experiences of nursing students who are victims of dating violence: a qualitative study. Background: Dating Violence (DV) is a type of Intimate Partner Violence that occurs between young people and comprises behaviours that cause physical, sexual or psychological harm. Objective/aim: To explore the experiences of university students with dating violence. Design And Methods: A qualitative study with a phenomenological approach was conducted through semi-structured individual interviews, based on the same starting categories, with nursing students who were victims of dating violence. The participants were nursing students who freely agreed to take part in the interviews and gave their informed consent. Results: Eleven nursing students participated; the sample was heterogeneous in gender and sexual diversity. Findings covered their experiences of dating violence, manifestations of dating violence and cyberviolence in their relationships, its consequences, formal and informal help-seeking, and their proposals for help as nursing students, among other themes. Conclusion: Dating violence is a serious problem that deeply affects victims and requires the creation of prevention programs.
University students' experiences of DV are mainly painful ones, with serious consequences for those involved, who need support from their close environment as well as professional help to overcome the problems caused by their partners. Implications: Given the high prevalence of this phenomenon, including among nursing students, it is important to provide future health professionals and victims of dating violence with key guidance on how to act against violence, an area where knowledge is often lacking. This study clarifies the experiences of dating violence and how to offer help to victims from the informal and professional sphere. Trial Registration: This study was approved by the Ethics Committee of Clinical Research of the Health Area of Talavera de la Reina (Toledo) with code 01/2021. abstract_id: PUBMED:32652131 The mediating role of school connectedness in the associations between dating and sexual violence victimization and substance use among high school students. Dating and sexual violence victimization affect a significant portion of teenagers and result in a wide array of negative health and behavioral outcomes, including increased alcohol and drug use. In some cases, students who have been victimized may develop feelings of being unsupported by or disconnected from peers and adults in their school community, placing them at even higher risk for negative health outcomes. Using a prospective design, the present study sought to explore this possibility by examining the direct and indirect associations between dating violence (DV) and sexual violence (SV) victimization, school connectedness, and alcohol and marijuana use at baseline (T1) and 2-month follow-up (T2) in a sample of high school students (N = 1752). Results of multiple regression analyses supported a hypothesized mediation model of these associations; both forms of victimization were positively associated with heavy drinking at T1 and marijuana use at T1 and T2, and negatively associated with school connectedness. Furthermore, school connectedness was negatively associated with both forms of substance use at T1 and T2, and partially mediated the effects of DV and SV victimization on heavy drinking at T1, and marijuana use at T1 and T2. These findings elucidate the importance of addressing intermediary cognitive processes such as perceptions of school connectedness in order to improve health and functional outcomes among high school victims of dating and sexual violence. abstract_id: PUBMED:30982385 Psychological Aggression, Attitudes About Violence, Violent Socialization, and Dominance in Dating Relationships. Psychological aggression is a widespread form of abuse in dating relationships, especially in collectivist societies with ties to patriarchal beliefs. Despite the prevalence of psychological aggression, it has seldom been studied in connection with known antecedents of interpersonal violence, including dominance, attitudes supportive of violence, and violence socialization processes during childhood. The present study sought to test relationships among these variables in young men and women. A total of 500 Mexican undergraduate students in northern Mexico reported on their experiences with psychological aggression, the dominance of a dating partner, and violent socialization during childhood, as well as on their approval of violence within and outside the family. The results indicate that the dominance of a dating partner is directly linked to male and female intimate partner violence (IPV) perpetration.
Violent socialization and proviolent attitudes appear to be related to female dominance. Female and male psychological aggression victimization was predicted by the participant's own perpetration. In general, a dyadic approach appears to be useful for explaining psychological aggression perpetration and victimization in a collectivist society, in light of recent changes in normative beliefs held by young educated Mexicans. Implications for future research and public policy are discussed. abstract_id: PUBMED:32198599 Sexual Dating Violence, School-Based Violence, and Risky Behaviors Among U.S. High School Students. Sexual dating violence is associated with several risky health behaviors among adolescents. This study explored the associations between school-based violence, risky health behaviors, and sexual dating violence victimization among U.S. high school students using the 2017 Youth Risk Behavior Survey data. Results indicate a statistically significant correlation (p < .05) between sexual dating violence, sex, sexual identity, and various risky behaviors including bullying, electronic bullying, alcohol use, and physical fighting. These additional behavioral risks experienced by sexual dating violence victims should be further researched to determine their impact on overall quality of life and to help guide health education intervention development. abstract_id: PUBMED:25395224 Dating Violence and Substance Use: Exploring the Context of Adolescent Relationships. The connection between adolescent dating violence (ADV) and substance use is important to consider because of the serious consequences for teens who engage in these behaviors. Although prior research shows that these two health problems are related, the context in which they occur is missing, including when (i.e., the timeline) in the relationship these events occur. To fill this gap, eight sex-specific focus groups were conducted with 39 high school-aged teens, all of whom had experienced prior relationship violence. Adolescents discussed using alcohol and/or drugs at the start of the dating relationship and after the relationship ended as a way to cope with the break-up. Alcohol and drugs were also used throughout to cope with being in an abusive relationship. The intersection of ADV and substance use occurred during instances when both partners were using alcohol and/or drugs, as well as when only one partner was using. These findings provide support for prevention and intervention programs that consider the intersection of ADV and substance use. abstract_id: PUBMED:36519711 Prevalence and Correlates of Non-Dating Sexual Violence, Sexual Dating Violence, and Physical Dating Violence Victimization among U.S. High School Students during the COVID-19 Pandemic: Adolescent Behaviors and Experiences Survey, United States, 2021. The COVID-19 pandemic created an environment of disruption and adversity for many adolescents. We sought to establish the prevalence of non-dating sexual violence, sexual dating violence, and physical dating violence victimization among adolescents during the COVID-19 pandemic and to investigate whether experiences of disruption and adversity placed adolescents at greater risk for these forms of interpersonal violence. We conducted a secondary analysis of data from the Adolescent Behaviors and Experiences Survey, collected January to June 2021 from a nationally representative sample of U.S. high school students (N = 7,705).
Exposures included abuse by a parent; economic, housing, and food and nutrition insecurity; interpersonal connectedness; and personal well-being. Among female students, 8.0% experienced non-dating sexual violence; 12.5% experienced sexual dating violence; and 7.7% experienced physical dating violence. Among male students, 2.2% experienced non-dating sexual violence; 2.4% experienced sexual dating violence; and 4.9% experienced physical dating violence. Among female students, both emotional and physical abuse by a parent was related to non-dating sexual violence, emotional abuse was related to sexual dating violence, and physical abuse was related to physical dating violence. Among males, emotional abuse by a parent was related to physical dating violence and physical abuse by a parent was related to sexual dating violence. Hunger was associated with sexual and physical dating violence among female students, and homelessness was associated with physical dating violence among male students. Although there were differences by sex, abuse by a parent, hunger, and homelessness created precarity that may have increased the likelihood that adolescents would be exposed to risky peer or dating relationships. Adolescents need support that stops and prevents experiences of non-dating sexual violence and dating violence, connected to interventions that address adversities experienced during the COVID-19 pandemic. abstract_id: PUBMED:12803450 Risk profiles of adolescent girls who were victims of dating violence. The prevalence of dating violence, as well as its relationship to potential risk factors, was examined among a nationally representative sample of 9th- through 12th-grade U.S. females (N = 7,824) who completed the 1999 Youth Risk Behavior Survey. The dependent variable was physical dating violence; the independent variables were violence, suicide, substance use, and sexual risk behavior. Unadjusted odds ratios (OR) and 95% confidence intervals were examined, and multivariate logistic regression analysis was conducted to identify the most significant risk factors. It was found that nearly one in ten of the adolescent girls reported dating violence. Twelfth-grade girls had the highest prevalence of dating violence. Girls who were victims of dating violence were more likely to be involved in other violent behaviors, to report extreme sadness and suicidal actions, to use illicit substances, and to engage in risky sexual behavior. Multivariate analysis revealed that dating violence was associated with sad/hopeless feelings (OR = 2.13), binge drinking (OR = 1.96), cocaine or inhalant use (OR = 2.90), multiple sex partners (OR = 2.38), nonuse of condoms (OR = 1.53), and ethnicity, specifically being Black (OR = 2.32) or Hispanic (OR = 1.82). These findings provide further evidence that there is a clustering of risk factors among certain adolescents, specifically girls who experience dating violence. Answer: Yes, substance use, psychosocial adjustment, and sexual experiences do vary for dating violence victims based on the type of violent relationships. According to the research, teens experiencing no dating violence reported less frequent substance use, higher psychosocial adjustment, and less sexual activity than victims of either intimate terrorism or situational couple violence (PUBMED:27866389). Intimate terrorism, which involves one physically violent and controlling perpetrator, is associated with higher levels of depression, anxiety, and anger/hostility compared to victims of situational couple violence.
Victims of intimate terrorism also reported having sex and earlier sexual initiation (PUBMED:27866389). Furthermore, sexual minority youth (SMY) experience higher rates of teen dating violence victimization (TDVV) and are more likely to report using most types of substances. Disparities in TDVV were reduced for male gay and bisexual students as well as for female bisexual students once substance use was entered into the model, suggesting a relationship between substance use and some of gay and bisexual students' risk for experiences of TDVV (PUBMED:31485923). Additionally, sexual and gender minority (SGM) adolescents report higher rates of dating violence victimization and alcohol use compared with their heterosexual and cisgender peers. Experiences of dating, dating violence, and sexual harassment and assault were associated with both drinking frequency and heavy episodic drinking (PUBMED:33719695). In summary, the type of violent relationship and the sexual identity of the individuals involved can influence the extent and nature of substance use, psychosocial adjustment, and sexual experiences among dating violence victims.
Instruction: Statin adherence and LDL cholesterol levels. Should we assess adherence prior to statin upgrade? Abstracts: abstract_id: PUBMED:25770073 Statin adherence and LDL cholesterol levels. Should we assess adherence prior to statin upgrade? Background: Adherence to statin therapy has been shown to be suboptimal. In statin-treated patients with residual elevated low density lipoprotein cholesterol (LDL-C) levels, the physician must decide whether to switch to a more potent statin or try to achieve better adherence. We examined the association between adherence and LDL-C within low, moderate and high intensity statin groups in a "real world" setting. Methods: We assessed annual adherence by the mean MPR (Medication Possession Ratio = number of purchased/prescribed daily doses) in an unselected patient group. Statins were stratified (ACC/AHA Guideline) into low, moderate and high intensity groups. The impact of adherence on LDL levels was assessed by LOESS (locally weighted scatter plot smoothing). Results: Out of 1183 patients, 173 (14.6%) were treated with low, 923 (78.0%) with moderate and 87 (7.4%) with high intensity statins. Statin intensity was inversely associated with adherence (MPR 77±21, 73±22 and 69±21% for low, moderate and high intensity respectively, p=0.018). Non-adjusted LDL levels decreased with higher adherence: a 10% adherence increase resulted in an LDL decrease of 3.5, 5.8 and 7.1 mg/dL in the low, moderate and high intensity groups. Analysis of the adherence effect on LDL levels adjusted for age, DM and ischemic heart disease showed that MPR above 80% was associated with an additional decrease in LDL levels only in the high intensity group. Conclusions: Increased adherence to statins beyond an MPR of 80% improves LDL levels only among patients given high intensity therapy. Switching from lower to higher intensity therapy may be more effective than further efforts to increase adherence. abstract_id: PUBMED:28814932 Achievement of LDL Cholesterol Goal and Adherence to Statin by Diabetes Patients in Kelantan. Background: Statins are a class of potent drugs that can be used to reduce cholesterol, especially low-density lipoprotein cholesterol (LDL-C). However, their effectiveness is limited if adherence to treatment is poor. The objectives of the study are to estimate the proportion of diabetic patients who have achieved the LDL-C goal and to determine the association of LDL-C achievement with sociodemographic factors and statin therapy adherence. Methods: This is a cross-sectional study involving 234 patients with type 2 diabetes mellitus (T2DM) and dyslipidaemia attending an outpatient clinic in a hospital in Kelantan. Interviews and self-administered questionnaires were used to determine their sociodemographic and clinical characteristics. Adherence to therapy was assessed using the Medication Compliance Questionnaire (MCQ). The associations between the achievement of LDL targets and sociodemographic/clinical factors, including adherence, were analysed with simple logistic regression. Results: About 37.6% of patients achieved their LDL-C target. The percentage of patients who adhered to statin use was 98.3%, and 20.5% of these patients reported full adherence. There was no significant association between achievement of LDL-C targets and adherence or other sociodemographic factors, such as age, gender and educational or economic status (all P-values > 0.05). Conclusion: Despite a high level of adherence, the majority of patients failed to achieve LDL-C targets.
More concerted efforts are needed to improve this. abstract_id: PUBMED:26359331 Adherence to standard-dose or low-dose statin treatment and low-density lipoprotein cholesterol response in type 2 diabetes patients. Objective: To determine the association between adherence, dose and low-density lipoprotein (LDL) cholesterol response in patients with type 2 diabetes initiating statin treatment. Research Design And Methods: This cohort study was performed using data for 2007-2012 from the Groningen Initiative to Analyse Type 2 Diabetes Treatment (GIANTT) database. The association between adherence to a standard-dose statin and LDL cholesterol response was assessed using linear regression, adjusting for covariates. The effect of low-dose versus standard-dose treatment was assessed in a propensity-score matched cohort. Adherence rates, defined as the proportion of days covered (PDC), were estimated between statin initiation and LDL outcome measurement. Main Outcome Measure: LDL cholesterol level at follow-up. Results: The effect of adherence on LDL cholesterol response, measured in 2160 patients, was dependent on the baseline LDL cholesterol level. For patients with a baseline LDL cholesterol of 3.7 mmol/l and an adherence rate of 80%, a 40% reduction in LDL cholesterol was predicted. In the matched sample of 1144 patients, the treatment dose showed a difference in impact on the outcome for adherence rates higher than 50%. It was estimated that a patient with a baseline LDL cholesterol of 3.7 mmol/l will need an adherence rate of at least 76% on low-dose and 63% on standard-dose treatment to reach the LDL cholesterol target of 2.5 mmol/l. Limitations: Adherence was measured as the PDC, which is known to overestimate actual adherence. Also, we were not able to adjust for lifestyle factors. Conclusions: We determined the concurrent effect of treatment adherence and dose on LDL cholesterol outcomes. Given the adherence levels seen in clinical practice, diabetes patients initiating statin treatment are at high risk of not reaching the recommended cholesterol target, especially when they start on a low-dose statin. abstract_id: PUBMED:24917035 Self-reported adherence by MARS-CZ reflects LDL cholesterol goal achievement among statin users: validation study in the Czech Republic. Rationale, Aims And Objectives: Measuring self-reported adherence may contribute to minimizing the risk of therapy failure. Hence, the main aim of the study was to assess the psychometric properties of the Czech version of the Medication Adherence Report Scale (MARS-CZ) and its appropriateness for use in long-term statin therapy, where goal levels of low-density lipoprotein cholesterol (LDL-c) should be achieved. Methods: An anonymous structured interview was performed to determine self-reported adherence by MARS-CZ in outpatients chronically treated with statins. At the same time, medication records were reviewed to classify patients into groups of those who achieved and those who did not achieve the LDL-c goal according to cardiovascular risk level. Reliability and validity of MARS-CZ were tested, and the relationship between adherence and LDL-c goal achievement was examined. Results: A total of 136 (86.6%) patients completed the interview; mean age was 66.1 years; 49.3% were male. The mean score of MARS-CZ was 24.4 and showed positive skewing.
Satisfactory internal consistency (Cronbach's α=0.54), strong test-retest reliability (r=0.83, P<0.001; intra-class correlation=0.63, 95% confidence interval: 0.35-0.81) and positive correlation with the eight-item Morisky Medication Adherence Scale (r=0.62, P<0.001) were indicated. Low validity values were found between MARS-CZ and the 12-item Short Form Health Survey mental and physical subscales. MARS-CZ score significantly correlated with LDL-c goal achievement (P<0.05), as all patients who achieved the LDL-c goal (35%) reported high adherence to statins. MARS-CZ score also correlated with cardiovascular risk level and doctors' judgments on adjusting treatment targets for each patient. Conclusion: This study established MARS-CZ as an acceptable self-reported adherence measure. In routine clinical practice, MARS-CZ could help reveal medication non-adherence before a drug regimen is altered, thereby contributing to better management of statin therapy. abstract_id: PUBMED:23744794 Association between statin adherence and cholesterol level reduction from baseline in a veteran population. Study Objective: To investigate the association between statin adherence and changes in lipid panel outcomes from baseline in a veteran population. Design: Retrospective cohort study using multiple linear regression models. Setting: Veterans Affairs health care system within the Veterans Integrated Service Network 22, a network of Veterans Affairs facilities in the southwest region of the United States that includes Los Angeles, San Diego, Loma Linda, and Long Beach, California, and Las Vegas, Nevada, with an enrollment of approximately 1.4 million veterans. Patients: A total of 5365 patients who were new statin users between December 1, 2006, and November 30, 2007; 2674 patients were in the adherent group and 2691 were in the nonadherent group. Measurements And Main Results: Adherence was determined by the medication possession ratio. Patients were categorized as adherent if the medication possession ratio at follow-up was 0.80 or more. Adherent patients exhibited significant differences in baseline demographic and clinical characteristics compared with nonadherent patients in our study sample. Baseline laboratory values for adherent patients were significantly lower for low-density lipoprotein cholesterol (LDL), high-density lipoprotein cholesterol (HDL), non-high-density lipoprotein cholesterol (non-HDL), and total cholesterol levels. The primary outcome was change in LDL level from baseline at 12 months. Secondary outcomes were changes in non-HDL and total cholesterol levels from baseline at 12 months. Independent variables controlled for in the multiple linear regression included age, sex, body mass index, race-ethnicity, baseline lipid panel (LDL, HDL, total cholesterol, and triglycerides), statin copayment status, income quintile (according to ZIP code median household income), baseline medication count, statin prescribed, and comorbidities. Multiple linear regression revealed that adherent patients demonstrated significantly greater reductions in LDL of 20.98 mg/dl versus nonadherent patients (p<0.0001). Adherent patients similarly demonstrated larger reductions of 24.31 mg/dl in non-HDL and 24.06 mg/dl in total cholesterol versus nonadherent patients (p<0.0001 for both comparisons).
Conclusion: Patients adherent to statin therapy showed significant, clinically relevant reductions in LDL, non-HDL, and total cholesterol from baseline at 12 months compared with nonadherent patients when controlling for potential confounders. Adherence to statin therapy may have important consequences in reducing clinical events such as myocardial infarction, stroke, and mortality, given the large reductions in lipid panel outcomes from baseline at 12 months. abstract_id: PUBMED:31886861 Adherence to statin therapy favours survival of patients with symptomatic peripheral artery disease. Aims: We hypothesized that adherence to statin therapy determines survival in patients with peripheral artery disease (PAD). Methods And Results: Single-centre longitudinal observational study with 691 symptomatic PAD patients. Mortality was evaluated over a mean follow-up of 50 ± 26 months. We related statin adherence and low-density lipoprotein cholesterol (LDL-C) target attainment to all-cause mortality. Initially, 73% of our PAD patients were on statins. At follow-up, we observed an increase to 81% (P < 0.0001). Statin dosage, normalized to simvastatin 40 mg, increased from 50 to 58 mg/day (P < 0.0001), and was paralleled by a mean decrease of LDL-C from 97 to 82 mg/dL (P < 0.0001). The proportion of patients receiving a high-intensity statin increased over time from 38% to 62% (P < 0.0001). Patients never receiving statins had a significantly higher mortality rate (31%) than patients continuously on statins (13%) or having newly received a statin (8%; P < 0.0001). Moreover, patients on intensified statin medication had a low mortality of 9%. Those who terminated statin medication or reduced statin dosage had a higher mortality (34% and 20%, respectively; P < 0.0001). Multivariate analysis showed that adherence to or an increase of the statin dosage (both P = 0.001), as well as a newly prescribed statin therapy (P = 0.004), independently predicted reduced mortality. Conclusion: Our data suggest that adherence to statin therapy is associated with reduced mortality in symptomatic PAD patients. A strategy of intensive and sustained statin therapy is recommended. abstract_id: PUBMED:36697324 Adherence to statin treatment in patients with familial hypercholesterolemia: A dynamic prediction model. Background: Statins are the primary therapy in patients with heterozygous familial hypercholesterolemia (HeFH). Non-adherence to statin therapy is associated with increased cardiovascular risk. Objective: We constructed a dynamic prediction model to predict statin adherence for an individual HeFH patient for each upcoming statin prescription. Methods: All patients with HeFH, identified by the Dutch Familial Hypercholesterolemia screening program between 1994 and 2014, were eligible. National pharmacy records dated between 1995 and 2015 were linked. We developed a dynamic prediction model that estimates the probability of statin adherence (defined as proportion of days covered >80%) for an upcoming prescription using a mixed effect logistic regression model. Static and dynamic patient-specific predictors, as well as data on a patient's adherence to past prescriptions, were included. The model with the lowest AIC (Akaike Information Criterion) value was selected. Results: We included 1094 patients for whom a statin was prescribed 21,171 times.
Based on the model with the lowest AIC, age at HeFH diagnosis, history of a cardiovascular event, time since HeFH diagnosis and duration of the next statin prescription contributed to increased adherence, while adherence decreased with higher untreated LDL-C levels and higher intensity of statin therapy. The dynamic prediction model showed an area under the curve of 0.63 at HeFH diagnosis, which increased to 0.85 after six years of treatment. Conclusion: This dynamic prediction model enables clinicians to identify HeFH patients at risk for non-adherence during statin treatment. These patients can be offered timely interventions to improve adherence and further reduce cardiovascular risk. abstract_id: PUBMED:37702065 Association of PCSK9 Inhibitor Initiation on Statin Adherence and Discontinuation. Background PCSK9is (proprotein convertase subtilisin/kexin type 9 inhibitors) are well tolerated, potently lower cholesterol, and decrease cardiovascular events when added to statins. However, statin adherence may decrease after PCSK9i initiation and alter clinical outcomes. We evaluated the association of PCSK9i initiation with statin discontinuation and adherence. Methods and Results In this retrospective pre-post difference-in-difference analysis, new PCSK9i claims were propensity matched with statin-alone users (April 2017-September 2019). The primary outcomes were statin adherence (proportion of days covered) and statin discontinuation (absence of statin coverage for at least 60 days) 12 months following PCSK9i initiation. Secondary outcomes included low-density lipoprotein cholesterol levels after 1 year. A total of 220,538 statin users and 700 PCSK9i users were identified, from which 178 on PCSK9i were included and matched to 712 on statins alone. At 12 months, mean statin proportion of days covered decreased from 67% to 48% in the PCSK9i group but increased from 68% to 86% in the statin-alone group (P<0.0001). Statin discontinuation rates increased from 11% to 39% in the PCSK9i group and from 7% to 9% in the statin-alone group (P=0.0041). Patients with low-density lipoprotein cholesterol <70 mg/dL increased from 5% to 68% with PCSK9i but increased from 16% to 24% with statins alone (P<0.0001). Changes in hospitalization rates were similar between both groups during the follow-up period. Conclusions PCSK9i initiation was associated with decreased low-density lipoprotein cholesterol, higher statin discontinuation, and reduced statin adherence. abstract_id: PUBMED:36345979 Statin adherence in patients with high cardiovascular risk: a cross-sectional study. Objectives: Statin adherence remains a major problem, even though lifelong medication is recommended, especially in patients with high cardiovascular risk. The importance of perceived risk as a predictor of adherence among cardiology patients has not been fully explored. This study aimed to test the importance of perceived risk as a predictor of statin adherence amongst hypercholesterolemic patients and to identify predictors associated with poor adherence. Methods: This cross-sectional study was conducted at cardiology outpatient clinics of the University hospital in Ankara, Turkey. A total of 327 consecutive patients with high CV risk were recruited. The self-reported Morisky Green Levine Medication Adherence Scale was used to assess statin adherence. Results: Of the patients studied, 34.5% had concerns about side effects.
Also, the mean age was 63.85 ± 11.29 years, 66.1% were men, 32.4% applied non-drug alternative therapies, 53.2% had a Mediterranean-style diet and 20.8% checked their lipid values irregularly. Of the participants, 50.2% reported high, 30% moderate, and 19.9% low statin medication adherence. Low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), triglyceride (TG) and high-density lipoprotein (HDL) control rates were 44.6%, 74.3%, 61.5% and 41.6%, respectively. On multiple logistic regression, concern about side effects was associated with a statistically significant fourfold increase in the odds of non-adherence. Also, being male, being a never or former smoker, not using complementary interventions, having regular visits, having more than 5 years of education, having low depressive symptom scores, living in a rural area, and being employed were significant predictors of high medication adherence scores. Conclusion: Approximately half of the patients reported high medication adherence. Proper strategies to improve adherence would include patient education efforts focused on patients with concerns about side effects and those who are female, less educated, current smokers, interested in complementary interventions, have irregular follow-up visits, and have depressive symptoms. Brief medication adherence scales may facilitate the assessment of patients' adherence. abstract_id: PUBMED:27696318 Statin Adherence: Does Gender Matter? Purpose Of Review: Cardiovascular disease (CVD) continues to be the leading cause of death for men and women in the USA. Statins have contributed significantly to noted declines in cardiovascular-related mortality in the last decade; however, the benefit of statins is inequitable across genders. Women continue to be less likely to take statins and to meet target LDL goals than men. As a possible contributing factor to this disparity, we explore the evidence for gender-based differences in provision of, and adherence to, statins. Recent Findings: Compared with men, women are less likely to adhere to statins. Potential reasons for this gender difference in use of statins can be observed across all phases of adherence, including both intentional and unintentional non-adherence. Notable gender-specific contributing factors for statin non-adherence include decreased provider and patient awareness of CVD risk among women, higher risk of statin intolerance among women, and competing demands associated with family caregiving responsibilities. Similar to limitations in the broader CVD literature, there is inadequate inclusion of gender-specific analyses in statin-related trials. Gender-based disparities in statin adherence can be linked to provider-level, psychosocial, and medication intolerance factors. Interventions designed to improve statin adherence should take gender-specific challenges into consideration, such as women being older at the time of increased CVD risk, higher rates of statin intolerance, and potentially greater caregiving responsibilities. Answer: The relationship between statin adherence and LDL cholesterol levels is complex and has been the subject of various studies. According to the research, adherence to statin therapy is crucial for achieving optimal LDL cholesterol levels, but the impact of adherence may vary depending on the intensity of the statin regimen and other patient-specific factors.
One study found that increased adherence to statins beyond a Medication Possession Ratio (MPR) of 80% improves LDL levels only among patients given high-intensity therapy (PUBMED:25770073). This suggests that for patients on high-intensity statins, ensuring adherence is particularly important for LDL cholesterol reduction. However, for those on low- or moderate-intensity statins, the study suggests that switching from lower to higher intensity therapy may be more effective than further efforts to increase adherence. Another study involving patients with type 2 diabetes mellitus (T2DM) and dyslipidemia found that despite a high level of adherence, the majority of patients failed to achieve LDL-C targets (PUBMED:28814932). This indicates that factors other than adherence, such as the potency of the statin regimen or individual patient response, may play a significant role in achieving LDL-C goals. Research also indicates that the LDL cholesterol response to a standard-dose statin depends on the baseline LDL cholesterol level, with higher adherence rates needed on low-dose treatment to reach the LDL cholesterol target (PUBMED:26359331). This underscores the importance of considering both the dose and adherence when evaluating a patient's response to statin therapy. A study validating the Czech version of the Medication Adherence Report Scale (MARS-CZ) found that self-reported adherence correlates with LDL-C goal achievement (PUBMED:24917035). This suggests that assessing adherence could be helpful in clinical practice to identify non-adherence before altering the drug regimen. In a veteran population, patients adherent to statin therapy had clinically relevant reductions in LDL, non-HDL, and total cholesterol from baseline at 12 months compared with nonadherent patients (PUBMED:23744794). This further supports the idea that adherence is a key factor in achieving cholesterol reduction. For patients with symptomatic peripheral artery disease (PAD), adherence to statin therapy was associated with reduced mortality (PUBMED:31886861). This highlights the potential life-saving benefits of statin adherence in certain high-risk populations.
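Several of the studies above operationalize adherence as the proportion of days covered (PDC) or the medication possession ratio (MPR), with 80% as the conventional adherence threshold. The sketch below illustrates one way a PDC might be computed from pharmacy fill records; the single-drug simplification, the field layout, and the example dates are illustrative assumptions, not details taken from any of the cited studies.

```python
from datetime import date

def pdc(fills, window_start, window_end):
    """Proportion of days covered within an observation window.

    fills: list of (fill_date, days_supply) tuples for one drug class.
    Overlapping fills are counted once, which is the usual PDC convention.
    """
    window_days = (window_end - window_start).days + 1
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date.toordinal() + offset
            if window_start.toordinal() <= day <= window_end.toordinal():
                covered.add(day)
    return len(covered) / window_days

# Hypothetical 90-day statin fills over one year of follow-up.
fills = [(date(2023, 1, 1), 90), (date(2023, 4, 15), 90), (date(2023, 8, 1), 90)]
score = pdc(fills, date(2023, 1, 1), date(2023, 12, 31))
print(f"PDC = {score:.2f}; adherent at the 80% threshold: {score >= 0.80}")
```

A study-grade implementation would typically also handle early refills carried forward, hospitalizations, and censoring, which the individual papers describe in their own methods sections.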
Instruction: Are preoperative Kattan and Stephenson nomograms predicting biochemical recurrence after radical prostatectomy applicable in the Chinese population? Abstracts: abstract_id: PUBMED:23533351 Are preoperative Kattan and Stephenson nomograms predicting biochemical recurrence after radical prostatectomy applicable in the Chinese population? Purpose: Kattan and Stephenson nomograms are based on the outcomes of patients with prostate cancer recruited in the USA, but their applicability to Chinese patients is yet to be validated. We aimed to study the predictive accuracy of these nomograms in the Chinese population. Patients And Methods: A total of 408 patients who underwent laparoscopic or open radical resection of the prostate from 1995 to 2009 were recruited. The preoperative clinical parameters of these patients were collected, and they were followed up regularly with PSA monitored. Biochemical recurrence was defined as two or more consecutive PSA levels >0.4 ng/mL after radical resection of the prostate or secondary cancer treatment. Results: The overall observed 5-year and 10-year biochemical recurrence-free survival rates were 68.3% and 59.8%, which were similar to the values predicted by the Kattan and Stephenson nomograms, respectively. The results of our study achieved good concordance with both nomograms (Kattan: 5-year, 0.64; Stephenson: 5-year, 0.62; 10-year, 0.71). Conclusions: The incidence of prostate cancer in Hong Kong is increasing together with the patients' awareness of this disease. Despite the fact that the Kattan nomograms were derived from a Western population, our study validated them as useful in Chinese patients as well. abstract_id: PUBMED:24146147 External validation of preoperative nomograms predicting biochemical recurrence after radical prostatectomy. Objective: Preoperative nomograms can accurately predict the rate of biochemical recurrence after radical prostatectomy. Although these nomograms were shown to be valid in several external validation cohorts of Caucasian patients, they have not been validated in non-Caucasian patients from Asian countries. We therefore validated these preoperative nomograms in a Japanese cohort, using different cutoff values of prostate-specific antigen concentrations for biochemical recurrence. Methods: We analyzed 637 patients who underwent radical prostatectomy for clinically localized prostate cancer at the Tokyo Medical University Hospital between February 2000 and January 2011. We evaluated two prostate-specific antigen cutoff values for biochemical recurrence, 0.2 and 0.4 ng/ml. Using c-index and calibration plots, we validated the previously developed Kattan and Stephenson nomograms. Results: Overall, the mean 5-year non-biochemical recurrence rate was 72 ± 4%. Using prostate-specific antigen cutoff values of 0.2 and 0.4 ng/ml, the c-indices for the Kattan nomogram were 0.714 and 0.733. Similarly, using prostate-specific antigen cutoff values of 0.2 and 0.4 ng/ml, the c-indices for the Stephenson nomograms were 0.717 and 0.671. The calibration plots showed that the predictive value of the Stephenson nomogram at a prostate-specific antigen cutoff of 0.2 ng/ml was closer to the actual outcomes than other combinations of nomograms and prostate-specific antigen cutoff levels. Conclusions: Because the c-indices of both nomograms were generally high, these nomograms can be applied to our cohort.
The addition of biopsy information did not markedly improve the c-index but resulted in good calibration, indicating that the Stephenson nomogram may be a better fit for our patient cohort. abstract_id: PUBMED:19589584 Validation of two preoperative Kattan nomograms predicting recurrence after radical prostatectomy for localized prostate cancer in Turkey: a multicenter study of the Uro-oncology Society. Objectives: To examine, in a multicenter validation study designed under the guidance of the Uro-Oncology Society, the predictive accuracies of the 1998 and 2006 Kattan preoperative nomograms in Turkish patients. These 2 preoperative Kattan nomograms use preoperative parameters to estimate disease recurrence after radical prostatectomy. Methods: A total of 1261 men with clinically localized prostate cancer undergoing radical prostatectomy were included. The preoperative prostate-specific antigen level, biopsy Gleason score, clinical stage, number of positive and negative prostate biopsy cores, and postoperative recurrence status of all patients were studied. The values predicted by the Kattan nomograms and the observed values were compared. Results: The patient characteristics in the cohort were comparable with those of the cohorts used to create the Kattan nomograms. The 5-year probability of freedom from recurrence was 73% using Kaplan-Meier analysis and was similar to that of the 1998 Kattan nomogram cohort. However, the 10-year probability of freedom from recurrence was 67%, slightly lower than the same estimate from the 2006 nomogram cohort. The recurrence rates predicted by the Kattan nomograms and the observed rates in our cohort were similar. The estimated concordance index values were 0.698 and 0.705 for the 1998 and 2006 nomograms, respectively. Conclusions: The Kattan preoperative nomograms can be used with adequate success in Turkey, because the predicted and observed rates in our cohort were similar. Our results have demonstrated satisfactory concordance index values, suggesting that both the 1998 and the 2006 Kattan preoperative nomograms can safely be used in Turkish patients with similar accuracy. Although the 2006 nomogram had slightly better discrimination, the 1998 nomogram was slightly better calibrated. abstract_id: PUBMED:12474529 A validation of two preoperative nomograms predicting recurrence following radical prostatectomy in a cohort of European men. Kattan et al. at Baylor College of Medicine and D'Amico et al. at Harvard Medical School have each developed preoperative nomograms for prostate cancer recurrence after radical prostatectomy based on readily available clinical variables. Calibration and validation of those tools were achieved using North American patient cohorts, and their validity has not yet been shown in patients from other continents. We investigated the predictive accuracy of these nomograms when applied to European men with localized prostate cancer. Clinical data from patients who underwent radical prostatectomy at the University-Hospital Hamburg and fitted the respective derivation criteria were used for external validation (n = 1003 for the Kattan-Nomogram, n = 932 men for the D'Amico-Nomogram). Nomogram predictions of the probability of 2-year (D'Amico-Nomogram) and 5-year (Kattan-Nomogram) freedom from recurrence were compared with actual follow-up. The predictive accuracy of the nomograms was tested using areas under the receiver-operating-characteristic curves (AUC).
The D'Amico-Nomogram AUC predicting the 2-year probability of freedom from PSA recurrence was 0.80 vs. the Kattan-Nomogram 5-year prediction with an AUC of 0.83. Using the 932 patients who exactly fit the derivation criteria of both nomograms, the predictive accuracy of the Kattan-Nomogram was 0.81. The superiority in predictive accuracy of the Kattan-Nomogram was statistically significant (p = 0.0274) but of unclear clinical significance. The two nomograms predicted recurrence with similar accuracy when applied to men diagnosed with localized prostate cancer in Germany. The high predictive accuracy of both nomograms demonstrates that these predictive tools derived in the U.S. can be applied to non-U.S. patients. abstract_id: PUBMED:29228232 External validation of two web-based postoperative nomograms predicting the probability of early biochemical recurrence after radical prostatectomy: a retrospective cohort study. The present study aimed to validate and compare the predictive accuracies of the Memorial Sloan Kettering Cancer Center (MSKCC) and Johns Hopkins University (JHU) web-based postoperative nomograms for predicting early biochemical recurrence (BCR) after radical prostatectomy (RP) and to analyze clinicopathological factors to predict early BCR after RP using our dataset. The c-index was 0.72 (95% confidence interval (CI): 0.61-0.83) for the MSKCC nomogram and 0.71 (95% CI: 0.61-0.81) for the JHU nomogram, demonstrating fair performance in the Japanese population. Furthermore, we statistically analyzed our 174 patients to elucidate prognostic factors for early BCR within 2 years. Lymphovascular invasion (LVI) including lymphatic vessel invasion (ly) was a significant predictor of early BCR in addition to common variables (pT stage, extraprostatic extension, positive surgical margin and seminal vesicle invasion). LVI, particularly ly, may be a good predictor of early BCR after RP and may improve the accuracy of the nomograms. abstract_id: PUBMED:30390642 Performance of prostate cancer recurrence nomograms by obesity status: a retrospective analysis of a radical prostatectomy cohort. Background: Obesity has been associated with aggressive prostate cancer and poor outcomes. It is important to understand how prognostic tools that guide prostate cancer treatment may be impacted by obesity. The goal of this study was to evaluate the predictive abilities of two prostate cancer (PCa) nomograms by obesity status. Methods: We examined 1576 radical prostatectomy patients categorized into standard body mass index (BMI) groups. Patients were categorized into low, medium, and high risk groups for the Kattan and CaPSURE/CPDR scores, which are based on PSA value, Gleason score, tumor stage, and other patient data. Time to PCa recurrence was modeled as a function of obesity, risk group, and interactions. Results: As expected for the Kattan score, estimated hazard ratios (95% CI) indicated higher risk of recurrence for the medium (HR = 2.99, 95% CI = 2.29, 3.88) and high (HR = 8.84, 95% CI = 5.91, 13.2) risk groups compared to the low risk group. The associations were not statistically different across BMI groups. Results were consistent for the CaPSURE/CPDR score. However, the difference in risk of recurrence in the high risk versus low risk groups was larger for normal weight patients than the same estimate in the obese patients. Conclusions: We observed no statistically significant difference in the association between PCa recurrence and prediction scores across BMI groups.
However, our study indicates that there may be a stronger association between high risk status and PCa recurrence among normal weight patients compared to obese patients. This suggests that high risk status based on PCa nomogram scores may be most predictive among normal weight patients. Additional research in this area is needed. abstract_id: PUBMED:12750804 Can nomograms derived in the U.S. be applied to German patients? A study about the validation of preoperative nomograms predicting the risk of recurrence after radical prostatectomy. In patients suffering from prostate cancer, preoperative nomograms that predict the risk of recurrence may provide a helpful tool for counselling and for planning an appropriate therapy. The best known nomograms were published by the Baylor College of Medicine, Houston and the Harvard Medical School, Boston. We investigated these nomograms derived in the U.S. when applied to German patients. Data from 1003 patients who underwent radical prostatectomy at the University-Hospital Hamburg were used for validation. Nomogram predictions of the probability of 2-year (Harvard nomogram) and 5-year (Kattan nomogram) freedom from PSA recurrence were compared with actual follow-up recurrence data using areas under the receiver-operating-characteristic curves (AUC). The recurrence-free survival after 2 and 5 years was 78% and 58%, respectively. The AUC of the Harvard nomogram predicting the 2-year probability of freedom from PSA recurrence was 0.80 vs. the Kattan nomogram 5-year prediction of 0.83. Thereby, the Kattan nomogram showed a significantly higher predictive accuracy (p=0.0274). For that reason, preoperative nomograms derived in the U.S. can be applied to German patients. However, we would recommend the utilization of the Kattan nomogram due to its higher predictive accuracy. abstract_id: PUBMED:17033209 How far is the preoperative Kattan nomogram applicable for the prediction of recurrence after prostatectomy in patients presenting with PSA levels of more than 20 ng/ml? A validation study. Objective: We present an external validation study investigating the applicability of the preoperative Kattan nomogram for predicting recurrence after prostatectomy in a population of patients with serum prostate-specific antigen (PSA) levels exceeding 20 ng/ml. Materials: In the evaluation of clinical parameters pooled from a total of 191 patients presenting with PSA levels ranging between 20.1 and 100 ng/ml, the PSA-free survival rate 60 months after surgery was calculated according to the Kattan nomograms. Subsequently, the results were statistically compared with the corresponding actual survival rates obtained from Kaplan-Meier analysis. For this purpose, the patients were assigned to one of four different risk groups according to predictions derived from the Kattan nomograms, enabling a direct comparison of expected (as predicted by the Kattan nomogram) versus actual survival of each patient investigated in our study. Results: Predicted PSA-free survival rates were determined to be as follows: 83% (low risk group); 66% (intermediate risk group); 39% (intermediate-high risk group), and 10% (high risk group), in comparison with the actual survival rates determined to be 63, 62, 40 and 21%, respectively. For PSA levels ranging between 20.1 and 30 ng/ml, 30.1 and 50 ng/ml, and 50.1 and 100 ng/ml, PSA-free survival rates were found to be 57, 37, and 27% (p=0.0017), respectively, during a 5-year post-prostatectomy follow-up.
Conclusions: The Kattan nomogram shows good statistical concordance with actual survival rates in the mean risk quadrants, but considerable differences were demonstrated concerning individuals with either a high or a low risk of cancer progression. abstract_id: PUBMED:27799510 Validation of the Kattan Nomogram for Prostate Cancer Recurrence After Radical Prostatectomy. Background: The Kattan postoperative radical prostatectomy (RP) nomogram is used to predict biochemical recurrence-free progression (BCRFP) after RP. However, external validation among contemporary patients using modern outcome definitions is limited. Methods: A total of 1,931 patients who underwent RP at Roswell Park Cancer Institute (RPCI) between 1993 and 2014 (median follow-up, 47 months; range, 0-244 months) were assessed for NCCN-defined biochemical failure (BF) and RPCI-defined treatment failure (TF). Actual rates of biochemical failure-free survival (BFS; defined as 1 - BF) and treatment failure-free survival (TFS; defined as 1 - TF) were compared with Kattan BCRFP nomogram predictions. Results: The Kattan BCRFP nomogram predictions at 5 and 10 years were predictive of BFS (area under the receiver operating characteristic curve [AUC], 0.772) and TFS (AUC, 0.774). The Kattan BCRFP nomogram tended to underestimate BFS and TFS compared with actual outcomes. The Kattan 5-year BCRFP predictions consistently overestimated actual 5-year BFS outcomes among subgroups of high- and intermediate-risk patients with at least 5-year outcomes. Conclusions: The Kattan BCRFP nomogram is a robust predictor of NCCN-defined BF in a large sample of patients with RP with substantial follow-up and modern, standardized failure definitions. abstract_id: PUBMED:34855228 Incorporating artificial intelligence in urology: Supervised machine learning algorithms demonstrate comparative advantage over nomograms in predicting biochemical recurrence after prostatectomy. Objective: After radical prostatectomy (RP), one-third of patients will experience biochemical recurrence (BCR), which is associated with subsequent metastasis and cancer-specific mortality. We employed machine learning (ML) algorithms to predict BCR after RP and compared them with traditional regression models and nomograms. Methods: Utilizing a prospective Uro-oncology registry, 18 clinicopathological parameters of 1130 consecutive patients who underwent RP (2009-2018) were recorded, yielding over 20,000 data points for analysis. The data set was split into a 70:30 ratio for training and validation. Three ML models, Naïve Bayes (NB), random forest (RF), and support vector machine (SVM), were studied and compared with traditional regression models and nomograms (Kattan, CAPSURE, Johns Hopkins [JHH]) to predict BCR at 1, 3, and 5 years. Results: Over a median follow-up of 70.0 months, 176 patients (15.6%) developed BCR, at a median time of 16.0 months (interquartile range [IQR]: 11.0-26.0). Multivariate analyses demonstrated the strongest associations of BCR with prostate-specific antigen (PSA) (p: 0.015), positive surgical margins (p < 0.001), extraprostatic extension (p: 0.002), seminal vesicle invasion (p: 0.004), and grade group (p < 0.001). The 3 ML models demonstrated good prediction of BCR at 1, 3, and 5 years, with areas under the curve (AUC) of NB at 0.894, 0.876, and 0.894, RF at 0.846, 0.875, and 0.888, and SVM at 0.835, 0.850, and 0.855, respectively.
All models demonstrated (1) robust accuracy (>0.82), (2) good calibration with minimal overfitting, (3) longitudinal consistency across the three time points, and (4) inter-model validity. The ML models were comparable to traditional regression analyses (AUC: 0.797, 0.848, and 0.862) and outperformed the three nomograms: Kattan (AUC: 0.815, 0.798, and 0.799), JHH (AUC: 0.820, 0.757, and 0.750), and CAPSURE (AUC: 0.706, 0.720, and 0.749) (p < 0.001). Conclusion: Supervised ML algorithms can deliver accurate performance and outperform nomograms in predicting BCR after RP. This may facilitate tailored care provision by identifying high-risk patients who will benefit from multimodal therapy. Answer: Yes, preoperative Kattan and Stephenson nomograms predicting biochemical recurrence after radical prostatectomy are applicable in the Chinese population. A study conducted on 408 Chinese patients who underwent laparoscopic or open radical resection of the prostate from 1995 to 2009 showed that the overall observed 5-year and 10-year biochemical recurrence-free survival rates were similar to the values predicted by the Kattan and Stephenson nomograms. The study achieved good concordance with both nomograms (Kattan: 5-year, 0.64; Stephenson: 5-year, 0.62; 10-year, 0.71), indicating that despite the fact that the Kattan nomograms were derived from a Western population, they have been validated to be useful in Chinese patients as well (PUBMED:23533351).
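The discrimination metric reported throughout these validation studies is the concordance index (c-index), which generalizes the AUC to censored time-to-event data. The sketch below shows one way Harrell's c-index can be computed; the follow-up times, event indicators, and risk scores are synthetic illustrations, not values from any cited cohort.

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index for censored time-to-event data.

    times: follow-up in months; events: 1 = recurrence observed, 0 = censored;
    risk_scores: model output where higher means higher predicted risk.
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable only if patient i recurred before patient j's
            # last observation (whether j recurred later or was censored).
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0  # model ranked the pair correctly
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / permissible

# Synthetic six-patient example: 1.0 is perfect discrimination, 0.5 is chance.
times = [12, 30, 45, 60, 24, 80]
events = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.4, 0.7, 0.2, 0.6, 0.1]
print(f"c-index = {c_index(times, events, scores):.2f}")
```

A c-index around 0.7, as in the cohorts above, means the model correctly orders roughly 70% of usable patient pairs by recurrence risk.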
Instruction: Intrascrotal lithiasis: an infrequent finding? Abstracts: abstract_id: PUBMED:10380322 Intrascrotal lithiasis: an infrequent finding? Objective: To report on 10 additional cases of intrascrotal calculi and briefly review the literature and pathogenesis of this benign lesion. Methods: Ten patients who had consulted for diverse testicular conditions were evaluated by ultrasound using a 7.5 MHz probe. Results: All patients were found to have a hydrocele of varying volume with a mobile hyperechoic focus that produced acoustic shadows. Conclusions: The ultrasound finding of intrascrotal calculi is becoming increasingly frequent. In our view, this is due to the fact that more sonographic studies are currently performed. The ability to diagnose this condition obviates the need for subsequent explorations or surgery. abstract_id: PUBMED:34049174 Post-pregnancy recurrent biliary colic with intraoperative diagnosis of limy bile syndrome. Introduction: Limy bile syndrome (LBS) is an unusual condition in which the gallbladder and/or bile ducts are filled with paste-like radiopaque material with a high calcium carbonate content. It is rarely associated with PTH disorders and hypercalcemia. Presentation Of Case: A 35-year-old woman presented with epigastric and right hypochondrium pain of a few hours' duration. Similar attacks had occurred in the preceding months, soon after a pregnancy with vaginal delivery. Laboratory findings were not significant. Abdominal ultrasound highlighted microlithiasis of the gallbladder without complications. Considering the recurrent biliary attacks, laparoscopic cholecystectomy was performed, with an intraoperative diagnosis of LBS. A subsequent endocrinological screening highlighted normocalcemic hyperparathyroidism associated with vitamin D deficiency, likely related to the recent pregnancy and not to LBS. Discussion: LBS is a rare condition of unclear etiology, frequently associated with cholelithiasis, with which it shares clinical presentation and potential complications. Diagnosis of LBS is based on abdominal X-ray/computed tomography scan, or it may be an intraoperative finding. The gold standard treatment is laparoscopic cholecystectomy. Pregnancy, with its related cholestatic phenotype, could facilitate the manifestation of LBS. An endocrinological screening should be performed to rule out a concomitant calcium metabolism disorder. Conclusion: Knowledge of this rare condition could help general surgeons handle it properly. abstract_id: PUBMED:1771943 Gallbladder carcinoma as an incidental finding. Between March 1982 and December 1990, 903 patients underwent elective cholecystectomy. In 40 patients, cholecystectomy was performed for gallbladder carcinoma. Fifteen malignancies (1.7%) were found incidentally. Preoperatively, no anamnestic or diagnostic tumor signs were found in this group of patients. An en-bloc resection of the gallbladder with resection of the bordering liver segments was performed when gallbladder carcinoma was diagnosed intraoperatively. When the diagnosis was established by postoperative histology, a relaparotomy with liver resection and lymph node dissection was done, except in one case (T-1a stage). Histology showed adenocarcinoma in 11 out of 15 cases. No significant difference in the course of the disease was observed in patients with gallbladder carcinoma of different types. The hospital mortality rate was 0% after curative and palliative surgical treatment of gallbladder carcinoma.
Patients with T-1 and T-2 stage disease have survived without tumor recurrence up to now. The median survival time after surgery for gallbladder carcinoma was 17 months in T-3 stage and 8 months in T-4 stage. The morbidity rate after elective cholecystectomy is low and hospital mortality is 0%. Given the short survival times in advanced stages of gallbladder carcinoma, prompt cholecystectomy in symptomatic gallbladder lithiasis or chronic cholecystitis is advocated. abstract_id: PUBMED:37303394 Juxta-Vesical Urinary Stones: An Extremely Rare Finding Secondary to Bladder Rupture and Squamous Cell Carcinoma in a Patient on Clean Intermittent Self-Catheterization. We present a rare case of juxta-vesical urinary stones in the lesser pelvis, incidentally diagnosed during the investigation of a urinary tract infection (UTI). The patient (male) had a history of neurogenic bladder and performed self-catheterizations. After the initial workup, the patient was admitted with a diagnosis of complicated UTI. A CT scan of the abdomen and pelvis depicted multiple bladder stones, some calculi lying juxta- and retro-vesically, an abscess cavity, and diffuse thickening of the bladder. The abscess, which also contained calculi, was adherent to the bladder wall. We presumed that the patient self-inflicted a bladder rupture when performing clean intermittent self-catheterization (CISC) and that stones dislodged into the pelvis due to his poor bladder sensation. Flexible cystoscopy was attempted but was not completed due to stone obstruction and poor bladder compliance. The patient underwent open surgical exploration. Several calculi were removed, the abscess was drained, and bladder wall biopsies were taken. Pathology results revealed invasive squamous bladder carcinoma; the patient was listed for radical cystectomy. We aim to familiarize the clinician with rare complications that should be taken into consideration when treating patients on CISC and to present an extremely rare clinical finding of juxta-vesical lithiasis. abstract_id: PUBMED:24570035 Thoracolithiasis: a unique autopsy finding. N/A abstract_id: PUBMED:19907938 Enterolithiasis: An uncommon finding in abdominal tuberculosis. Enterolithiasis, the formation of intestinal calculi due to stasis, is known to occur with abdominal tuberculosis. It has uncommonly been reported from India and only rarely in older children. abstract_id: PUBMED:34217588 The prognostic value of testicular microlithiasis as an incidental finding for the risk of testicular malignancy in children and the adult population: A systematic review. On behalf of the EAU pediatric urology guidelines panel. Introduction: The exact correlation of testicular microlithiasis (TM) with benign and malignant conditions remains unknown, especially in the paediatric population. The potential association of TM with testicular malignancy in adulthood has led to controversy regarding management and follow-up. Objective: To determine the prognostic importance of TM in children in relation to the risk of testicular malignancy or infertility and to compare the differences between the paediatric and adult populations. Study Design: We performed a literature review of the Medline, Embase and Cochrane controlled trials databases until November 2020 according to the Preferred Reporting Items of Systematic Reviews and Meta-Analyses (PRISMA) Statement. Twenty-six publications were included in the analysis.
Results: During the follow-up of 595 children with TM, only one patient developed a testicular malignancy (during puberty). In the other 594, no testicular malignancy was found, even in the presence of risk factors. In the adult population, an increased risk for testicular malignancy in the presence of TM was found in patients with a history of cryptorchidism (6% vs 0%), testicular malignancy (22% vs 2%) or sub/infertility (11-23% vs 1.7%) compared to TM-free patients. The difference between the paediatric and adult populations might be explained by the short duration of follow-up, varying between six months and three years. With an average age at inclusion of 10 years, and because testicular malignancies are expected to develop from puberty onwards, malignancies might not yet have developed. Conclusion: TM is a common incidental finding that does not seem to be associated with testicular malignancy during childhood, but in the presence of risk factors it is associated with testicular malignancy in the adult population. Routine monthly self-examination of the testes is recommended in children with contributing risk factors from puberty onwards. When TM is still present during the transition to adulthood, a more intensive follow-up could be considered. abstract_id: PUBMED:21546312 Implications of incidental finding of testicular microlithiasis in paediatric patients. Objective: To improve understanding of the implications of testicular microlithiasis (TM) in paediatric patients diagnosed incidentally on scrotal ultrasonography (US). Patients And Methods: 31 boys aged 4-14 years diagnosed with TM based on US between February 2000 and September 2007 were retrospectively reviewed. Demographic data, indications for US, associated inguino-scrotal pathologies and follow-up data were collated. Results: A total of 59 testes were evaluated. Fifty-four had TM at US. The most common indication for US was scrotal swelling (n = 17). Twenty patients (65%) had at least one associated inguino-scrotal pathology related to patent processus vaginalis, with cryptorchidism being the most frequently observed (39%). On follow-up (mean 39.6 months), 6 patients showed a decrease or resolution of TM, while 2 showed an increase in TM density. No testicular malignancy was observed during follow-up. Conclusions: TM is commonly diagnosed incidentally on US in paediatric patients. A literature review revealed few case reports on its association with testicular malignancy and a prevalence of 4.2% among asymptomatic boys. The spontaneous resolution of TM supports degeneration of seminiferous tubules as the possible cause and further questions its malignant implication. An appropriate surveillance scheme would require the involvement of a well-informed patient and parent with good compliance on testicular self-examination. abstract_id: PUBMED:13621930 A peculiar fluorographic finding: asymptomatic renal calculosis in a child. N/A abstract_id: PUBMED:10406265 Pseudo Sister Mary Joseph's nodule. The Sister Mary Joseph's nodule is a significant finding in the physical examination. It is sometimes the only indication of an intra-abdominal metastatic malignancy. We report a patient who presented with an umbilical nodule that was discovered to be an omphalith. A review of the literature discusses the Sister Mary Joseph's nodule and this unusual finding. Answer: Intrascrotal lithiasis, while considered a rare condition, is becoming an increasingly frequent finding due to the higher number of sonographic studies being performed.
A study reported on 10 cases of intrascrotal calculi, where all patients were found to have a hydrocele of varying volume with a mobile hyperechoic focus that produced acoustic shadows on ultrasound using a 7.5 MHz probe. The authors concluded that the ultrasound finding of intrascrotal calculi is becoming more common, and the ability to diagnose this condition with ultrasound can eliminate the need for further explorations or surgery (PUBMED:10380322).
Instruction: Advancing health disparities research: can we afford to ignore measurement issues? Abstracts: abstract_id: PUBMED:14583684 Advancing health disparities research: can we afford to ignore measurement issues? Background: Research on racial and ethnic health disparities in the United States requires that self-report measures, developed primarily in mainstream samples, are appropriate when applied in diverse groups. To compare groups, mean scores must reflect true scores and have minimal bias, assumptions that have not been tested for many self-report measures used in this research. Objective: To identify conceptual and psychometric issues that need to be addressed to assure the quality of self-report measures being used in health disparities research. Methods: We present 2 broad conceptual frameworks for health disparities research and describe the main research questions and measurement issues for 4 key concepts hypothesized as potential mechanisms of health disparities: socioeconomic status, discrimination, acculturation, and quality of care. This article is based on a small conference convened by 6 Resource Centers for Minority Aging Research (RCMAR) measurement cores. We integrate written materials prepared for the conference by quantitative and qualitative measurement specialists and cross-cultural researchers, conference discussions, and current literature. Results: Problems in the quality of the conceptualizations and measures were found for all 4 concepts, and little is known about the extent to which measures of these concepts can be interpreted similarly across diverse groups. Many problems also apply to other concepts relevant to health disparities. We propose an agenda for accomplishing this challenging measurement research. Conclusions: The current national commitment to reduce health disparities may be compromised without more research on measurement quality. Integrated, systematic efforts are needed to move this work forward, including collaborative efforts and special initiatives. abstract_id: PUBMED:21054368 Data and measurement issues in the analysis of health disparities. Objective: To describe measurement challenges and strategies in identifying and analyzing health disparities and inequities. Methods: We discuss the limitations of existing data sources for measuring health disparities and inequities, describe current strategies to address those limitations, and explore the potential of emerging strategies. Principal Findings: Larger national sample sizes are necessary to identify disparities for major population subgroups. Collecting self-reported race and granular ethnicity data may reduce some measurement errors, but it raises other methodological questions. The assessment of health inequities presents particular challenges, requiring analysis of the interactive effects of multiple determinants of health. Indirect estimation and modeling methods are likely to be important tools for estimating health disparities and inequities for the foreseeable future. Conclusions: Interdisciplinary training and collaborative research models will be essential for future disparities research. Evaluation of evolving methodologies for assessing health disparities should be a priority for health services researchers in the next decade. abstract_id: PUBMED:25566524 Advancing research on racial-ethnic health disparities: improving measurement equivalence in studies with diverse samples. 
To conduct meaningful, epidemiologic research on racial-ethnic health disparities, racial-ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial-ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial-ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race-ethnicity. Those correlates render the samples, methods, and measures incomparable across racial-ethnic groups, and diminish the ability to attribute health differences discovered to race-ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial-ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. abstract_id: PUBMED:32791524 Introduction to the Special Issue: Addressing Health Disparities in Pediatric Psychology. This introduction to the special issue on Addressing Health Disparities in Pediatric Psychology provides context for why this special issue is needed, reviews key findings of the accepted articles, and discusses future directions for advancing the field. This special issue, one of three on this topic area that have been put forth in the history of this journal, comes at a critical point in our world. This is a time when the COVID-19 pandemic is systematically infecting Black, Indigenous, and People of Color and when there has been increased attention to systemic racism and intersecting violence inherent in multiple systems, including the justice, health, and educational systems. Using the Kilbourne et al. (2016) framework, this special issue focuses on Phase 2 and Phase 3 research. Rather than only identifying health disparities (Phase 1), this issue focuses on understanding mechanisms and translating such understanding into interventions and policy changes. The accepted articles span a wide gamut from obesity to autism to rural populations. Furthermore, the articles provide methods for advancing the field beyond simply noting that systematic differences exist toward strategies to address these inequities. We conclude this introduction by discussing next steps for future research, with hopes that it inspires the next generation to study issues of disparities and inequity in deeper, more meaningful, and impactful ways. abstract_id: PUBMED:24561776 Challenges of health measurement in studies of health disparities. Health disparities are increasingly studied in and across a growing array of societies. While novel contexts and comparisons are a promising development, this commentary highlights four challenges to finding appropriate and adequate health measures when making comparisons across groups within a society or across distinctive societies. These challenges affect the accuracy with which we characterize the degree of inequality, limiting possibilities for effectively targeting resources to improve health and reduce disparities. First, comparisons may be challenged by different distributions of disease and, second, by variation in the availability and quality of vital events and census data often used to measure health.
Third, the comparability of self-reported information about specific health conditions may vary across social groups or societies because of diagnosis bias or diagnosis avoidance. Fourth, self-reported overall health measures or measures of specific symptoms may not be comparable across groups if they use different reference groups or interpret questions or concepts differently. We explain specific issues that make up each type of challenge and show how they may lead to underestimates or inflation of estimated health disparities. We also discuss approaches that have been used to address them in prior research, note where further innovation is needed to solve lingering problems, and make recommendations for improving future research. Many of our examples are drawn from South Africa or the United States, societies characterized by substantial socioeconomic inequality across ethnic groups and wide disparities in many health outcomes, but the issues explored throughout apply to a wide variety of contexts and inquiries. abstract_id: PUBMED:16179000 Measurement issues in health disparities research. Background: Racial and ethnic disparities in health and health care have been documented; the elimination of such disparities is currently part of a national agenda. In order to meet this national objective, it is necessary that measures identify accurately the true prevalence of the construct of interest across diverse groups. Measurement error might lead to biased results, e.g., estimates of prevalence, magnitude of risks, and differences in mean scores. Addressing measurement issues in the assessment of health status may contribute to a better understanding of health issues in cross-cultural research. Objective: To provide a brief overview of issues regarding measurement in diverse populations. Findings: Approaches used to assess the magnitude and nature of bias in measures when applied to diverse groups include qualitative analyses, classic psychometric studies, as well as more modern psychometric methods. These approaches should be applied sequentially, and/or iteratively during the development of measures. Conclusions: Investigators performing comparative studies face the challenge of addressing measurement equivalence, crucial for obtaining accurate results in cross-cultural comparisons. abstract_id: PUBMED:35599536 Perspectives From the National Institutes of Health on Multidimensional Mental Health Disparities Research: A Framework for Advancing the Field. Racial, ethnic, and other mental health disparities have been documented for several decades. However, progress in reducing or eliminating these disparities has been slow. In this review, the authors argue that understanding and addressing mental health disparities requires using a multidimensional lens that encompasses a wide array of social determinants of health at individual, interpersonal, organizational, community, and societal levels. However, much of the current research on mental health disparities, including research funded by the National Institutes of Health, is characterized by a narrower focus on a small number of determinants. The authors offer a research framework, adapted from the National Institute on Minority Health and Health Disparities Research Framework, that provides examples of determinants that may cause or sustain mental health disparities and that can serve as intervention targets to reduce those disparities. 
They also discuss different types of mental health disparities research to highlight the need for more research testing and implementing interventions that directly modify social determinants of health and promote mental health equity. abstract_id: PUBMED:29505363 Promoting Health Equity And Eliminating Disparities Through Performance Measurement And Payment. Current approaches to health care quality have failed to reduce health care disparities. Despite dramatic increases in the use of quality measurement and associated payment policies, there has been no notable implementation of measurement strategies to reduce health disparities. The National Quality Forum developed a road map to demonstrate how measurement and associated policies can contribute to eliminating disparities and promote health equity. Specifically, the road map presents a four-part strategy whose components are identifying and prioritizing areas to reduce health disparities, implementing evidence-based interventions to reduce disparities, investing in the development and use of health equity performance measures, and incentivizing the reduction of health disparities and achievement of health equity. To demonstrate how the road map can be applied, we present an example of how measurement and value-based payment can be used to reduce racial disparities in hypertension among African Americans. abstract_id: PUBMED:34459747 Advancing Intersectional Discrimination Measures for Health Disparities Research: Protocol for a Bilingual Mixed Methods Measurement Study. Background: Guided by intersectionality frameworks, researchers have documented health disparities at the intersection of multiple axes of social status and position, particularly race and ethnicity, gender, and sexual orientation. To advance from identifying to intervening in such intersectional health disparities, studies that examine the underlying mechanisms are required. Although much research demonstrates the negative health impacts of perceived discrimination along single axes, quantitative approaches to assessing the role of discrimination in generating intersectional health disparities remain in their infancy. Members of our team recently introduced the Intersectional Discrimination Index (InDI) to address this gap. The InDI comprises three measures of enacted (day-to-day and major) and anticipated discrimination. These attribution-free measures ask about experiences of mistreatment because of who you are. These measures show promise for intersectional health disparities research but require further validation across intersectional groups and languages. In addition, the proposal to remove attributions is controversial, and no direct comparison has ever been conducted. Objective: This study aims to cognitively and psychometrically evaluate the InDI in English and Spanish and determine whether attributions should be included. Methods: The study will draw on a preliminary validation data set and three original sequentially collected sources of data: qualitative cognitive interviews in English and Spanish with a sample purposively recruited across intersecting social status and position (gender, sexual orientation, race and ethnicity, socioeconomic status, age, and nativity); a Spanish quantitative survey (n=500; 250/500, 50% sexual and gender minorities); and an English quantitative survey (n=3000), with quota sampling by race and ethnicity (Black, Latino/a/x, and White), sexual or gender minority status, and gender. 
Results: The study was funded by the National Institute on Minority Health and Health Disparities in May 2021, and data collection began in July 2021. Conclusions: The key deliverables of the study will be bilingual measures of anticipated, day-to-day, and major discrimination validated for multiple health disparity populations using qualitative, quantitative, and mixed methods. International Registered Report Identifier (IRRID): PRR1-10.2196/30987. abstract_id: PUBMED:35316118 Evidence-Based Medicine in Otolaryngology, Part XIII: Health Disparities Research and Advancing Health Equity. Objective: To provide a contemporary resource for clinicians and researchers on health equity research and implementation strategies to mitigate or eliminate disparities in health care. Data Sources: Published studies and literature on health disparities, applicable research methodologies, and social determinants of health in otolaryngology. Review Methods: Literature through October 2021 was reviewed, including consensus statements, guidelines, and scientific publications related to health care equity research. This research focus provides insights into existing disparities, why they occur, and the outcomes of interventions designed to resolve them. Progress toward equity requires intentionality in implementing quality improvement initiatives, tracking data, and fostering culturally competent care. Priority areas include improving access, removing barriers to care, and ensuring appropriate and effective treatment. Although research into health care disparities has advanced significantly in recent years, persistent knowledge gaps remain. Applying the lens of equity to data science can promote evidence-based practices and optimal strategies to reduce health inequities. Conclusions: Health disparities research has a critical role in advancing equity in otolaryngology-head and neck surgery. The phases of disparities research include detection, understanding, and reduction of disparities. A multilevel approach is necessary for understanding disparities, and health equity extensions can improve the rigor of evidence-based data synthesis. Finally, applying an equity lens is essential when designing and evaluating health care interventions, to minimize bias. Implications For Practice: Understanding the data and practices related to disparities research may help promote an evidence-based approach to care of individual patients and populations, with the potential to eventually surmount the negative effects of health care disparities. Answer: No, we cannot afford to ignore measurement issues when advancing health disparities research. Measurement issues are critical to the quality and validity of research in this field. If self-report measures developed primarily in mainstream samples are inappropriate or biased when applied to diverse groups, this can lead to inaccurate comparisons and conclusions about health disparities (PUBMED:14583684). Problems in the quality of conceptualizations and measures have been found across key concepts hypothesized as potential mechanisms of health disparities, such as socioeconomic status, discrimination, acculturation, and quality of care (PUBMED:14583684). Moreover, measurement challenges can limit the ability to identify and analyze health disparities and inequities effectively.
Issues such as larger national sample sizes, collecting self-reported race and granular ethnicity data, and assessing the interactive effects of multiple determinants of health are among the challenges that need to be addressed (PUBMED:21054368). Without measurement equivalence, studies of racial-ethnic health disparities are confounded by unmeasured, uncontrolled correlates of race-ethnicity, which can render the samples, methods, and measures incomparable across racial-ethnic groups (PUBMED:25566524). Additionally, health disparities research must consider the challenges of finding appropriate and adequate health measures when making comparisons across groups within a society or across distinctive societies. Issues such as different distributions of disease, variation in data quality, diagnosis bias or avoidance, and the comparability of self-reported health information can lead to underestimates or inflation of estimated health disparities (PUBMED:24561776). Investigators face the challenge of addressing measurement equivalence, which is crucial for obtaining accurate results in cross-cultural comparisons (PUBMED:16179000). Therefore, addressing measurement issues is essential for the progress in reducing or eliminating health disparities and promoting health equity (PUBMED:29505363). Interdisciplinary training, collaborative research models, and evaluation of evolving methodologies are necessary for future disparities research (PUBMED:21054368).
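The measurement equivalence these studies call for is commonly probed with psychometric checks such as tests for differential item functioning (DIF). The sketch below illustrates one standard logistic-regression DIF check on fully simulated data; it is a minimal illustration of the general technique, not an analysis from any cited study, and the variable names and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

# Simulated item-level data: "group" stands in for any two populations
# being compared, and "ability" is a proxy for the underlying trait
# (e.g., the total scale score).
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
ability = rng.normal(0, 1, n)
# Simulate an item that is harder for group 1 at the same trait level (DIF).
logit = 0.8 * ability - 0.6 * group
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Uniform-DIF check: regress the item response on the trait plus group.
X = sm.add_constant(np.column_stack([ability, group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params, fit.pvalues)
```

A significant group coefficient, after conditioning on the underlying trait, flags an item that behaves differently across groups and would need revision before group means are compared.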
Instruction: Assessment of Autism Symptoms During the Neonatal Period: Is There Early Evidence of Autism Risk? Abstracts: abstract_id: PUBMED:26114457 Assessment of Autism Symptoms During the Neonatal Period: Is There Early Evidence of Autism Risk? Objective: To define neonatal social characteristics related to autism risk. Method: Sixty-two preterm infants underwent neonatal neurobehavioral testing. At age 2 yr, participants were assessed with the Modified Checklist for Autism in Toddlers and Bayley Scales of Infant and Toddler Development, 3rd edition. Results: Positive autism screening was associated with absence of gaze aversion, χ²=5.90, p=.01, odds ratio=5.05, and absence of endpoint nystagmus, χ²=4.78, p=.02, odds ratio=8.47. Demonstrating gaze aversion was related to better language outcomes, t(55)=-3.07, p≤.003. Displaying endpoint nystagmus was related to better language outcomes, t(61)=-3.06, p=.003, cognitive outcomes, t(63)=-5.04, p<.001, and motor outcomes, t(62)=-2.82, p=.006. Conclusion: Atypical social interactions were not observed among infants who later screened positive for autism. Instead, the presence of gaze aversion and endpoint nystagmus was related to better developmental outcomes. Understanding early behaviors associated with autism may enable early identification and lead to timely therapy activation to improve function. abstract_id: PUBMED:21746727 Perinatal and neonatal risk factors for autism: a comprehensive meta-analysis. Background: The etiology of autism is unknown, although perinatal and neonatal exposures have been the focus of epidemiologic research for over 40 years. Objective: To provide the first review and meta-analysis of the association between perinatal and neonatal factors and autism risk. Methods: PubMed, Embase, and PsycInfo databases were searched for studies that examined the association between perinatal and neonatal factors and autism through March 2007. Forty studies were eligible for the meta-analysis. For each exposure, a summary effect estimate was calculated using a random-effects model. Heterogeneity in effect estimates across studies was examined, and, if found, a meta-regression was conducted to identify measured methodological factors that could explain between-study variability. Results: Over 60 perinatal and neonatal factors were examined. Factors associated with autism risk in the meta-analysis were abnormal presentation, umbilical-cord complications, fetal distress, birth injury or trauma, multiple birth, maternal hemorrhage, summer birth, low birth weight, small for gestational age, congenital malformation, low 5-minute Apgar score, feeding difficulties, meconium aspiration, neonatal anemia, ABO or Rh incompatibility, and hyperbilirubinemia. Factors not associated with autism risk included anesthesia, assisted vaginal delivery, postterm birth, high birth weight, and head circumference. Conclusions: There is insufficient evidence to implicate any 1 perinatal or neonatal factor in autism etiology, although there is some evidence to suggest that exposure to a broad class of conditions reflecting general compromises to perinatal and neonatal health may increase the risk. Methodological variations were likely sources of heterogeneity of risk factor effects across studies. abstract_id: PUBMED:23816633 Prenatal, perinatal and neonatal risk factors of Autism Spectrum Disorder: a comprehensive epidemiological assessment from India.
Incidence of Autism Spectrum Disorder (ASD) is increasing across the globe, and no data are available from India regarding the risk factors of ASD. In this regard, a questionnaire-based epidemiological assessment was carried out on prenatal, perinatal and neonatal risk factors of ASD across 8 cities in India. A retrospective cohort of 942 children was enrolled for the study. 471 children with ASD, under the age of 10, were analyzed for pre-, peri-, and neonatal factors and were compared with the observations from an equal number of controls. Quality control of the questionnaire and of data collection was performed rigorously, and the observations were analyzed statistically. A total of 25 factors were evaluated by unadjusted and adjusted analysis in this study. Among the prenatal factors considered, advanced maternal age, fetal distress and gestational respiratory infections were found to be associated with ASD, with an odds ratio of 1.8. Evaluation of perinatal and neonatal risk factors showed labor complications, pre-term birth, neonatal jaundice, delayed birth cry and birth asphyxia to be associated with ASD, with odds ratios greater than 1.5. This important study, the first of its kind in the Indian population, gives a firsthand account of the relation of pre-, peri- and neonatal risk factors to ASD in an ethnically and socially diverse country like India, the impact of which was unknown earlier. This advocates additional focused investigations on the physiological and genetic changes contributed by these risk-factor-inducing environments. abstract_id: PUBMED:35800274 Risk factors for neonatal hyperbilirubinemia: a systematic review and meta-analysis. Background: Hyperbilirubinemia is the most common cause of neonatal hospitalization and, although it generally has a good prognosis, a significant percentage of neonatal patients maintain a high bilirubin level, which can lead to severe complications, including lifelong disability such as growth retardation, encephalopathy, autism and hearing impairment. The study of risk factors for neonatal hyperbilirubinemia has been controversial. Therefore, we evaluated the risk factors of neonatal hyperbilirubinemia using a meta-analysis. Methods: Relevant English and Chinese studies that discussed risk factors for neonatal hyperbilirubinemia were retrieved from PubMed, EMBASE, Medline, Central, the China National Knowledge Infrastructure (CNKI), Wanfang and the China Science Digital Library (CSDL). Included studies enrolled newborns, included a control group, and examined the relationship between exposure factors and neonatal hyperbilirubinemia. The combined effect size was expressed as the odds ratio (OR) with 95% confidence interval (CI). The Chi-square test was used to assess heterogeneity across studies; if heterogeneity existed, subgroup analyses were used to explore its source and the random-effects model was selected for the combined analysis. The fixed-effects model was chosen for the combined analysis if there was no heterogeneity. Publication bias was assessed using Egger's test and funnel plot.
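As a concrete illustration of the inverse-variance pooling and heterogeneity testing described in these Methods, the following minimal Python sketch computes a fixed-effects pooled odds ratio, Cochran's Q, and the I² statistic. The per-study counts are invented placeholders rather than data from the review, and when Q indicates heterogeneity a random-effects step (e.g., DerSimonian-Laird) would replace the fixed-effects estimate.

import math

# hypothetical per-study 2x2 counts:
# (exposed cases, exposed controls, unexposed cases, unexposed controls)
studies = [(40, 60, 30, 70), (55, 45, 35, 65), (25, 75, 20, 80)]

log_ors, weights = [], []
for a, b, c, d in studies:
    log_ors.append(math.log((a * d) / (b * c)))  # log odds ratio per study
    weights.append(1 / (1/a + 1/b + 1/c + 1/d))  # inverse Woolf variance

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))  # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 in percent

print(f"pooled OR {math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - 1.96*se):.2f}-{math.exp(pooled + 1.96*se):.2f}, "
      f"Q {q:.2f} on {df} df, I^2 {i2:.0f}%")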
Results: Risk factors for neonatal hyperbilirubinemia were exclusive breastfeeding (BF: OR = 1.74, 95% CI: 1.42, 2.12, Z = 5.43, P<0.00001); glucose-6-phosphate dehydrogenase deficiency (G6PD: OR = 1.62, 95% CI: 1.44, 1.81, Z = 8.39, P<0.00001); maternal-fetal ABO blood group incompatibility (OR = 1.64, 95% CI: 1.42, 1.89, Z = 6.75, P<0.00001); and preterm birth (PTB: OR = 1.31, 95% CI: 1.17, 1.47, Z = 4.60, P<0.00001); there was no heterogeneity or publication bias among the studies (BF: χ² = 5.34, P = 0.25, I² = 25%; G6PD: χ² = 4.40, P = 0.49, I² = 0%; ABO: χ² = 1.91, P = 0.75, I² = 0%; PTB: χ² = 0.81, P = 0.67, I² = 0%). Conclusions: Exclusive breastfeeding, G6PD deficiency, ABO incompatibility and premature birth were confirmed as risk factors for neonatal hyperbilirubinemia. Pregnant women with risk factors should be monitored more closely and clinical intervention should be given in a timely manner. abstract_id: PUBMED:19000294 Neonatal jaundice: a risk factor for infantile autism? In a previous study, we found that infants transferred to a neonatal ward after delivery had an almost twofold increased risk of being diagnosed with infantile autism later in childhood, despite extensive control for obstetric risk factors. We therefore decided to investigate other reasons for transfer to a neonatal ward, in particular hyperbilirubinaemia and neurological abnormalities. We conducted a population-based matched case-control study of 473 children with autism and 473 matched controls born from 1990 to 1999 in Denmark. Cases were children reported with a diagnosis of infantile autism in the Danish Psychiatric Central Register. Conditional logistic regression was used to calculate odds ratios (OR) and 95% confidence intervals (CI), and likelihood ratio tests were used to test for effect modification. We found an almost fourfold risk for infantile autism in infants who had hyperbilirubinaemia after birth (OR 3.7 [95% CI 1.3, 10.5]). In stratified analysis, the association appeared limited to term infants (≥37 weeks gestation). A strong association was also observed between abnormal neurological signs after birth and infantile autism, especially hypertonicity (OR 6.7 [95% CI 1.5, 29.7]). No associations were found between infantile autism and low Apgar scores, acidosis or hypoglycaemia. Our findings suggest that hyperbilirubinaemia and neurological abnormalities in the neonatal period are important factors to consider when studying causes of infantile autism. abstract_id: PUBMED:33526883 Neonatal jaundice and autism spectrum disorder: a systematic review and meta-analysis. Background: Two meta-analyses concluded that jaundice was associated with an increased risk of autism. We hypothesize that these findings were due to methodological limitations of the studies included. Neonatal jaundice affects many infants, and risks of later morbidity may prompt physicians towards more aggressive treatment. Methods: To conduct a systematic literature review and a meta-analysis of the association between neonatal jaundice and autism, with particular attention given to low risk of bias studies. Pubmed, Scopus, Embase, Cochrane, and Google Scholar were searched for publications until February 2019. Data were extracted by use of pre-piloted structured sheets. Low risk of bias studies were identified through predefined criteria. Results: A total of 32 studies met the inclusion criteria.
The meta-analysis of six low risk of bias studies showed no association between neonatal jaundice and autism (cohort studies: risk ratio 1.09, 95% CI 0.99-1.20; case-control studies: odds ratio 1.29, 95% CI 0.95-1.76). Funnel plot of all studies suggested a high risk of publication bias. Conclusions: We found a high risk of publication bias, selection bias, and potential confounding in all studies. Based on the low risk of bias studies, there was no convincing evidence to support an association between neonatal jaundice and autism. Impact: Meta-analysis of data from six low risk of bias studies indicated no association between neonatal jaundice and autism spectrum disorder. Previous studies show inconsistent results, which may be explained by unadjusted confounding and selection bias. Funnel plot suggested high risk of publication bias when including all studies. There is no evidence to suggest jaundice should be treated more aggressively to prevent autism. abstract_id: PUBMED:29178513 Relationship Between Neonatal Vitamin D at Birth and Risk of Autism Spectrum Disorders: the NBSIB Study. Previous studies suggested that lower vitamin D might be a risk factor for autism spectrum disorders (ASDs). The aim of this study was to estimate the prevalence of ASDs in 3-year-old Chinese children and to examine the association between neonatal vitamin D status and risk of ASDs. We conducted a study of live-born infants who had taken part in expanded newborn screening (NBS), with outpatient follow-up when the children were 3 years old. The children were confirmed as having ASDs at outpatient follow-up using the Autism Diagnostic Interview-Revised and Diagnostic and Statistical Manual of Mental Disorders (DSM)-5 criteria. Intellectual disability (ID) status was defined by the intelligence quotient (IQ < 80) for all the participants. The study used a 1:4 case-control design. The concentration of 25-hydroxyvitamin D3 [25(OH)D3] in children with ASD and controls was assessed from neonatal dried blood samples. A total of 310 children were diagnosed as having ASDs; thus, the prevalence was 1.11% (95% CI, 0.99% to 1.23%). The concentration of 25(OH)D3 in 310 children with ASD and 1240 controls was assessed. The median 25(OH)D3 level was significantly lower in children with ASD as compared to controls (p < 0.0001). Compared with the fourth quartile, the relative risk (RR) of ASDs was significantly increased for neonates in each of the three lower quartiles of the distribution of 25(OH)D3, with the risk of ASDs increased by 260% (RR for lowest quartile: 3.6; 95% CI, 1.8 to 7.2; p < 0.001), 150% (RR for second quartile: 2.5; 95% CI, 1.4 to 3.5; p = 0.024), and 90% (RR for third quartile: 1.9; 95% CI, 1.1 to 3.3; p = 0.08), respectively. Furthermore, the nonlinear nature of the ID-risk relationship was more prominent when the data were assessed in deciles. This model predicted the lowest relative risk of ID at the 72nd percentile (corresponding to 48.1 nmol/L of 25(OH)D3). Neonatal vitamin D status was significantly associated with the risk of ASDs and intellectual disability. The nature of those relationships was nonlinear. © 2017 American Society for Bone and Mineral Research. abstract_id: PUBMED:34344142 Risk of autism spectrum disorder in children with a history of hospitalization for neonatal jaundice. Background: Limited research has focused explicitly on the association between neonatal jaundice and autism spectrum disorder (ASD), and inconclusive evidence exists in the literature within this framework.
This study aimed specifically to investigate whether neonatal jaundice is a potential risk factor for ASD and whether there is a connection between the types of neonatal jaundice and the severity of ASD. Methods: This study involved 119 children with ASD [90 males (75.6%), 29 females (24.4%), mean age: 45.39 ± 11.29 months] and 133 healthy controls [100 males (75.2%), 33 females (24.8%), mean age: 46.92 ± 11.42 months]. Psychiatric disorders were diagnosed through the Diagnostic and Statistical Manual of Mental Disorders criteria. The Childhood Autism Rating Scale (CARS) was used for the screening and diagnosis of autism. A specially prepared personal information sheet was employed to investigate sociodemographic characteristics and birth and clinical histories. Results: The rates of a history of jaundice and of pathological jaundice requiring hospitalization and phototherapy were significantly higher in the ASD group compared to the controls. The CARS total score and the mean scores of nearly all items were statistically higher in children with a history of pathological jaundice than in those with a history of physiological jaundice. Discussion: Neonatal jaundice, depending on its severity, seems to be one of the possible biological factors associated with the subsequent development and severity of ASD. Establishing a causal relationship between neonatal jaundice and ASD through more comprehensive studies may contribute to alleviating the severity of ASD for individuals at risk. abstract_id: PUBMED:34225371 The Effect of Neonatal Sepsis on Risk of Autism Diagnosis. Objective: The study aimed to examine the association between neonatal sepsis and autism risk among children and whether the risk varied with the timing of exposure, child's sex, and race/ethnicity. Study Design: We conducted a retrospective cohort study using electronic health records (EHR) extracted from the Kaiser Permanente Southern California Health Care System. Mother-child dyads were constructed by linking records of children born to member mothers and continuing to receive care through the system during the follow-up period with those of their biological mothers (n = 469,789). Clinical health records were used to define neonatal sepsis. Diagnosis of autism was made by medical specialists. Potential confounders included maternal sociodemographic factors, obstetrical history, child's age, sex, race/ethnicity, and maternal and child medical history. Incidence rates and adjusted hazard ratios (aHR) were used to estimate the associations. Results: Compared with children without the diagnosis of autism, children with the condition were more likely to be of Asian/Pacific Islander descent and male. Exposed children showed higher rates of autism as compared with unexposed children (3.43 vs. 1.73 per 1,000 person-years, aHR: 1.67, 95% confidence interval [CI]: 1.39-2.00). Both preterm (aHR: 1.47; 95% CI: 1.09-1.98) and term (aHR: 1.63; 95% CI: 1.29-2.06) births were associated with increased risk for autism. Although the magnitude of the HRs and incidence ratios for neonatal sepsis varied between race/ethnicities, neonatal sepsis was associated with a significantly increased likelihood of autism diagnosis for all race/ethnic groups except for Asian/Pacific Islanders. Although neonatal sepsis was associated with significantly increased autism risk for both boys and girls, incidence rates and HR point estimates suggested that the effect may be stronger in girls.
Conclusion: Neonatal sepsis is associated with increased risk of autism diagnosis in preterm- and term-born children. The association was significant for both girls and boys and all race/ethnicities except for Asian-Pacific Islanders. Key Points: · Neonatal sepsis is associated with increased risk of autism diagnosis. · The association was significant in preterm- and term-born children. · The association was significant for all race/ethnicities except for Asian-Pacific Islanders. abstract_id: PUBMED:31279535 An Exploratory Examination of Neonatal Cytokines and Chemokines as Predictors of Autism Risk: The Early Markers for Autism Study. Background: The identification of an early biomarker for autism spectrum disorder (ASD) would improve the determination of risk, leading to earlier diagnosis and, potentially, earlier intervention and improved outcomes. Methods: Data were generated from the Early Markers for Autism study, a population-based case-control study of prenatal and neonatal biomarkers of ASD. Newborn bloodspots of children with ASD (n = 370), children with developmental delay (n = 140), and general population (GP) controls (n = 378) were analyzed for 42 different immune markers using a Luminex multiplex platform. Comparisons of immune marker concentrations between groups were examined using logistic regression and partial least squares discriminant analysis. Results: Children with ASD had significantly increased neonatal levels of interleukin-6 (IL-6) and IL-8 compared with GP controls. An increase in IL-8 was especially significant in the ASD group with early onset compared with the GP group, with an adjusted odds ratio of 1.97 (95% confidence interval, 1.39-2.83; p = .00014). In addition, children with ASD had significantly elevated levels of eotaxin-1, interferon-γ, and IL-12p70 relative to children with developmental delay. We observed no significant differences in levels of immune markers between the developmental delay and GP groups. Conclusions: Elevated levels of some inflammatory markers in newborn bloodspots indicated a higher degree of immune activation at birth in children who were subsequently diagnosed with ASD. The data from this exploratory study suggest that, with further expansion, the development of neonatal bloodspot testing for cytokine/chemokine levels might lead to the identification of biomarkers that provide an accurate assessment of ASD risk at birth. Answer: The assessment of autism symptoms during the neonatal period has been a subject of research to determine if there is early evidence of autism risk. Various studies have explored different neonatal factors and their association with later autism risk. One study found that positive autism screening at age 2 was associated with the absence of gaze aversion and absence of endpoint nystagmus during the neonatal period. Infants who demonstrated gaze aversion or displayed endpoint nystagmus had better developmental outcomes in language, cognitive, and motor skills. This suggests that certain social characteristics in the neonatal period may be related to autism risk (PUBMED:26114457). A comprehensive meta-analysis reviewed over 60 perinatal and neonatal factors and found that several factors, including abnormal presentation, umbilical-cord complications, fetal distress, and birth injury or trauma, were associated with an increased risk of autism.
However, the evidence was not sufficient to implicate any single factor in autism etiology, suggesting that a broad class of conditions reflecting general compromises to perinatal and neonatal health may increase the risk (PUBMED:21746727). Another study from India reported that prenatal factors such as advanced maternal age, fetal distress, and gestational respiratory infections, as well as perinatal and neonatal factors like labor complications, pre-term birth, and neonatal jaundice, were associated with an increased risk of Autism Spectrum Disorder (ASD) (PUBMED:23816633). Research has also indicated that neonatal hyperbilirubinemia, the most common cause of neonatal hospitalization, can lead to severe complications, including autism (PUBMED:35800274). A Danish study found an almost fourfold risk for infantile autism in infants who had hyperbilirubinemia after birth, particularly in term infants (PUBMED:19000294). However, a systematic review and meta-analysis found no convincing evidence to support an association between neonatal jaundice and autism when considering low risk of bias studies, suggesting that previous findings might be due to methodological limitations (PUBMED:33526883). Additionally, lower neonatal vitamin D levels have been associated with an increased risk of ASDs and intellectual disability, with the relationship being nonlinear (PUBMED:29178513).
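Because most of the associations summarized above are reported as odds ratios with 95% confidence intervals, a minimal worked example of that calculation may be helpful. The 2x2 counts below are hypothetical, chosen only to illustrate the arithmetic, and are not taken from any cited study.

import math

# hypothetical exposure-outcome table
a, b = 12, 88    # exposed:   cases, non-cases
c, d = 20, 380   # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")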
Instruction: Skin graft for 2-stage treatment of severe hypospadias: back to the future? Abstracts: abstract_id: PUBMED:35883065 Buccal versus skin graft for two-stage repair of complex hypospadias: an Egyptian center experience. Background: Urethral reconstruction in complex hypospadias poses a significant challenge. We report our 10-year experience with the buccal mucosa graft (BMG) in the two-stage repair of complex hypospadias and compare its results to the skin graft. Methods: We retrieved the data of 15 patients with complex hypospadias who underwent two-stage repair using the BMG at our institution. The data were compared to those of 13 patients who underwent skin graft repair during the same period. Results: The median follow-up duration was 14 (12-17) months in the BMG group and 16 (13.5-22.5) months in the skin graft group. Patients in the BMG group had a numerically lower incidence of diverticulum, wound dehiscence, fistula, and infection than the skin graft group, although without statistically significant differences (p > 0.05). On the other hand, the incidence of meatal stenosis and urethral stricture was significantly lower in the BMG group (0% each) compared to the skin graft group (30.8% each; p = 0.02). At the same time, there were no reported cases of graft contracture. The frequency of donor-site morbidity was significantly higher in the skin graft group compared to the BMG group (p = 0.003). The BMG led to a lower incidence of postoperative straining than the skin graft (0% vs. 38.5%, p = 0.03). Only one patient needed revision surgery after skin graft repair, compared with none in the BMG group (p = 0.27). Conclusion: The present study demonstrates the feasibility and durable outcomes of the BMG in the setting of two-stage repair of complex hypospadias. abstract_id: PUBMED:33953506 Bracka Urethroplasty with Buccal Mucosa Graft: Ergonomic Management of Penile Skin Dartos in the First Stage to Facilitate Second-stage Neourethral Coverage. Aims: The aim of the study was to report a new technique of ergonomic penile skin-dartos management during buccal mucosa graft (BMG) placement to provide adequate penile skin-dartos for neourethral coverage at the time of second-stage tubularization. Materials And Methods: Ten patients with proximal hypospadias and severe chordee underwent first-stage surgery with the new technique. An incision along the urethral plate margin and preputial edge was used to split the inner prepuce off the preputial dartos, with penile degloving leaving the inner prepuce attached to the corona. The urethral plate was divided in the subfascial plane. The penile dartos was bisected in the dorsal midline. The distal half of the penile skin-dartos was bifurcated and joined to the inner preputial edges. The mobilized and lateralized penile skin-dartos was sutured flanking the edges of the BMG. The second-stage tubularization after 6 months provided neourethral double-dartos coverage with eccentric suture lines. Results: Adequate dartos for neourethral coverage during second-stage tubularization was available in all patients. A subcoronal urethrocutaneous fistula occurred in one patient and was repaired. Conclusions: Ergonomic management of the inner-preputial skin and ventral transfer of the penile skin-dartos help provide neourethral coverage during subsequent second-stage tubularization and minimize the occurrence of complications. abstract_id: PUBMED:16075751 Free graft two-stage urethroplasty for hypospadias repair. Objective: To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair.
Methods: Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft and/or buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgery. The operative procedure included two stages: in the first stage, penile curvature (chordee) was corrected, the transplant bed prepared, the full-thickness skin graft and/or buccal mucosal graft harvested and prepared, and the graft transplanted; in the second stage, urethroplasty and glanuloplasty were completed. Results: After the first-stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on 56 cases, 5 failed with dehiscence of the newly formed urethra due to infection, 8 had fistulas, and 43 (76.8%) healed well. Conclusions: Free graft transplantation two-stage urethroplasty is an effective treatment for hypospadias repair, with broad indications, a comparatively high success rate, few complications and good cosmetic results, and it is suitable for the repair of various types of hypospadias. abstract_id: PUBMED:12971872 Comparison of preputial skin, postauricular skin and buccal mucosal graft results in hypospadias repair. Objective: To compare the results of preputial skin grafts, postauricular skin grafts and buccal mucosal grafts in two-stage hypospadias repair. Design: Comparative study. Place And Duration Of Study: The Department of Plastic Surgery, Hayatabad Medical Complex, Peshawar. The duration of the study was three years (from January 1999 to December 2001). Patients And Methods: The study subjects included 242 patients with hypospadias who underwent two-stage Aivar Bracka repair. Results: The best results were obtained with the preputial skin graft, in which graft take was 95.3% and the fistula rate was 3.2%. The incidence of graft contracture and graft loss was 20.5% and 11.7%, respectively, with the buccal mucosal graft. Conclusion: Preputial skin, postauricular skin and buccal mucosal grafts give excellent results in terms of graft acceptance in stage I and graft contracture and fistula rate in stage II. abstract_id: PUBMED:16600785 Interim outcome of the single stage dorsal inlay skin graft for complex hypospadias reoperations. Purpose: Despite high success rates for primary hypospadias repair, some cases require multiple procedures for ultimate reconstruction. We report our experience with single stage dorsal inlay urethroplasty using skin grafts for complex reoperations. Materials And Methods: A total of 31 patients (mean age 13.8 years) with failed previous hypospadias surgery were included in the study. Indications included fistulas, strictures, diverticula and repair breakdown. The urethral plate had been removed or was severely scarred in all patients. A free penile or groin skin graft was sutured and quilted to the corpora cavernosa, ensuring sufficient blood supply. The neourethra was tubularized and covered with a tunica vaginalis or dartos flap, followed by glanuloplasty. Outcome analysis included urethrograms, urethral ultrasound and flow measurements. Results: Foreskin was used in 15 cases, penile skin in 12 and inguinal skin in 4. Average graft length was 3.92 cm. A total of 20 patients required glanuloplasty with a skin graft extended to the tip of the glans. After a mean follow-up of 30.71 months, 5 patients underwent redo surgery, for a complication rate of 16.1%.
Urethral stricture of the proximal anastomosis was the most frequent finding. Conclusions: This single stage approach using dorsal skin grafts is a reliable method to create a substitute urethral plate for tubularization. Complication rates are equivalent to those of staged procedures. Foreskin should be used as the graft donor site, if available, to optimize the outcome. This approach represents a safe option for reoperations even if the urethral plate or midline penile skin is grossly scarred. abstract_id: PUBMED:30293245 Single-stage repair of obliterated anterior urethral strictures using buccal mucosa graft and dorsal penile skin flap. Objective: To present a single-stage repair of obliterative urethral strictures by simultaneous use of a buccal mucosa graft and a longitudinal dorsal penile skin flap. Methods: Between February 2007 and October 2016, 51 patients with obliterative anterior urethral stricture underwent single-stage substitution urethroplasty. A buccal mucosa graft was harvested and fixed to the corpora cavernosa as the dorsal part of the neourethra, and a vascularized dorsal penile skin flap was created, transposed ventrally and sutured to the buccal mucosa graft to form the ventral part of the neourethra. Results: The follow-up period was 12-129 months (mean 49 months). The mean age of the patients was 48 years (range 15-71 years). The mean length of the obliterated urethral segment, measured during the operative procedure, was 5.2 cm. The etiology of strictures was unknown, hypospadias and trauma in 19, 27 and 5 patients, respectively. Five patients were lost to follow-up, and 46 patients were analyzed for the outcome. At the end of the follow-up period, recurrence of the stricture occurred in 7 (15.2%) patients, whereas 39 (84.8%) patients did not develop stricture. An additional 3 (6.5%) patients developed fistula, resulting in overall successful voiding in 36 (78.3%) patients. Conclusions: A combined buccal mucosa graft and longitudinal dorsal penile skin flap could be a good choice for one-stage substitution urethroplasty in complex obliterative urethral strictures, with an acceptable complication rate. abstract_id: PUBMED:30931285 Midline Incision of a Graft in Staged Hypospadias Repair-Feasible and Durable? Purpose: In severe hypospadias, staged repair is commonly used and is regarded as feasible, safe, and durable. In this article we describe the results of a modification of the staged repair: a midline incision of the graft during the second stage. Materials and Methods: This is a consecutive, single-team (2 surgeons) retrospective series. Between 2014 and 2017, 250 patients underwent hypospadias repair, among them 35 patients who had primary staged hypospadias surgery with completed first- and second-stage repair. 24 (68.6%) cases received a preputial skin graft and 11 (31.4%) a buccal mucosa graft. Median age at the first stage was 1.5 (0.5-22.1) years; mean time between the first- and second-stage operations was 0.72 (0.4-1.76) years. The follow-up rate was 100%, and the mean follow-up period was 1.50 (0.4-3.8) years. Results: The total complication rate was 22.9%. In buccal mucosa repair the complication rate was 36.4%, and in preputial graft repair it was 16.7%. In 23 patients (65.7%), a midline incision was performed during second-stage urethroplasty (8 at the glandular graft, 15 at the penile graft, 6 at the level of the urethral opening).
The complication rate was 8.3% for non-incised urethroplasty, 37.5% for incision at the glandular level, 13.3% for incision at the penile level, and 16.7% for incision at the urethral opening. Conclusions: Two-stage repair is the method of choice in the correction of severe hypospadias. In selected cases a midline incision of the graft is feasible and can be applied if needed. Randomized studies will be needed to evaluate the true benefit of incising the graft. abstract_id: PUBMED:31303449 Outcomes of staged lingual mucosal graft urethroplasty for redo hypospadias repair. Background: The objective of this study was to present the outcomes for redo hypospadias repair using a lingual mucosal graft (LMG). Patients And Methods: Between June 2012 and February 2017, 47 patients underwent staged LMG urethroplasty for redo hypospadias repair. The inclusion criteria were previous failed hypospadias repair with a paucity of local skin that precluded correction using skin flaps and demanded graft urethroplasty. During the first stage, a well-vascularized bed on the tunica albuginea was created. Then, the harvested LMG was secured to the prepared bed. The second-stage urethroplasty was carried out after six months. In this stage, tubularization of the previously implanted LMG was performed. In four cases, tubularization was difficult owing to graft contracture. This difficulty was managed by using the dorsally degloved penile skin as an onlay island flap in three cases and a buccal mucosa onlay graft in the fourth case. In all cases, a second protective layer from the dartos or tunica vaginalis was developed to cover the neourethra. Results: The median (interquartile range [IQR]) age of patients at the first stage was 5 (4-6) years, and the median (IQR) duration between both stages was 7 (6-8) months. The median (IQR) follow-up after the second stage was 15 (13-16) months. The median (IQR) number of previous operations was 2 (2-3). The median (IQR) length of the LMG was 3 (2.5-4) cm, and the median (IQR) width was 1 (1-2) cm. No major donor-site complications were reported, although mild oral discomfort in the first week after graft harvesting occurred in 39 (83%) patients. After the second stage, complications were reported in nine (19.2%) patients: meatal stenosis in five and fistula in four. The reported success rate was 80.9%. Discussion: Reconstruction of previously failed hypospadias is a challenge owing to local tissue scarring and a paucity of adjacent healthy tissue. In this study, the LMG was used in two-stage redo hypospadias repair after previous repair failure. In the present study, a success rate of 80.9% was reported after the second stage. According to this study and the published series, harvesting the LMG is associated with minimal immediate donor-site complications and no long-term morbidity. Another advantage of the LMG is easy harvesting from the readily accessible tongue, in comparison with the buccal mucosa, which lies deeper and requires application of a mouth retractor. Conclusions: Two-stage LMG urethroplasty is a reliable procedure for salvage urethroplasty. Lingual mucosal graft harvesting is easy, with minor oral complications.
Single stage procedures, whether with the use of flaps or grafts, have long been regarded as the best approach, although the complication rate is nonnegligible with all procedures. Materials And Methods: We report the use of a 2-stage repair with preputial graft interposition and subsequent tubularization of the urethral plate, applied to all severe cases of hypospadias with significant chordee or a small glans. Results: Both stages of the procedure were completed in 34 patients. Complications in 8 cases (23.5%) included glans disruption in 4, coronal groove fistula in 2, urethral diverticulum in 1 and urethral stenosis due to balanitis xerotica obliterans in 1. Two pinhole fistulas also occurred, which closed spontaneously. No complete disruptions or postoperative hematomas/bleeding were noted. Cosmetic and functional outcome after a minimum follow-up of 1 month was optimal in all cases, with a normally located "slit" meatus and a straight penile shaft. Conclusions: Although the controversy between the use of grafts and flaps will probably continue forever, we believe that our 2-stage approach should be considered a valid alternative for the most severe forms of hypospadias. Long-term results appear to outweigh the necessity of a learning curve for appropriate graft manipulation. abstract_id: PUBMED:26228042 One-stage dorsal lingual mucosal graft urethroplasty for the treatment of failed hypospadias repair. The aim of this study was to retrospectively investigate the outcomes of patients who underwent one-stage onlay or inlay urethroplasty using a lingual mucosal graft (LMG) after failed hypospadias repairs. Inclusion criteria included a history of failed hypospadias repair, insufficiency of the local skin that made a reoperation with skin flaps difficult, and the necessity of an oral mucosal graft urethroplasty. Patients were excluded if they had undergone a failed hypospadias repair using the foreskin or a multistage repair urethroplasty. Between January 2008 and December 2012, 110 patients with failed hypospadias repairs were treated in our center. Of these patients, 56 underwent a one-stage onlay or inlay urethroplasty using LMG. The median age was 21.8 years (range: 4-45 years). Of the 56 patients, one-stage onlay LMG urethroplasty was performed in 42 patients (group 1), and a modified Snodgrass technique using one-stage inlay LMG urethroplasty was performed in 14 (group 2). The median LMG urethroplasty length was 5.6 ± 1.6 cm (range: 4-13 cm). The mean follow-up was 34.7 months (range: 10-58 months), and complications developed in 12 of 56 patients (21.4%), including urethrocutaneous fistulas in 7 (6 in group 1, 1 in group 2) and neourethral strictures in 5 (4 in group 1, 1 in group 2). The total success rate was 78.6%. Our survey suggests that one-stage onlay or inlay urethroplasty with LMG may be an effective option to treat patients with little available skin after failed hypospadias repairs; LMG harvesting is easy and safe, irrespective of the patient's age. Answer: The use of skin grafts in the two-stage treatment of severe hypospadias has been revisited as a potential approach, as indicated by the title "Skin graft for 2-stage treatment of severe hypospadias: back to the future?" (PUBMED:12352346). This approach involves a first stage where a preputial graft is interposed, followed by a second stage where the urethral plate is tubularized.
The study reported that both stages of the procedure were completed in 34 patients, with complications occurring in 8 cases (23.5%), which included glans disruption, coronal groove fistula, urethral diverticulum, and urethral stenosis due to balanitis xerotica obliterans. Despite these complications, the cosmetic and functional outcomes were deemed optimal, with a normally located "slit" meatus and straight penile shaft after a minimum follow-up of 1 month. The study suggests that this two-stage approach with skin grafts should be considered a valid alternative for the most severe forms of hypospadias, although there is a learning curve for appropriate graft manipulation. The discussion around the use of grafts versus flaps in hypospadias repair is ongoing, and while some surgeons may prefer single-stage procedures, the two-stage approach with skin grafts presents a viable option, especially in severe cases with significant chordee or a small glans. The long-term results of this method may outweigh the initial learning curve required for handling the grafts effectively.
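As a side note on the arithmetic behind the 23.5% complication rate quoted above (8 events in 34 patients), a short sketch using scipy's beta quantiles gives the exact Clopper-Pearson 95% interval, which is wide at this sample size:

from scipy.stats import beta

k, n = 8, 34                          # complications, patients
lo = beta.ppf(0.025, k, n - k + 1)    # exact lower bound
hi = beta.ppf(0.975, k + 1, n - k)    # exact upper bound
print(f"rate {k/n:.1%}, 95% CI {lo:.1%}-{hi:.1%}")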
Instruction: Aortic valve sclerosis: a marker of significant obstructive coronary artery disease in patients with chest pain? Abstracts: abstract_id: PUBMED:17543740 Aortic valve sclerosis: a marker of significant obstructive coronary artery disease in patients with chest pain? Background: Previous reports suggested a relationship between coronary artery disease (CAD) and aortic valve sclerosis (AVS). However, whether AVS can be used as a marker of obstructive CAD (obCAD) in patients with chest pain is unknown. We hypothesized that AVS is a predictive marker for obCAD in patients hospitalized for chest pain. Methods: We studied 93 consecutive patients with chest pain undergoing coronary angiography. All had negative cardiac enzymes and no previous diagnosis of cardiac ischemic disease. AVS was detected by transthoracic echocardiography. Resting electrocardiography, left ventricular systolic function, wall-motion abnormalities, and stress test results were considered. We calculated the diagnostic value for obCAD of AVS, the stress test, and the combination of the two methods. Results: ObCAD was present in 29 patients (31%). Patients with obCAD had a higher prevalence of AVS (38% vs 14%, P = .02) and of a positive stress test (67% vs 28%, P = .02). The odds ratio for obCAD in the presence of AVS was 3.7 (95% confidence interval 1.3-10.4, P = .01). AVS (P = .01) and a positive stress test (P = .002) were independent predictors for obCAD on multivariate analysis. AVS had a sensitivity of 38% and a specificity of 86%. The stress test had a sensitivity of 67% and a specificity of 72%. When echocardiographic detection of AVS was combined with the stress test, the sensitivity and negative predictive value improved to 93% and 96%, respectively. Conclusions: AVS is an independent predictor for obCAD in patients with chest pain; thus, it should be considered in the risk stratification of these patients. abstract_id: PUBMED:1894991 A case of quadricuspid aortic valve associated with coronary arterial lesion. A case of quadricuspid aortic valve is reported in a patient with coronary artery disease and abdominal aortic aneurysms. A 54-year-old male who had undergone aortic replacement because of abdominal aortic aneurysms three years before presentation was readmitted due to complaints of angina pectoris and palpitations. Aortography and coronary arteriography revealed severe aortic regurgitation and proximal occlusion of the LAD and RCA. Surgical correction consisted of aortic valve replacement with a Björk-Shiley valve and coronary revascularization of the LAD. During the operation, a quadricuspid aortic valve with one smaller and three larger cusps, showing mild myxomatous degeneration without dystrophic calcification, and normal coronary arterial orifices were noted. Accordingly, severe aortic regurgitation may have resulted from the dysfunction of the congenitally malformed cusps, and acquired sclerotic coronary disease was the main cause of the chest pain. abstract_id: PUBMED:14736432 Adverse outcome in aortic sclerosis is associated with coronary artery disease and inflammation. Objectives: The present study was designed to evaluate the relationship between the presence of aortic sclerosis, serologic markers of inflammation, and adverse cardiovascular outcomes. Background: Aortic sclerosis is associated with adverse cardiovascular outcomes. However, the mechanism by which such nonobstructive valve lesions impart excess cardiovascular risk has not been delineated.
Method: In 425 patients (mean age 68 ± 15 years, 54% men) presenting to the emergency room with chest pain, we studied the relationship among aortic sclerosis, the presence and acuity of coronary artery disease, serologic markers of inflammation, and cardiovascular outcomes. Patients underwent echocardiography and serologic testing including C-reactive protein (CRP). Aortic valves were graded for the degree of sclerosis, and cardiovascular outcomes including cardiac death and nonfatal myocardial infarction (MI) were analyzed over one year. Results: Aortic sclerosis was identified in 203 patients (49%), whereas 212 (51%) had normal aortic valves. On univariate analysis at one year, patients with aortic sclerosis had a higher incidence of cardiovascular events (16.8% vs. 7.1%, p = 0.002) and worse event-free survival (normal valves = 93%, mild aortic sclerosis = 85%, and moderate to severe aortic sclerosis = 77%, p = 0.002). However, by multivariable analysis aortic sclerosis was not independently associated with adverse cardiovascular outcomes; the only independent predictors of cardiac death or MI at one year were coronary artery disease (hazard ratio [HR] 3.23, p = 0.003), MI at index admission (HR 2.77, p = 0.008), ascending tertiles of CRP (HR 2.2, p = 0.001), congestive heart failure (HR 2.15, p = 0.02) and age (HR 1.03, p = 0.04). Conclusions: The increased incidence of adverse cardiovascular events in patients with aortic sclerosis is associated with coronary artery disease and inflammation, and is not a result of the effects of valvular heart disease per se. abstract_id: PUBMED:11738305 Association of mitral annulus calcification, aortic valve sclerosis and aortic root calcification with abnormal myocardial perfusion single photon emission tomography in subjects age ≤65 years old. Objectives: We examined the hypothesis that mitral annulus calcification (MAC), aortic valve sclerosis (AVS) and aortic root calcification (ARC) are associated with coronary artery disease (CAD) in subjects age ≤65 years. Background: Mitral annulus calcification, AVS and ARC frequently coexist and are associated with coronary risk factors and CAD in the elderly. Methods: We studied 338 subjects age ≤65 years who underwent evaluation of chest pain with myocardial perfusion single photon emission computed tomography (SPECT) and a two-dimensional transthoracic echocardiogram for other indications. The association of MAC, AVS and ARC with abnormal SPECT was evaluated by using chi-square analyses and logistic regression analyses. Results: Compared with no or one calcium deposit and no or one coronary risk factor other than diabetes, multiple (≥2) calcium (or sclerosis) deposits with diabetes or multiple (≥2) coronary risk factors were significantly associated with abnormal SPECT in women age ≤55 years (odds ratio [OR], 20.00), in women age >55 years (OR, 10.00) and in men age ≤55 years (OR, 5.55). Multivariate analyses identified multiple calcium deposits as a significant predictor of an abnormal SPECT in women (p < 0.001), younger subjects age ≤55 years (p < 0.05) and the total group of subjects (p < 0.01). Conclusions: When coronary risk factors are also taken into consideration, the presence of multiple calcium deposits in the mitral annulus, aortic valve or aortic root appears to be a marker of CAD in men ≤55 years old and in women.
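The chi-square analyses mentioned in the abstract above test whether calcium deposits and an abnormal SPECT result are associated; a minimal sketch of such a test follows, with invented counts, since the study's raw cell counts are not given in the abstract:

from scipy.stats import chi2_contingency

#        abnormal SPECT, normal SPECT
table = [[30, 20],   # >=2 calcium deposits (hypothetical counts)
         [25, 95]]   # 0-1 calcium deposits (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square {chi2:.2f}, dof {dof}, p {p:.4f}")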
abstract_id: PUBMED:19949612 Aortic Valve Sclerosis on Echocardiography is a Good Predictor of Coronary Artery Disease in Patients With an Inconclusive Treadmill Exercise Test. Background And Objectives: The treadmill exercise test (TMT) is used as a first-line test for diagnosing coronary artery disease (CAD). However, the findings of a TMT can be inconclusive, such as incomplete or equivocal results. Aortic valve sclerosis (AVS) is known to be a good predictor of CAD. We determined the usefulness of assessing AVS on 2-dimensional (2D) echocardiography for making the diagnosis of CAD in patients with inconclusive results on a TMT. Subjects And Methods: This prospective study involved 165 consecutive patients who underwent a TMT that resulted in inconclusive findings, 2D echocardiography to detect AVS, and coronary angiography to detect CAD. Following echocardiography, AVS was classified as none, mild, or severe. CAD was defined as ≥70% narrowing of the luminal diameter on coronary angiography. Results: CAD was more common in patients with AVS than in patients without AVS (75% vs. 47%, respectively, p<0.01). Multiple logistic regression analysis showed that AVS was the only independent predictor of CAD (odds ratio = 8.576; 95% confidence interval [CI], 3.739-19.672). The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the presence of AVS for predicting CAD in a patient with an inconclusive TMT were 62%, 67%, 64%, 75%, and 53%, respectively. During a 1-year clinical follow-up, patients with and without AVS were similar in terms of event-free survival rates. Conclusion: If the results of a TMT for a patient with chest pain on exertion are inconclusive, the presence of AVS on echocardiography is a good predictor of CAD. abstract_id: PUBMED:19088396 Marked aortic valve stenosis progression after receiving long-term aggressive cholesterol-lowering therapy using low-density lipoprotein apheresis in a patient with familial hypercholesterolemia. In 1982, a 49-year-old Japanese woman had been referred to our hospital for further investigation of her hypercholesterolemia. She was diagnosed with heterozygous familial hypercholesterolemia because of Achilles tendon xanthoma and a family history of primary hypercholesterolemia. Three years later, she had chest pain on effort, and angina pectoris was diagnosed by coronary angiography. At that time, she underwent coronary artery bypass grafting surgery with 2 saphenous vein grafts (SVG). Because more aggressive cholesterol-lowering therapy was needed for secondary prevention of coronary artery disease (CAD), weekly low-density lipoprotein (LDL) apheresis was started postoperatively, combined with drug therapy. Since 1986, her serum total cholesterol levels before and after LDL apheresis remained approximately 200 mg/dl and 90 mg/dl, respectively. Although her coronary sclerosis, including the SVG, did not progress appreciably for a period of 20 years, stenotic changes of the aortic valve developed rapidly at age 70, leading to aortic valve replacement surgery in 2005 at age 72. These findings suggest that careful attention to the progression of aortic valve stenosis is needed in patients with extreme hypercholesterolemia, even under optimal cholesterol-lowering therapy for the secondary prevention of CAD. abstract_id: PUBMED:15799912 Congenital fistula between the left internal mammary artery (LIMA) and the pulmonary artery: cause of LIMA bypass occlusion?
Congenital fistulas from the left internal mammary artery to the pulmonary artery are rare. We describe a 49-year-old patient with severe aortic valve regurgitation and coronary artery disease. Percutaneous transluminal coronary angioplasty and left anterior descending artery (LAD) stenting had been performed because of a significant proximal LAD lesion. A repeat coronary angiogram 3 months later revealed a patent stent but severe sclerosis, with up to a 40% stenosis, of the LAD beyond the stented area. An aortic valve replacement and a left internal mammary artery (LIMA) bypass to the LAD were performed during standard cardiopulmonary bypass (CPB). Because the patient had chest pain, a control angiogram was carried out 2 years after surgery and revealed a LIMA-bypass occlusion and a large fistula from the proximal part of the LIMA to the pulmonary artery. The fistula was occluded by coils during an interventional cardiological procedure. Diminished flow in the LIMA bypass due to the fistula, in combination with a nonsignificant proximal LAD stenosis, is a possible reason for the IMA-bypass occlusion. From this case we conclude that angiography of the IMA to detect malformations preoperatively should be mandatory in all cases of arterial coronary revascularization using IMA bypasses. abstract_id: PUBMED:2089171 Coronary risk factors in angiographically defined patients with chest pain. Coronary risk factors were assessed in 186 consecutive patients who underwent coronary angiography. The severity of coronary luminal narrowing was scored as the coronary sclerosis index (CSI). Patients were divided into groups with normal coronary arteries (N, n = 72), coronary sclerosis without infarction (C, n = 73) and previous myocardial infarction (MI, n = 41). The CSI increased with age. A significant difference in serum triglycerides, HDL cholesterol and atherogenic index was observed between Groups C or MI and Group N. Multivariate analysis revealed that CSI correlated with total and HDL cholesterol, uric acid and age in subjects under 55 years, and with age, blood sugar, factor H and HDL cholesterol in those 55 years or over. When patients were classified by their total and LDL cholesterol levels, a significantly different CSI was found between the desirable and high cholesterol levels in subjects under the age of 55, but the difference was not significant in those over 55. Therefore, disorders in lipid metabolism should be corrected in early middle age. abstract_id: PUBMED:2119522 Plasminogen activator inhibitor-1 levels in patients with chronic angina pectoris with or without angiographic evidence of coronary sclerosis. Increased plasma levels of plasminogen activator inhibitor-1 (PAI-1) have been shown to exist in 40 to 60% of patients with stable coronary artery disease and have been suggested to be responsible for the development of coronary thrombotic complications. However, it has also been debated whether PAI-1 elevation might mainly be due to variables such as increased age or to reactive mechanisms caused, for example, by the chest pain itself. To exclude age-dependent or pain-related influences, age-matched patients with stable angina pectoris (NYHA II) and angiographically proven coronary artery disease (CAD, n = 16) or without evidence of coronary sclerosis (variant angina, n = 10; angina-like syndrome with normal coronary angiogram, n = 5; non-CAD, n = 15) were investigated for their plasma PAI-1 activity and t-PA antigen levels.
The mean PAI activity in CAD patients (17.5 U/ml) was significantly higher than in non-CAD patients (9.6 U/ml) (p < 0.0001). In the CAD patients, no significant variation in plasma PAI-1 values could be demonstrated in relation to the extent of the disease or to a history of previous myocardial infarction. t-PA antigen was also elevated in CAD patients as compared to the non-CAD group (p < 0.02). The results therefore suggest a strong correlation between coronary artery disease itself and elevated levels of components of the plasma fibrinolytic system. abstract_id: PUBMED:28597214 Diagnosis of ischemia and revascularization in patients with ventricular tachyarrhythmia. Sustained ventricular tachyarrhythmia usually occurs on the basis of structural heart disease, particularly coronary heart disease (CAD). Although monomorphic ventricular tachycardia (VT) appears mainly in patients with CAD, it is typically not triggered by acute ischemia, in contrast to polymorphic VT or ventricular fibrillation (VF). Judging whether VT is caused by acute ischemia is even more difficult in the context of an elevated high-sensitivity troponin T, which is generally elevated in sustained VT because tachycardia in chronic stable coronary artery sclerosis causes a mismatch between increased oxygen demand and limited oxygen supply. Therefore, acute coronary angiography and revascularization may frequently not be necessary in monomorphic VT, will usually not improve rhythm stabilization, and may lead to misinterpretation of monomorphic VT as being caused by a coronary stenosis. This can lead to withholding antiarrhythmic therapy after revascularization, since it is assumed that the cause of the VT has been treated. On the other hand, acute coronary angiography and revascularization are useful in polymorphic VT/VF, in the presence of ECG signs of ischemia, or with typical chest pain before the occurrence of VT/VF. Coronary angiography should also be performed in patients with VT and newly diagnosed reduced left ventricular function, and before catheter ablation. Answer: Yes, aortic valve sclerosis (AVS) has been identified as a marker of significant obstructive coronary artery disease (obCAD) in patients with chest pain. A study involving 93 consecutive patients with chest pain undergoing coronary angiography found that AVS was an independent predictor for obCAD. Patients with obCAD had a higher prevalence of AVS compared to those without, and AVS combined with a stress test improved the sensitivity and negative predictive value for diagnosing obCAD (PUBMED:17543740). Another study with 425 patients presenting with chest pain showed that the increased incidence of adverse cardiovascular events in patients with AVS was associated with coronary artery disease and inflammation, rather than the effects of valvular heart disease per se (PUBMED:14736432). Additionally, a study of 338 subjects under 65 years old found that the presence of multiple calcium deposits in the mitral annulus, aortic valve, or aortic root, especially when combined with coronary risk factors, was a significant marker of CAD (PUBMED:11738305). Furthermore, in patients with inconclusive treadmill exercise test results, AVS on echocardiography was a good predictor of CAD (PUBMED:19949612). These findings suggest that AVS should be considered in the risk stratification of patients with chest pain, as it may indicate the presence of significant obCAD.
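One way to see why pairing AVS with a stress test raises sensitivity and negative predictive value, as reported in PUBMED:17543740, is the parallel-combination rule (the combined result is positive if either test is positive). The sketch below assumes conditional independence of the two tests, which is a simplification; the study's observed combined figures (93% and 96%) exceed what independence alone predicts, so this should be read as an illustration of the formula, not a model of those data.

def parallel(sens1, spec1, sens2, spec2):
    # combined test is negative only when both component tests are negative
    sens = 1 - (1 - sens1) * (1 - sens2)
    spec = spec1 * spec2
    return sens, spec

def npv(sens, spec, prev):
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

s, sp = parallel(0.38, 0.86, 0.67, 0.72)  # AVS alone, stress test alone
print(f"combined sensitivity {s:.2f}, specificity {sp:.2f}, "
      f"NPV at 31% prevalence {npv(s, sp, 0.31):.2f}")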
Instruction: The rational clinical examination. Does this patient have strep throat? Abstracts: abstract_id: PUBMED:11147989 The rational clinical examination. Does this patient have strep throat? Context: Sore throat is a common complaint, and identifying patients with group A beta-hemolytic streptococcal pharyngitis (strep throat) is an important task for clinicians. Previous reviews have not systematically reviewed and synthesized the evidence. Objective: To review the precision and accuracy of the clinical examination in diagnosing strep throat. Data Source: MEDLINE search for articles about diagnosis of strep throat using history-taking and physical examination. Study Selection: Large blinded, prospective studies (having ≥300 patients with sore throat) reporting history and physical examination data and using throat culture as the reference standard were included. Of 917 articles identified by the search, 9 met all inclusion criteria. Data Extraction: Pairs of authors independently reviewed each article and used consensus to resolve discrepancies. Data Synthesis: The most useful findings for evaluating the likelihood of strep throat are the presence of tonsillar exudate, pharyngeal exudate, or exposure to strep throat infection in the previous 2 weeks (positive likelihood ratios, 3.4, 2.1, and 1.9, respectively) and the absence of tender anterior cervical nodes, tonsillar enlargement, or exudate (negative likelihood ratios, 0.60, 0.63, and 0.74, respectively). No individual element of history-taking or physical examination is accurate enough by itself to rule in or rule out strep throat. Three validated clinical prediction rules are described for adult and pediatric populations. Conclusions: While no single element of history-taking or physical examination is sufficiently accurate to exclude or diagnose strep throat, a well-validated clinical prediction rule can be useful and can help physicians make more informed use of rapid antigen tests and throat cultures. abstract_id: PUBMED:31357633 Novel Image Processing Method for Detecting Strep Throat (Streptococcal Pharyngitis) Using Smartphone. In this paper, we propose a novel strep throat detection method using a smartphone with an add-on gadget. Our smartphone-based strep throat detection method is based on the use of the camera and flashlight embedded in a smartphone. The proposed algorithm acquires throat images using a smartphone with the gadget, processes the acquired images using color transformation and color correction algorithms, and finally distinguishes streptococcal pharyngitis (strep throat) from healthy throat using machine learning techniques. Our gadget was designed to minimize the reflection of light entering the camera sensor. The scope of this paper is confined to binary classification between strep and healthy throats. Specifically, we adopted a k-fold cross-validation technique for classification, which finds the best decision boundary from the training and validation sets and applies that decision boundary to the test sets. Experimental results show that our proposed detection method detects strep throat with 93.75% accuracy, 88% specificity, and 87.5% sensitivity on average. abstract_id: PUBMED:33178623 Diagnostic Methods, Clinical Guidelines, and Antibiotic Treatment for Group A Streptococcal Pharyngitis: A Narrative Review. The most common bacterial cause of pharyngitis is infection by Group A β-hemolytic streptococcus (GABHS), commonly known as strep throat.
5-15% of adults and 15-35% of children in the United States with pharyngitis have a GABHS infection. The symptoms of GABHS overlap with non-GABHS and viral causes of acute pharyngitis, complicating the problem of diagnosis. A careful physical examination and patient history are the starting point for diagnosing GABHS. After a physical examination and patient history are completed, five types of diagnostic methods can be used to ascertain the presence of a GABHS infection: clinical scoring systems, rapid antigen detection tests, throat culture, nucleic acid amplification tests, and machine learning and artificial intelligence. Clinical guidelines developed by professional associations can help medical professionals choose among available techniques to diagnose strep throat. However, guidelines for diagnosing GABHS created by the American and European professional associations vary significantly, and there is substantial evidence that most physicians do not follow any published guidelines. Treatment for GABHS using analgesics, antipyretics, and antibiotics seeks to provide symptom relief, shorten the duration of illness, prevent nonsuppurative and suppurative complications, and decrease the risk of contagion, while minimizing the unnecessary use of antibiotics. There is broad agreement that antibiotics with narrow spectrums of activity are appropriate for treating strep throat. But whether and when patients should be treated with antibiotics for GABHS remains a controversial question. There is no clearly superior management strategy for strep throat, as significant controversy exists regarding the best methods to diagnose GABHS and under what conditions antibiotics should be prescribed. abstract_id: PUBMED:2204901 Throat culture or rapid strep test? Clearly, rapid tests for streptococci identification are here to stay, and development of the technology is likely to continue. The most rational use of these tests is to identify streptococcal pharyngitis when patients have severe symptoms or when special situations warrant early detection. Throat culture alone is sufficient for most other patients, and all negative rapid tests should be confirmed by throat culture. Specific antistreptococcal therapy should be initiated if either the rapid test or culture is positive. If the physician decides on the basis of clinical criteria to treat pharyngitis with an antibiotic that covers group A beta-hemolytic streptococci, a rapid test is not necessary. If confirmation of the infection is warranted in these cases, throat culture alone should suffice. No rapid strep test kit clearly outperforms others. With any test, good results depend on the quality of the specimen. abstract_id: PUBMED:36941540 Influence of a guideline or an additional rapid strep test on antibiotic prescriptions for sore throat: the cluster randomized controlled trial of HALS (Hals und Antibiotika Leitlinien Strategien). Background: Pharyngitis due to Group A beta-hemolytic streptococci (GAS) is seen as the main indication for antibiotics for sore throat. In primary care settings prescription rates are much higher than the prevalence of GAS. Recommendations in international guidelines differ considerably. A German guideline suggested considering antibiotics for patients with Centor or McIsaac scores ≥ 3, first choice being penicillin V for 7 days, and recommended analgesics for all.
We investigated whether the implementation of this guideline lowers the antibiotic prescription rate, and whether a rapid antigen detection strep-test (RADT) in patients with scores ≥ 3 lowers the rate further. Methods: HALS was an open pragmatic parallel group three-arm cluster-randomized controlled trial. Primary care practices in Northern Germany were randomized into three groups: Guideline (GL-group), modified guideline with a RADT for scores ≥ 3 (GL-RADT-group) or usual care (UC-group). All practices were visited and instructed by the study team (outreach visits) and supplied with material according to their group. The practices were asked to recruit 11 consecutive patients ≥ 2 years with an acute sore throat and being at least moderately impaired. A study throat swab for GAS was taken in every patient. The antibiotic prescription rate at the first consultation was the primary outcome. Results: From October 2010 to March 2012, 68 general practitioners in 61 practices recruited 520 patients, of whom 516 could be analyzed for the primary endpoint. Antibiotic prescription rates did not differ between groups (p = 0.162) and were about three times higher than the GAS rate: GL-group 97/187 patients (52%; GAS = 16%), GL-RADT-group 74/172 (43%; GAS = 16%) and UC-group 68/157 (43%; GAS = 14%). In the GL-RADT-group 55% of patients had scores ≥ 3 compared to 35% in GL-group (p < 0.001). After adjustment, in the GL-RADT-group the OR was 0.23 for getting an antibiotic compared to the GL-group (p = 0.010), even though 35 of 90 patients with a negative Strep-test got an antibiotic in the GL-RADT-group. The prescription rates per practice covered the full range from 0 to 100% in all groups. Conclusion: The scores proposed in the implemented guideline seem inappropriate to lower antibiotic prescriptions for sore throat, but better adherence of practitioners to negative RADTs should lead to fewer prescriptions. Trial Registration: DRKS00013018, retrospectively registered 28.11.2017. abstract_id: PUBMED:38027289 Are clinicians overdiagnosing strep throat and overprescribing antibiotics? N/A abstract_id: PUBMED:33494718 High diagnostic accuracy of automated rapid Strep A test reduces antibiotic prescriptions for children in the United Arab Emirates. Background: Diagnosis of Group A Streptococcus (GAS) pharyngitis in children is hindered by variable sensitivity of clinical criteria and rapid Strep A tests (SAT), resulting in reliance on throat cultures as the gold standard for diagnosis. Delays while awaiting culture reports result in unnecessary antibiotic prescriptions among children, contributing to the spread of antimicrobial resistance (AMR). Methods: Diagnostic accuracy study of an automated SAT (A-SAT) in children up to 16 years of age presenting to an emergency room with signs and symptoms of pharyngitis between March and June 2019. Paired throat swabs for A-SAT and culture were collected. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for A-SAT were calculated. Results: Two hundred and ninety-one children were included in this study. 168 (57.7%) were boys and the mean age was 4.2 years. A-SAT was positive in 94 (32.3%) and throat culture was positive in 90 (30.9%) children. A-SAT and throat culture results showed a high level of consistency in our cohort. Only 6 (2%) children had inconsistent results, demonstrating that the A-SAT has a high sensitivity (98.9%), specificity (97.5%), PPV (94.7%) and NPV (99.5%) for the diagnosis of GAS pharyngitis in children.
Only 92 (32%) children were prescribed antibiotics while the vast majority (68%) were not. Conclusions: A-SAT is a quick and reliable test with diagnostic accuracy comparable to throat culture. Its widespread clinical use can help limit antibiotic prescriptions to children presenting with pharyngitis, thus limiting the spread of AMR. abstract_id: PUBMED:25296661 Randomised, double-blind, placebo-controlled studies on flurbiprofen 8.75 mg lozenges in patients with/without group A or C streptococcal throat infection, with an assessment of clinicians' prediction of 'strep throat'. Background: Diagnosing group A streptococcus (Strep A) throat infection by clinical examination is difficult, and misdiagnosis may lead to inappropriate antibiotic use. Most patients with sore throat seek symptom relief rather than antibiotics, therefore, therapies that relieve symptoms should be recommended to patients. We report two clinical trials on the efficacy and safety of flurbiprofen 8.75 mg lozenge in patients with and without streptococcal sore throat. Methods: The studies enrolled adults with moderate-to-severe throat symptoms (sore throat pain, difficulty swallowing and swollen throat) and a diagnosis of pharyngitis. The practitioner assessed the likelihood of Strep A infection based on historical and clinical findings. Patients were randomised to flurbiprofen 8.75 mg or placebo lozenges under double-blind conditions and reported the three throat symptoms at baseline and at regular intervals over 24 h. Results: A total of 402 patients received study medication (n = 203 flurbiprofen, n = 199 placebo). Throat culture identified Strep A in 10.0% of patients and group C streptococcus (Strep C) in a further 14.0%. The practitioners' assessments correctly diagnosed Strep A in 11/40 cases (sensitivity 27.5%, and specificity 79.7%). A single flurbiprofen lozenge provided significantly greater relief than placebo for all three throat symptoms, lasting 3-4 h for patients with and without Strep A/C. Multiple doses of flurbiprofen lozenges over 24 h also led to symptom relief, although not statistically significant in the Strep A/C group. There were no serious adverse events. Conclusions: The results highlight the challenge of identifying Strep A based on clinical features. With the growing problem of antibiotic resistance, non-antibiotic treatments should be considered. As demonstrated here, flurbiprofen 8.75 mg lozenges are an effective therapeutic option, providing immediate and long-lasting symptom relief in patients with and without Strep A/C infection. abstract_id: PUBMED:26096503 Strep-Tagged Protein Purification. The Strep-tag system can be used to purify recombinant proteins from any expression system. Here, protocols for lysis and affinity purification of Strep-tagged proteins from E. coli, baculovirus-infected insect cells, and transfected mammalian cells are given. Depending on the amount of Strep-tagged protein in the lysate, a protocol for batch binding and subsequent washing and eluting by gravity flow can be used. Agarose-based matrices with the coupled Strep-Tactin ligand are the resins of choice, with a binding capacity of up to 9 mg ml⁻¹. For purification of lower amounts of Strep-tagged proteins, the use of Strep-Tactin magnetic beads is suitable. In addition, Strep-tagged protein purification can also be automated using prepacked columns for FPLC or other liquid-handling chromatography instrumentation, but automated purification is not discussed in this protocol.
The protocols described here can be regarded as an update of the Strep-Tag Protein Handbook (Qiagen, 2009). abstract_id: PUBMED:31080532 Cost-minimization analysis of community pharmacy-based point-of-care testing for strep throat in 5 Canadian provinces. Background: Strep throat point-of-care (POC) testing in community pharmacies will enable pharmacist-based care for this condition. Our objective was to conduct an economic evaluation of treating severe sore throat when this service was offered in pharmacies in 5 Canadian provinces. Methods: We conducted 5 separate cost-minimization analyses for the provinces of Alberta, British Columbia, Nova Scotia, Ontario and Saskatchewan, from the public payer perspective, to estimate mean cost per patient of treating severe sore throat in 2 scenarios: 1) physician-based usual care in a family physician's office, a walk-in clinic or an emergency room (ER) and 2) a new scenario where patients received care described above or in a pharmacy offering strep throat POC testing. One-way sensitivity analyses were conducted to account for model uncertainty. Results: Mean cost per patient for each pathway in the base-case analyses for the 5 provinces ranged from 1) $37.55 to $61.57 for family physician, 2) $37.55 to $61.57 for walk-in clinic, 3) $38.88 to $57.56 for ER and 4) $19.12 to $21.83 for pharmacy, representing savings ranging from $12.47 to $24.36 per patient for the new scenario. Approximate total cost savings range from $1.3 million to $2.6 million per year across the 5 provinces. All sensitivity analyses yielded cost savings for the new scenario. Discussion: Across 5 provinces, strep throat POC testing in pharmacies was cost saving compared to physician-based care. Sensitivity analyses demonstrated the robustness of these results. Conclusions: Funding strep throat POC testing in community pharmacies in these 5 provinces would lead to public health system cost savings and potentially improve patients' access to care for severe sore throat. Answer: The rational clinical examination for diagnosing strep throat involves evaluating the likelihood of group A beta-hemolytic streptococcal pharyngitis (GABHS) based on clinical findings and history-taking. The most useful findings for assessing the likelihood of strep throat include the presence of tonsillar exudate, pharyngeal exudate, or recent exposure to strep throat infection, with positive likelihood ratios of 3.4, 2.1, and 1.9, respectively. Conversely, the absence of tender anterior cervical nodes, tonsillar enlargement, or exudate suggests a lower likelihood of strep throat, with negative likelihood ratios of 0.60, 0.63, and 0.74, respectively (PUBMED:11147989). No single element of the clinical examination is accurate enough to definitively rule in or rule out strep throat on its own. However, three validated clinical prediction rules are described for adult and pediatric populations that can aid in making more informed decisions regarding the use of rapid antigen tests and throat cultures (PUBMED:11147989). In addition to clinical examination, various diagnostic methods can be employed, such as rapid antigen detection tests, throat cultures, nucleic acid amplification tests, and machine learning and artificial intelligence. Clinical guidelines from professional associations can assist medical professionals in selecting the appropriate diagnostic techniques, although there is evidence that many physicians do not adhere to these guidelines (PUBMED:33178623). 
The use of rapid strep tests is recommended when patients present with severe symptoms or when early detection is warranted by special circumstances. Negative rapid tests should be confirmed by throat culture, and specific antistreptococcal therapy should be initiated if either test is positive (PUBMED:2204901). In summary, while the clinical examination provides valuable information, it should be used in conjunction with validated prediction rules and diagnostic tests to accurately diagnose strep throat and inform treatment decisions.
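To make the likelihood-ratio arithmetic behind these conclusions concrete, the sketch below converts a pre-test probability of strep throat into a post-test probability on the odds scale, using likelihood ratios reported in PUBMED:11147989. It is an illustrative sketch only: the 10% pre-test probability is a hypothetical assumption, not a figure taken from the abstracts.

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Update a disease probability with a likelihood ratio (Bayes on the odds scale)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * lr                         # odds are multiplied by the LR
    return post_odds / (1.0 + post_odds)              # odds -> probability

# Likelihood ratios reported in PUBMED:11147989
LR_TONSILLAR_EXUDATE = 3.4   # positive LR when the finding is present
LR_NO_TENDER_NODES = 0.60    # negative LR when tender anterior nodes are absent

pre = 0.10  # hypothetical pre-test probability of strep throat
print(f"exudate present: {post_test_probability(pre, LR_TONSILLAR_EXUDATE):.2f}")  # ~0.27
print(f"no tender nodes: {post_test_probability(pre, LR_NO_TENDER_NODES):.2f}")    # ~0.06
```

The modest size of these shifts illustrates the review's central point: no single finding moves the probability far enough to rule strep throat in or out, which is why the validated prediction rules combine several findings.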
Instruction: Blood cultures for women hospitalized with acute pelvic inflammatory disease. Are they necessary? Abstracts: abstract_id: PUBMED:11584483 Blood cultures for women hospitalized with acute pelvic inflammatory disease. Are they necessary? Objective: To determine the incidence of positive blood cultures and if the results affect the clinical management or the duration of hospital stay in patients with acute pelvic inflammatory disease (PID). Study Design: Retrospective study of all patients hospitalized with a diagnosis of acute PID from January 1, 1996, to December 31, 1997. Results: Of 93 patients in the study, 3 had significant bacterial growth from blood culture specimens. The results of blood culture specimens did not affect clinical management. Conclusion: Routine specimens for blood culture may not be needed from patients hospitalized with acute PID. abstract_id: PUBMED:7803638 Comparison of three regimens recommended by the Centers for Disease Control and Prevention for the treatment of women hospitalized with acute pelvic inflammatory disease. This six-center, prospective, open-label clinical trial compared the efficacy and safety of three regimens recommended by the Centers for Disease Control and Prevention (CDC) for the treatment of women hospitalized for acute pelvic inflammatory disease (PID). The study focused on the response to inpatient therapy, not on long-term prevention of sequelae. A severity score was used for objective comparison of the degree of illness before and after therapy. Women were randomly assigned (in a 1:1:1 ratio) to treatment with cefoxitin plus doxycycline, clindamycin plus gentamicin, or cefotetan plus doxycycline. Two hundred seventy-five (94.2%) of 292 evaluable women required no alteration in therapeutic regimen. The three regimens produced almost identical cure rates. No serious adverse clinical or laboratory events were observed. In short, the three regimens recommended by the CDC for inpatient therapy of acute PID were similarly effective and safe. abstract_id: PUBMED:3162653 Treatment of hospitalized patients with acute pelvic inflammatory disease: comparison of cefotetan plus doxycycline and cefoxitin plus doxycycline. Acute pelvic inflammatory disease remains the major medical and economic consequence of sexually transmitted diseases among young women. The polymicrobial origins of pelvic inflammatory disease have been well documented and the major organisms recovered from the upper genital tract in patients with pelvic inflammatory disease include Chlamydia trachomatis, Neisseria gonorrhoeae, and mixed anaerobic and aerobic bacteria. This study was undertaken to compare the efficacy and safety of cefotetan plus doxycycline with that of cefoxitin plus doxycycline in the treatment of hospitalized patients with acute pelvic inflammatory disease. A total of 68 hospitalized patients with acute pelvic inflammatory disease were entered and randomized into two treatment groups: cefotetan (n = 32) and cefoxitin (n = 36). There were six tuboovarian abscesses in each group. C. trachomatis was recovered from 7 (10%) and N. gonorrhoeae from 48 (71%) of the patients. Anaerobic and aerobic bacteria were recovered from the upper genital tract in 53 (78%) of the patients. Cefotetan plus doxycycline and cefoxitin plus doxycycline demonstrated high rates of initial clinical response in the treatment of acute pelvic inflammatory disease. 
Clinical cure was noted in 30 (94%) of the cefotetan plus doxycycline group and 33 (92%) of the cefoxitin plus doxycycline group. Four failures were sonographically diagnosed tuboovarian abscesses that responded to clindamycin plus gentamicin therapy. The fifth failure was an uncomplicated case that did not respond to cefoxitin and doxycycline and required additional therapy. At 1 week and 3 weeks, respectively, the posttreatment cultures demonstrated eradication, in all instances, of N. gonorrhoeae and C. trachomatis. These regimens also were very effective in eradicating anaerobic and aerobic pathogens from the endometrial cavity. Both regimens were well tolerated by the patients, and few adverse drug effects were noted. abstract_id: PUBMED:1477250 Evaluation of new anti-infective drugs for the treatment of acute pelvic infections in hospitalized women. Infectious Diseases Society of America and the Food and Drug Administration. This set of guidelines deals with evaluation of anti-infective drugs for treatment of acute pelvic infections in hospitalized women. The clinical entities include infectious complications of cesarean section; elective hysterectomy; and septic, incomplete abortion. Conditions including endomyometritis, cuff cellulitis, pelvic cellulitis, parametritis, phlegmon, and pelvic abscesses may arise due to a variety of bacterial species, both aerobic and anaerobic, that comprise the endogenous flora of the lower reproductive tract. Anaerobic bacteria have assumed particular importance, and therapy should be directed against such organisms. The roles of enterococci, chlamydiae, and mycoplasmas remain uncertain. Culture samples must be obtained under conditions assuring minimal vaginal contamination. Before a new drug may be used for treatment of human pelvic infections, considerable information is necessary about its antimicrobial spectrum as well as its safety and efficacy. Placebo-controlled trials are considered unethical. Historical controls may be used, but concurrent active control comparative trials are preferred. Parenteral administration is recommended for at least the initial 4 days of therapy, but orally administered drugs may be evaluated for completion of longer courses. The expected cure rate is approximately 90%. Uncomplicated infections should be treated for at least 4 days; more complicated infections may require prolonged therapy. Although clinical cure is paramount, microbiologic response must also be taken into account. In the final assessment, outcome will be classified as cure, failure, or indeterminate. abstract_id: PUBMED:8511708 Gonorrhea, genital chlamydial infection, and nonspecific urethritis in male partners of women hospitalized and treated for acute pelvic inflammatory disease. Background And Objectives: Acute pelvic inflammatory disease (PID) is often a complication to a sexually transmitted disease (STD), the most important agents being Neisseria gonorrhoeae and Chlamydia trachomatis. However, very little is known of the genitourinary status of the male partners of women with acute pelvic inflammatory disease (PID). Goal Of This Study: To determine the presence of N. gonorrhoeae and/or C. trachomatis infection or nonspecific urethritis (NSU) in regular sexual male partners of women with acute PID.
Study Design: Two hundred regular sexual male partners to 196 women admitted to a hospital for treatment of acute PID were referred by contact tracing to the sexually transmitted disease outpatient clinic for clinical and laboratory examination regarding N. gonorrhoeae and/or C. trachomatis infection, or NSU defined as the presence of >5 polymorphonuclear leukocytes per high-power field (×1,000) in >4 fields and with negative laboratory tests for N. gonorrhoeae and C. trachomatis. Results: The majority of the males was in the age group 20 to 29 years of age, female sexual partners in 15 to 24 years of age. N. gonorrhoeae was demonstrated in 42.9% of the male partners to women with acute PID and concomitant gonorrhea. The corresponding figure for C. trachomatis was 43.7%. Nonspecific urethritis was diagnosed in 26 (33.8%) of the male partners to 77 women who were diagnosed with N. gonorrhoeae and/or C. trachomatis infection, and in 45 (37.8%) partners of 119 women without such an infection. In all, N. gonorrhoeae, C. trachomatis or NSU were demonstrated in 117 (59.7%) of the 196 male partners, but only 32% of the males with N. gonorrhoeae or C. trachomatis and 8.5% of those with NSU presented subjective symptoms of urethritis. Conclusion: The findings of the study stress the need for routine clinical and laboratory examination and treatment of sexual male partners to women with acute PID. abstract_id: PUBMED:7256494 Pelvic inflammatory disease in the United States. Epidemiology and trends among hospitalized women. The Hospital Discharge Survey, conducted by the National Center for Health Statistics (Rockville, Md.), provides national estimates for conditions causing hospitalization in short-stay hospitals in the United States. The Venereal Disease Control Division of the Centers for Disease Control (Atlanta, Ga.) obtained survey data for 1970-1975 and analyzed the epidemiology of pelvic inflammatory disease (PID) in women hospitalized for this disease. An average of greater than 211,000 female patients older than 10 years of age were hospitalized annually for PID. Acute salpingitis occurred predominantly in women younger than 30 years of age. Women of all races other than white had a PID rate 3.3 times greater than that of white women. Data obtained from the Commission on Professional and Hospital Activities were used for determination of the trend in hospitalizations for PID. In all races other than white, the trend appears stable; however the trend among white women is increasing. abstract_id: PUBMED:11015277 The incidence of positive cultures in women suspected of having PID/Salpingitis OBJECTIVES: To determine the incidence of GC and chlamydia in women with suspected acute pelvic inflammatory disease. A secondary objective was to investigate the clinical usefulness of physical exam findings of vaginal discharge, cervical motion, and adnexal tenderness. METHODS: This was a retrospective chart review of patients seen in the ED. Patients were entered into the study if they had PID/salpingitis as their discharge diagnosis and cultures performed for GC and chlamydia. RESULTS: A total of 133 charts were reviewed. 13 patients were excluded due to incomplete charting of history and physical exam or inability to obtain culture results. Of the remaining 120 patients, 70 cultures were negative for any growth. 10 cultures were positive for GC and 10 cultures were positive for chlamydia.
In reviewing the physical exam findings in women with negative cultures, 74% had discharge, 93% had cmt, and 66% had adnexal tenderness. For women with cultures positive for GC and chlamydia, 90% had discharge, 80% had cmt, and 75% had adnexal tenderness. For vaginal discharge the sens, spec, ppv, npv were 0.90, 0.26, 0.35, 0.90. For cmt the sens, spec, ppv, npv were 0.80, 0.07, 0.20, 0.55. For adnexal tenderness the sens, spec, ppv, npv were 0.75, 0.34, 0.25, 0.83. No combination of the three physical exam findings increased the ppv. CONCLUSIONS: The incidence of positive cultures in women suspected of having PID/salpingitis is 16%. The physical exam findings were not predictive of disease secondary to the high number of false positives. Although 16% positives may warrant empiric treatment, patients should not be told they have PID/salpingitis until the cultures are available. abstract_id: PUBMED:6441287 Acute salpingitis and thiamphenicol: a microbiologic and therapeutic study. Eighty-five sexually active women with clinically suspected adnexitis and illness severe enough to require hospitalization were studied. The clinical diagnosis, based on anamnestic data and physical and pelvic examination, was confirmed by laparoscopy and by cultures for aerobic and anaerobic bacteria and for Chlamydia in both cervical canal and intraperitoneal secretions. A ten-day course of thiamphenicol was begun on an empirical basis after laparoscopy. The results showed that fever, an elevated erythrocyte sedimentation rate, and leukocytosis are unreliable diagnostic parameters and that laparoscopy in conjunction with microbial cultures is the only method by which a definite etiologic diagnosis can be established. Positive results of cultures of specimens from the cervical canal are sufficient for the diagnosis of infection due to Neisseria gonorrhoeae, whereas positive culture results for specimens from the intraperitoneal cavity are necessary for the diagnosis of infection caused by Chlamydia trachomatis. Primary treatment with thiamphenicol was successful in 77 (91%) of the 85 patients. Thus, thiamphenicol proved to be effective in the treatment of acute adnexitis. abstract_id: PUBMED:19596611 Urgent care in gynaecology: resuscitation and management of sepsis and acute blood loss. Sepsis and/or acute blood loss can be encountered as an emergency condition in gynaecology, especially in women with ectopic pregnancy/miscarriage, acute pelvic inflammatory disease (PID)/tuboovarian abscesses, post-puerperal sepsis/haemorrhage and even in postoperative scenarios. If underestimated or suboptimally treated, both can lead to an inadequate tissue perfusion (defined as shock) and the development of multi-organ failure. Morbidity and mortality after development of one of the shock syndromes (septic or haemorrhagic) correlates directly with the duration and severity of the malperfusion. The patient's prognosis depends on a prompt diagnosis of the presence of shock and immediate resuscitation to predefined physiological end-points, often before the cause of the shock has been identified. In septic shock, hypotension is primarily treated with fluid administration and eventually vasopressors, if required, in order to improve the circulation. Timely administration of antibiotics, control of infectious foci, appropriate use of corticoids and recombinant human activated protein C, tight glucose control, prophylaxis of deep vein thrombosis and stress ulcer prevention complete the therapy of septic shock.
In haemorrhagic shock, the treatment primarily involves controlling haemorrhage, reversal of possible coagulopathy and administration of sufficient volumes of fluids and blood products to restore normal tissue perfusion. abstract_id: PUBMED:15715539 Reduction in hospitalized women with pelvic inflammatory disease in Oslo over the past decade. Background: A tuboovarian abscess (TOA) is a common complication of pelvic inflammatory disease (PID), occurring world-wide in 15-30% of women with PID. The aim of the study was to identify changes during the last 10 years in the number of women hospitalized with PID in Oslo, as well as a change regarding the frequencies of the subdiagnoses salpingitis and tuboovarian abscess. Methods: We performed a review of computerized diagnosis lists and manual check of the medical records of women hospitalized with PID in Oslo. The years 1990-92 and 2000-02 were included, resulting in information from two time periods 10 years apart. Cases were registered as salpingitis, oophoritis or tuboovarian abscess. Medical and demographic variables from the medical records of women diagnosed during 4 out of the 6 years were described in detail. Results: We identified 523 women with the diagnosis of PID hospitalized during 1990-92 and 2000-02. There was a 35% reduction in hospitalized cases of salpingitis over the period of 10 years, but the number of cases of tuboovarian abscesses among women admitted for PID remained unchanged from 1990-92 to 2000-02. We found low frequencies of Chlamydia trachomatis and Neisseria gonorrhoea infections, although documented bacteriological sampling was insufficient. Conclusions: Fewer patients were hospitalized in Oslo for PID during the time period of 2000-02 compared with 10 years earlier, but a higher percentage of patients had developed TOA compared with the first time period (43% compared with 26%, p = 0.013), indicating a changing clinical panorama of PID. Answer: Based on the provided abstracts, the necessity of blood cultures for women hospitalized with acute pelvic inflammatory disease (PID) is questioned. Specifically, a retrospective study aimed to determine the incidence of positive blood cultures and their impact on clinical management or hospital stay duration in patients with acute PID (PUBMED:11584483). The results from this study showed that out of 93 patients, only 3 had significant bacterial growth from blood culture specimens, and the blood culture results did not affect clinical management. The conclusion drawn from this study was that routine specimens for blood culture may not be needed for patients hospitalized with acute PID (PUBMED:11584483). This suggests that while blood cultures can identify the presence of bacteria in the bloodstream, their low incidence of positive results and lack of impact on treatment decisions in acute PID cases may render them unnecessary in routine clinical practice for this specific patient group.
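As a companion to the test-performance figures quoted above (PUBMED:11015277, PUBMED:33494718), the sketch below shows how sensitivity, specificity, PPV, and NPV fall out of a standard 2×2 table. The counts are hypothetical, back-solved only to approximate the reported sensitivity (0.90) and specificity (0.26) for vaginal discharge at the study's 16% prevalence; the derived predictive values will not exactly match every quoted figure, which itself shows how strongly prevalence drives PPV and NPV.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-performance measures from a 2x2 table (finding vs. culture)."""
    return {
        "sensitivity": tp / (tp + fn),  # culture-positive patients with the finding
        "specificity": tn / (tn + fp),  # culture-negative patients without the finding
        "ppv": tp / (tp + fp),          # probability of disease given the finding
        "npv": tn / (tn + fn),          # probability of no disease given its absence
    }

# Hypothetical counts: 120 patients, 20 culture-positive (16% prevalence)
for name, value in diagnostic_metrics(tp=18, fp=74, fn=2, tn=26).items():
    print(f"{name}: {value:.2f}")
# sensitivity: 0.90, specificity: 0.26, ppv: 0.20, npv: 0.93
```

A PPV this low at 16% prevalence is exactly why the abstract cautions against labelling patients with PID/salpingitis before culture results are available.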
Instruction: Bodyweight gain under pregabalin therapy in epilepsy: mitigation by counseling patients? Abstracts: abstract_id: PUBMED:18060813 Bodyweight gain under pregabalin therapy in epilepsy: mitigation by counseling patients? Objective: To evaluate bodyweight gain during pregabalin therapy for epilepsy and the utility of a short counseling program to prevent this side effect. Methods: Randomized controlled trial on the effects of extended versus standard patient counseling on the risk of bodyweight gain with 3- and 6-month follow-up including a consecutive sample of adult outpatients with epilepsy eligible for pregabalin add-on treatment (N=98). Results: The seizure response rate was about 30%, the seizure freedom rate was 5% at the 6-month follow-up (intent-to-treat sample, N=98). The median bodyweight gain for the according-to-protocol sample (N=62) was 4.0 kg with no effect of extended counseling. Bodyweight gain was correlated with number of anticonvulsant drugs (r=.32, p<.05). Conclusions: Pregabalin treatment is associated with a high risk for bodyweight gain which in part depends on total anticonvulsant drug load. This side effect cannot be prevented by extended patient counseling within a standard clinical setting. abstract_id: PUBMED:18047602 Weight issues for people with epilepsy--a review. Weight gain or loss is not an integral part of epilepsy although a sedentary lifestyle can contribute to weight gain. Pharmacological treatment for epilepsy may be associated with substantial weight changes that may increase morbidity and impair adherence to the treatment regimen. Antiepileptic drugs (AEDs) associated with weight loss are felbamate, topiramate, and zonisamide. AEDs associated with weight gain are gabapentin, pregabalin, valproic acid, and vigabatrin and possibly, carbamazepine. Weight neutral AEDs are lamotrigine, levetiracetam, and phenytoin. In clinical practice it is critical to weigh patients regularly and AED selection should be based on each patient's profile without sacrificing therapeutic efficacy. abstract_id: PUBMED:25513768 Cosmetic side effects of antiepileptic drugs in adults with epilepsy. Objective: Cosmetic side effects (CSEs) such as weight gain and alopecia are common, undesirable effects associated with several AEDs. The objective of the study was to compare the CSE profiles in a large specialty practice-based sample of patients taking both older and newer AEDs. Methods: As part of the Columbia and Yale AED Database Project, we reviewed patient records including demographics, medical history, AED use, and side effects for 1903 adult patients (≥16 years of age) newly started on an AED. Cosmetic side effects were determined by patient or physician report in the medical record and included acne, gingival hyperplasia, hair loss, hirsutism, and weight gain. We compared the overall rate of CSEs and intolerable CSEs (ICSEs-CSEs that led to dosage reduction or discontinuation) between different AEDs in both monotherapy and polytherapy. Results: Overall, CSEs occurred in 110/1903 (5.8%) patients and led to intolerability in 70/1903 (3.7%) patients. Weight gain was the most commonly reported CSE (68/1903, 3.6%) and led to intolerability in 63 (3.3%) patients. Alopecia was the second most common patient-reported CSE (36/1903, 1.9%) and was intolerable in 33/1903 (1.7%) patients. Risk factors for CSEs included female sex (7.0% vs. 4.3% in males; p<0.05) and any prior CSE (37% vs. 2.9% in patients without prior CSE; p<0.001).
Significantly more CSEs were attributed to valproic acid (59/270; 21.9%; p<0.001) and pregabalin (14/143; 9.8%; p<0.001) than to all other AEDs. Significantly fewer CSEs were attributed to levetiracetam (7/524; 1.3%; p=0.002). Weight gain was most frequently associated with valproic acid (35/270; 13.0%; p<0.001) and pregabalin (12/143; 8.4%; p<0.001). Hair loss was most commonly reported among patients taking valproic acid (24/270; 8.9%; p<0.001). Finally, gingival hyperplasia was most commonly reported in patients taking phenytoin (10/404; 2.5%; p<0.001). Cosmetic side effects leading to dosage change or discontinuation occurred most frequently with pregabalin and valproic acid compared with all other AEDs (13.3 and 5.6% vs. 2.3%; p<0.001). For patients who had been on an AED in monotherapy (n=677), CSEs and ICSEs were still more likely to be attributed to valproic acid (30.2% and 17.1%, respectively) than to any other AED (both p<0.001). Significance: Weight gain and alopecia were the most common patient-reported CSEs in this study, and weight gain was the most likely cosmetic side effect to result in dosage adjustment or medication discontinuation. Particular attention should be paid to pregabalin, phenytoin, and valproic acid when considering cosmetic side effects. Female patients and patients who have had prior CSE(s) to AED(s) were more likely to report CSEs. Knowledge of specific CSE rates for each AED found in this study may be useful in clinical practice. abstract_id: PUBMED:35596110 Impact of Antiseizure Medications on Appetite and Weight in Children. There are numerous potential factors that may affect growth in children with epilepsy, and these must be evaluated in any child with appetite and weight concerns. Antiseizure medications (ASMs) have potential adverse effects, and many may affect appetite, thus impacting normal growth and weight gain. The aim of this review is to focus on the impact of both epilepsy and ASMs on appetite and weight in children. We systematically reviewed studies using Medline assessing the impact of ASMs on appetite and weight in children. Eligible studies included randomized controlled trials and open-label studies (open-label extension and interventional) that targeted or included the pediatric population (0-18 years of age). Each study was classified using the American Academy of Neurology (AAN) Classification of Evidence for Therapeutic Studies, and the level of evidence for impact on appetite and weight in children was graded. ASMs associated with decreased appetite and/or weight loss include fenfluramine, topiramate, zonisamide, felbamate, rufinamide, stiripentol, cannabidiol, brivaracetam and ethosuximide; ASMs with minimal impact on weight and appetite in children include oxcarbazepine, eslicarbazepine, lamotrigine, levetiracetam, lacosamide, carbamazepine, vigabatrin and clobazam. The ASM most robustly associated with increased appetite and/or weight gain is valproic acid; however, both pregabalin and perampanel may also lead to modest weight gain or increased appetite in children. Certain ASMs may impact both appetite and weight, which may lead to increased morbidity of the underlying disease and impaired adherence to the treatment regimen. abstract_id: PUBMED:24308788 Weight change, genetics and antiepileptic drugs. Weight gain caused by antiepileptic drugs (AEDs) constitutes a serious problem in the management of people with epilepsy.
AEDs associated with weight gain include sodium valproate, pregabalin and vigabatrin. Excessive weight gain can lead to non-compliance with treatment and to an exacerbation of obesity-related conditions. The mechanisms by which AEDs cause weight gain are not fully understood. It is likely that weight change induced by some AEDs has a genetic underpinning, and recent developments in DNA sequencing technology should speed the understanding, prediction and thus prevention of serious weight change associated with AEDs. This review focuses on the biology of obesity in the context of AEDs. Future directions in the investigations of the mechanism of weight change associated with these drugs and the use of such knowledge in tailoring the treatment of specific patient groups are explored. abstract_id: PUBMED:19213577 Pregabalin as adjunctive therapy for partial epilepsy: an audit study in 96 patients from the South East of England. Introduction: Pregabalin (PGB) was licensed in the EU in 2004 as an adjunctive therapy in partial epilepsy. It is also licensed for neuropathic pain and generalised anxiety. Aims: To identify the clinical usefulness and side effects of add-on PGB in out-patient epilepsy clinics. Methods: We performed an audit on 96 consecutive patients (44 male) prescribed PGB for refractory epilepsy. Mean follow-up, for those who remained on PGB, was 23 months (range 12-39 months). Results: Fifty patients remained on PGB, 37 of whom reported clear improvement in seizure frequency. Among these 37 patients, 1 was seizure free for 15 months; 29 had a seizure reduction of >50%; and 7 improved by <50%. Eight patients reported a decrease in seizure severity without change in seizure frequency. Nine patients reported an incidental improvement in anxiety. Side effects were reported by 25 patients out of the 50 patients still on treatment: 12 reported drowsiness or tiredness, 8 weight gain, 7 dizziness, 2 headache, 2 cognitive side effects, 1 irritability, 1 itchiness, 1 anxiety, and 1 transient rash. Among the 46 patients who discontinued treatment, 9 had worsening of seizure frequency, 27 lack of efficacy and 9 intolerable side effects necessitating withdrawal (4 dizziness or drowsiness, 2 weight gain, 1 peripheral oedema, 1 pain in arms and legs, 1 irritability and cognitive side effects). One patient had a seizure related death (probably drowning) within 1 month of starting PGB. Conclusion: Pregabalin seems to be an effective and well-tolerated anti-epileptic drug when used as add-on treatment in patients with refractory partial epilepsy. abstract_id: PUBMED:32298454 Assessment of the adequacy of counselling regarding reproductive-related issues in women of childbearing age on anti-epileptic drugs. Background: The use of anti-epileptic drugs (AEDs) in women of childbearing age (WCBA) necessitates careful counselling regarding reproductive-related issues. Aim: (i) To compare documentation of appropriate counselling regarding reproductive-related issues in WCBA prescribed AEDs for non-epilepsy vs. epilepsy indications, and (ii) to examine whether the frequency of counselling improved after introduction of 'standardized typed advice'. Design: Retrospective audit and quality assessment and improvement programme.
Methods: We analysed medical records of all WCBA prescribed gabapentin, pregabalin, topiramate, valproate or carbamazepine by a general neurology clinical service before (Study period A) and after (Study period B) introduction of standardized typed passages regarding potential teratogenicity ± interactions with hormonal contraception at a university teaching hospital. The χ2 test or the Fisher's exact test was employed, as appropriate. Results: In WCBA prescribed AEDs for non-epilepsy indications, documentation of appropriate counselling regarding potential teratogenicity improved from 49% (17/35 patients) in Period A to 79% (27/34 patients) in Period B (P = 0.008). The frequency of counselling regarding teratogenicity was higher in patients prescribed AEDs for epilepsy compared with non-epilepsy indications in Study period A (100% vs. 49%, P = 0.002), but was no longer significantly different in Study period B (86% vs. 79%, P = 0.64). Documentation of counselling regarding potential interaction of enzyme-inducing AEDs with hormonal contraception did not significantly change between study periods. Conclusion: Significant improvements in documentation regarding potential teratogenicity of AEDs prescribed for non-epilepsy indications can be achieved by introducing standardized, typed passages copied to patients. Such a practice change is practical and widely applicable to neurological and non-neurological practice worldwide. abstract_id: PUBMED:19734010 The long-term retention of pregabalin in a large cohort of patients with epilepsy at a tertiary referral centre. Pregabalin (PGB) is a new antiepileptic drug (AED) which is a structural, non-functional analogue of gamma-aminobutyric acid. It acts at presynaptic calcium channels to modulate neurotransmitter release in the CNS. While the efficacy and tolerability of PGB have been demonstrated in several randomised controlled trials, few studies have addressed long-term outcome in large groups of patients. A cohort of patients attending a tertiary referral centre for epilepsy was identified as having started taking PGB. Patients' data were obtained through medical records. Of 402 patients included, 42% of patients were still taking PGB at last follow-up. The estimated 2.5-year retention rate was 32%. Males appeared more likely to continue on PGB therapy than females. The common adverse experiences (AEs) leading to withdrawal were CNS-related, psychiatric AEs and weight gain. Published retention rates for levetiracetam appear to be higher, and those for gabapentin lower, than the rates estimated for PGB. abstract_id: PUBMED:18579443 Adjunctive pregabalin therapy in mentally retarded, developmentally delayed patients with epilepsy. This retrospective study evaluated the efficacy and tolerability of adjunctive pregabalin (PGB) therapy in mentally retarded, developmentally delayed patients. The primary efficacy measure was the change in the median frequency of seizure days per week between the baseline (8 weeks prior to initiating PGB) and treatment (12 weeks of titration and maintenance) periods. Inclusion criteria were: documented epilepsy treated with antiepileptic drug, at least one seizure during the baseline period, and lack of prior exposure to PGB. Seven patients (four female, three male, mean age=43) with multiple seizure types (generalized tonic-clonic, tonic, partial, and atypical absence) met the inclusion criteria. The mean dose of PGB was 293 mg/day (range=150-350 mg/day). 
PGB was efficacious, resulting in a significant reduction in the median frequency of seizure days/week between baseline and treatment (1.38 vs 0.50, P=0.018). The 50% responder rate was 71%. The adverse effects at last follow-up (mean 13 months) included weight gain, myoclonus, and sedation. abstract_id: PUBMED:18554985 Efficacy and tolerability of pregabalin in patients with difficult-to-treat epilepsy and intellectual disability. In a retrospective evaluation of 32 inpatients with therapy-resistant epilepsy and intellectual disability, the efficacy of pregabalin (PGB) treatment was assessed after 6 and 12 months. The combined efficacy measure included the percentage reduction in seizure frequency, as well as the Clinical Global Impression (CGI) scale. Tolerability was assessed using a list of the 10 adverse effects most frequently observed in the regulatory studies and also by the CGI scale. After 6 months, the retention rate was 75%. Six patients (18.75%) were responders (50% seizure reduction and/or "good" or "very good" effect on CGI). No patient was seizure free. Seven patients had adverse effects that were not impairing. Eight patients had side effects that were essentially impairing. Weight gain, somnolence, asthenia, and ataxia were the most frequent adverse effects. Rare adverse events were severe mental slowing and loss of daily life capacities on a low dose of PGB in one patient and increase in auto-aggression in another patient. After 12 months, the retention rate was 40.6%, the responder rate was 25%, and one patient was seizure free. Statistical analysis did not identify any predictor of outcome (seizure type, epilepsy syndrome, co-medication, degree of intellectual disability). In this highly selected population, the efficacy of PGB was only moderate. Answer: The bodyweight gain associated with pregabalin therapy in epilepsy patients appears to be a significant concern and is not mitigated by counseling patients. A randomized controlled trial aimed at evaluating the effects of extended versus standard patient counseling on the risk of bodyweight gain found that the median bodyweight gain was 4.0 kg for the according-to-protocol sample, with no effect of extended counseling. The study concluded that pregabalin treatment is associated with a high risk for bodyweight gain, which partly depends on the total anticonvulsant drug load, and that this side effect cannot be prevented by extended patient counseling within a standard clinical setting (PUBMED:18060813). Additionally, weight gain is a known side effect of certain antiepileptic drugs (AEDs), including pregabalin. It is important to regularly weigh patients and consider AED selection based on each patient's profile without sacrificing therapeutic efficacy (PUBMED:18047602). Cosmetic side effects such as weight gain and alopecia are common with AEDs, with weight gain being the most commonly reported cosmetic side effect and the most likely to result in dosage adjustment or medication discontinuation. Pregabalin is specifically noted for its association with weight gain (PUBMED:25513768). In children, pregabalin is also associated with increased appetite and/or weight gain, which may lead to increased morbidity of the underlying disease and impaired adherence to the treatment regimen (PUBMED:35596110). The mechanisms by which AEDs cause weight gain are not fully understood, but it is likely that weight change induced by some AEDs, including pregabalin, has a genetic underpinning (PUBMED:24308788). 
Overall, the evidence suggests that while pregabalin is an effective AED, its use is associated with a significant risk of weight gain, and counseling alone does not appear to be an effective strategy to mitigate this side effect.
Instruction: Distributed learning or medical tourism? Abstracts: abstract_id: PUBMED:20922033 Students' perception of the learning environment in a distributed medical programme. Background: The learning environment of a medical school has a significant impact on students' achievements and learning outcomes. The importance of equitable learning environments across programme sites is implicit in distributed undergraduate medical programmes being developed and implemented. Purpose: To study the learning environment and its equity across two classes and three geographically separate sites of a distributed medical programme at the University of British Columbia Medical School that commenced in 2004. Method: The validated Dundee Ready Educational Environment Survey was sent to all students in their 2nd and 3rd year (classes graduating in 2009 and 2008) of the programme. The domains of the learning environment surveyed were: students' perceptions of learning, students' perceptions of teachers, students' academic self-perceptions, students' perceptions of the atmosphere, and students' social self-perceptions. Mean scores, frequency distribution of responses, and inter- and intrasite differences were calculated. Results: The perception of the global learning environment at all sites was more positive than negative. It was characterised by a strongly positive perception of teachers. The work load and emphasis on factual learning were perceived negatively. Intersite differences within domains of the learning environment were more evident in the pioneer class (2008) of the programme. Intersite differences consistent across classes were largely related to on-site support for students. Conclusions: Shared strengths and weaknesses in the learning environment at UBC sites were evident in areas that were managed by the parent institution, such as the attributes of shared faculty and curriculum. A greater divergence in the perception of the learning environment was found in domains dependent on local arrangements and social factors that are less amenable to central regulation. This study underlines the need for ongoing comparative evaluation of the learning environment at the distributed sites and interaction between leaders of these sites. abstract_id: PUBMED:32196092 Accounting for data variability in multi-institutional distributed deep learning for medical imaging. Objectives: Sharing patient data across institutions to train generalizable deep learning models is challenging due to regulatory and technical hurdles. Distributed learning, where model weights are shared instead of patient data, presents an attractive alternative. Cyclical weight transfer (CWT) has recently been demonstrated as an effective distributed learning method for medical imaging with homogeneous data across institutions. In this study, we optimize CWT to overcome performance losses from variability in training sample sizes and label distributions across institutions. Materials And Methods: Optimizations included proportional local training iterations, cyclical learning rate, locally weighted minibatch sampling, and cyclically weighted loss. We evaluated our optimizations on simulated distributed diabetic retinopathy detection and chest radiograph classification. Results: Proportional local training iteration mitigated performance losses from sample size variability, achieving 98.6% of the accuracy attained by centrally hosting in the diabetic retinopathy dataset split with highest sample size variance across institutions. 
Locally weighted minibatch sampling and cyclically weighted loss both mitigated performance losses from label distribution variability, achieving 98.6% and 99.1%, respectively, of the accuracy attained by centrally hosting in the diabetic retinopathy dataset split with highest label distribution variability across institutions. Discussion: Our optimizations to CWT improve its capability of handling data variability across institutions. Compared to CWT without optimizations, CWT with optimizations achieved performance significantly closer to performance from centrally hosting. Conclusion: Our work is the first to identify and address challenges of sample size and label distribution variability in simulated distributed deep learning for medical imaging. Future work is needed to address other sources of real-world data variability. abstract_id: PUBMED:33901356 The trends and perspectives of development of medical tourism The article demonstrates that over the past few years the medical tourism market has gone through significant changes. This especially relates to the regional economy and its aspects in the field of tourism. The article presents an important conclusion that nowadays the top-priority factor in the development of medical tourism is the number of crisis points in the provision of health tourism services, both in the regions of Russia and in the capital region, related to the pandemic and its consequences. The article considers a complex of factors related to the epidemiological crisis and its consequences, including economic and social factors related to the health-preserving technologies of the medical tourism industry. The current condition of the tourism industry, as a branch of the Russian economy, demonstrates that it was among the first hit by the pandemic. The article emphasizes that, due to emerging problematic trends during epidemics and their aftermaths, the possibilities of providing medical tourism services, their concentration in the country, and their costs and conditions are changing, which undoubtedly impacts the economic component and health ecology aspects. The conclusion is made that adjusting the medical tourism industry to the new economic conditions requires truly multidimensional and structured directions and tools that can be applied to find a way out of difficult situations in which sales of medical services have fallen to zero and companies are forced to work out solutions to emerging problems and to plan an operational way out of the existing crisis. abstract_id: PUBMED:37579550 The applications of machine learning techniques in medical data processing based on distributed computing and the Internet of Things. Medical data processing has grown into a prominent topic in the latest decades with the primary goal of maintaining patient data via new information technologies, including the Internet of Things (IoT) and sensor technologies, which generate patient indexes in hospital data networks. Innovations like distributed computing, Machine Learning (ML), blockchain, chatbots, wearables, and pattern recognition can adequately enable the collection and processing of medical data for decision-making in the healthcare era. Particularly, to assist experts in the disease diagnostic process, distributed computing is beneficial by digesting huge volumes of data swiftly and producing personalized smart suggestions. On the other side, the current globe is confronting an outbreak of COVID-19, so an early diagnosis technique is crucial to lowering the fatality rate.
ML systems are beneficial in aiding radiologists in examining the enormous volume of medical images. Nevertheless, they demand a huge quantity of training data that must be unified for processing. Hence, developing Deep Learning (DL) confronts multiple issues, such as conventional data collection, quality assurance, knowledge exchange, privacy preservation, administrative laws, and ethical considerations. In this research, we intend to convey an inclusive analysis of the most recent studies in distributed computing platform applications based on five categorized platforms, including cloud computing, edge, fog, IoT, and hybrid platforms. So, we evaluated 27 articles regarding the usage of the proposed framework, deployed methods, and applications, noting the advantages, drawbacks, and the applied dataset and screening the security mechanism and the presence of the Transfer Learning (TL) method. As a result, it was proved that most recent research (about 43%) used the IoT platform as the environment for the proposed architecture, and most of the studies (about 46%) were done in 2021. In addition, the most popular DL algorithm was the Convolutional Neural Network (CNN), with a percentage of 19.4%. Hence, despite how technology changes, delivering appropriate therapy for patients is the primary aim of healthcare-associated departments. Therefore, further studies are recommended to develop more functional architectures based on DL and distributed environments and better evaluate the present healthcare data analysis models. abstract_id: PUBMED:33957813 Bibliometrix analysis of medical tourism. Medical tourism is an expanding phenomenon. Scientific studies address the changes and challenges of the present and future trend. However, no research considers the study of bibliometric variables and area of business, management and accounting. This bibliometric analysis discovered the following elements: (1) The main articles are based on guest services, management, leadership principles applied, hotel services associated with healthcare, marketing variables and elements that guide the choice in medical tourism; (2) The main authors do not deal with tourism but are involved in various ways in the national health system of the countries of origin or in WHO; (3) cost-efficiency and analytical accounting linked to medical tourism structures and destination choices are not yet developed topics. abstract_id: PUBMED:35935084 Medical tourism in ophthalmology - review. During the last decade, it seems that we have witnessed an upsurge in medical tourism and its documentation. Definitions often refer more to the terms themselves rather than to the medical tourists and frequently approach discussions using the principles of motivation, procedures, and tourism. The aim of this review was to investigate the existing literature on medical tourism in health care services, and in Ophthalmology, and to assess whether the principles of medical tourism are successfully applied in health care services, with a specific interest in Ophthalmology services. abstract_id: PUBMED:34364262 Privacy preserving distributed learning classifiers - Sequential learning with small sets of data. Background: Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, where gathering enough data within a single institution can be particularly challenging.
In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models. Furthermore, we evaluated the capacity of such models to achieve performance equivalent to that of models trained with the same data held in a single centralized database. Methods: We propose a privacy-preserving distributed learning framework that learns sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics dataset, and Stage III NSCLC). Findings: The proposed framework ensured predictive performance comparable to a centralized learning approach. Pairwise DeLong tests showed no significant difference between the compared pairs for each dataset. Interpretation: Distributed learning contributes to preserving medical data privacy. We foresee that this technology will increase the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients. Such models perform similarly to models built on a larger central dataset. abstract_id: PUBMED:32335226 Building machine learning models without sharing patient data: A simulation-based analysis of distributed learning by ensembling. The development of machine learning solutions in medicine is often hindered by difficulties associated with sharing patient data. Distributed learning aims to train machine learning models locally without requiring data sharing. However, the utility of distributed learning for rare diseases, with only a few training examples at each contributing local center, has not been investigated. The aim of this work was to simulate distributed learning models by ensembling with artificial neural networks (ANN), support vector machines (SVM), and random forests (RF) and to evaluate them using four medical datasets. Distributed learning by ensembling locally trained agents improved performance compared to models trained using the data from a single institution, even in cases where only very few training examples are available per local center. Distributed learning improved when more locally trained models were added to the ensemble. Local class imbalance reduced distributed SVM performance but did not impact distributed RF and ANN classification. Our results suggest that distributed learning by ensembling can be used to train machine learning models without sharing patient data and is suitable for use with small datasets. abstract_id: PUBMED:32827378 The particular aspects of development of medical and health promoting tourism The expansion of types of medical care and the development of tourism activities in the Russian medical services market are actively gaining momentum, and certain successes have been achieved in this sector of the economy.
Medical and health tourism is a more complex and complicated category than classical types of tourism, and it requires additional investigation into improving the efficiency of the market activities of related services and into implementing an optimal state policy for regulating and supporting the development of this segment of the tourism industry. The article analyzes the current state of the market for medical services provided to foreign citizens and considers trends in the further development of the medical and health tourism market. The development of medical and health tourism is considered in the context of the need to ensure sustainable development of the national economy and integration cooperation in the field of recreation and tourism. abstract_id: PUBMED:34769723 Identification of Factors for the Development of Medical Tourism in the World. The overall objective of the given paper was to study the relationship of inbound medical tourism destinations with international tourism, the economic development of recipient countries, the development of national healthcare systems, and the institutional features of their environment, in terms of protection of the rights and freedoms of both business and citizens. In order to achieve this objective, the authors used methods of grouping, as well as correlation and regression analysis. The study revealed that the formation of medical tourism destinations in countries with high social and economic development occurs in a balanced and unidirectional manner; at the same time, countries with "new economic development" form a sufficiently powerful and competitive market for medical tourism. All these countries have one thing in common: namely, there is a link between medical tourism and healthcare funding, international tourism, and the development of political and civil freedoms. Nevertheless, the noted aspects are not dominant enough, and this indicates that there are other internal factors and configurations of factors which shape a positive image of countries for medical tourism development. This finding leads to the necessity of further analysis in this field with a breakdown into separate countries or destinations. Answer: The provided abstracts discuss two distinct topics: distributed learning in medical contexts and medical tourism. Distributed learning refers to the practice of training machine learning models across multiple institutions without sharing patient data directly. This approach is particularly relevant in the medical field, where privacy concerns and regulatory constraints make it challenging to centralize patient data. Abstracts PUBMED:20922033, PUBMED:32196092, PUBMED:33901356, PUBMED:37579550, PUBMED:34364262, and PUBMED:32335226 discuss various aspects of distributed learning, including its application in medical imaging, the learning environment in medical education, privacy-preserving techniques, and the challenges of data variability across institutions. On the other hand, medical tourism involves traveling to another country or region to receive medical care, often combining healthcare services with tourism activities. Abstracts PUBMED:33901356, PUBMED:35935084, PUBMED:33957813, PUBMED:32827378, and PUBMED:34769723 explore the trends, development, and factors influencing medical tourism, including economic and social aspects, the impact of the COVID-19 pandemic, and the relationship between medical tourism and the healthcare systems of various countries.
In summary, distributed learning is a method of collaborative machine learning in medical data processing that addresses privacy and data sharing concerns (PUBMED:32196092, PUBMED:37579550, PUBMED:34364262, PUBMED:32335226), while medical tourism is a growing industry where individuals seek medical services outside their home country, often influenced by factors such as cost, quality of care, and the availability of certain treatments (PUBMED:33901356, PUBMED:35935084, PUBMED:33957813, PUBMED:32827378, PUBMED:34769723).
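To make the ensembling approach described in PUBMED:32335226 concrete, the following is a minimal, illustrative sketch of distributed learning by ensembling. It is not the authors' implementation: the synthetic data, the four-site split, and probability averaging are all assumptions made for this example.

```python
# Minimal sketch of distributed learning by ensembling (cf. PUBMED:32335226).
# Synthetic data stands in for the siloed institutional datasets; only the
# fitted models leave each site, never the patient-level records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_test, y_test = X[:200], y[:200]

# Four "institutions", each holding only a small local sample.
site_data = [(X[200 + 100 * i: 300 + 100 * i], y[200 + 100 * i: 300 + 100 * i])
             for i in range(4)]

# Each site trains locally on its own data.
local_models = [LogisticRegression(max_iter=1000).fit(Xs, ys)
                for Xs, ys in site_data]

# Ensemble by averaging predicted probabilities across the local models.
probs = np.mean([m.predict_proba(X_test)[:, 1] for m in local_models], axis=0)
print("Ensemble AUC:", roc_auc_score(y_test, probs))
```

Adding more locally trained models to the ensemble, as the abstract reports, would simply extend the `site_data` list.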
Instruction: Do marginal changes in PHI membership accurately predict marginal changes in PHI use in Western Australia? Abstracts: abstract_id: PUBMED:16055226 Do marginal changes in PHI membership accurately predict marginal changes in PHI use in Western Australia? Objective: Annual trends in the rate of utilisation of PHI in three different clinical categories were compared with published trends in PHI membership to assess the degree to which PHI membership predicts PHI use in Western Australia. Methods: The WA Data Linkage System was used to extract all hospital morbidity records in Western Australia from 1981 to 2001. The adjusted annual incidence rate ratio of hospitalisation as a privately insured patient versus a public (Medicare) patient was estimated using Poisson regression in each clinical category across three age groups in each year. The rate ratios were graphed as segmented trend lines and compared with published data for trends in PHI membership. Results: The most significant changes in the use of PHI versus the public system occurred between 1981 and 1984 across all clinical categories. These changes were consistent with those documented for PHI membership. From 1992 onwards, significant changes in the trend were observed in the surgical clinical category, compared with the medical and obstetric clinical categories. Further, the trend observed in the surgical clinical category at this time was inconsistent with that documented for PHI membership. Between 2000 and 2001, only the surgical clinical category showed a change in trend similar to that documented for PHI membership. Conclusion: Between 1981 and 1991 the timing and direction of changes in PHI membership were found to be congruent with those of PHI use in all three clinical categories. However, between 2000 and 2001 trends in PHI membership were only congruent with trends in PHI use in the surgical clinical category. We conclude that investigating marginal changes in PHI membership represents an incomplete method for assessing the effectiveness of policies aimed at reducing the pressure on the public system. abstract_id: PUBMED:16831484 Modelling changes in the determinants of PHI utilisation in Western Australia across five health care policy eras between 1981 and 2001. Objective: In this study of the Western Australian population we analysed changes in the demographic determinants of PHI use across health policy eras. Specifically, we aimed to predict the probability that an individual, defined by a pre-determined set of characteristics, would utilise PHI for in-patient hospitalisation in WA in each of five health policy eras spanning 1981-2001. Methods: The WA Data Linkage System was used to extract hospital morbidity data from 1 January 1981 to 31 December 2001. Random effects logistic regression analysis was used to estimate the likelihood of utilising private health insurance in each of five health policy eras based on the timing and composition of changes in federal health care policy. Results: The use of PHI for in-patient hospitalisation fell significantly from 1981 to 1997 (from 61% above to 53% below the odds of being a public patient). From 1999, however, the odds of using PHI substantially increased to 16% above those of being a public patient. The likelihood of using PHI in all age groups fell approximately exponentially across successive health policy eras compared with that in the oldest (70+ years) age group.
From 1997 onwards, the relative probabilities of average and disadvantaged individuals using PHI substantially increased compared with extremely advantaged individuals. Conclusion: Our study found that the overall likelihood of utilising PHI versus utilising Medicare for in-patient hospitalisation, adjusted for all demographic characteristics, decreased between 1981 and 1998 but increased precipitously after 1999. We also found that the determinants of using PHI have changed significantly across health policy eras. The most significant changes occurred with respect to age (the probability of PHI use by older individuals increased) and socio-economic status (the probability of PHI use by average and disadvantaged individuals increased). This shift in the effects of the determinants of PHI suggests that the introduction of the recent health policies was associated with a change in both the age and socio-economic profile of individuals who utilise PHI. abstract_id: PUBMED:6688046 Radioimmunoassay and intramural distribution of PHI-IR in human intestine. The objective of this study was to develop a radioimmunoassay for PHI and use this to assess its intramural distribution in the human intestine. The antibody was harvested following immunization with porcine PHI conjugated to bovine serum albumin by glutaraldehyde, and the iodinated PHI tracer was prepared by the Iodo-gen method. The assay system showed no cross-reaction with other members of the glucagon-secretin family of peptides and was sensitive to changes of PHI of 2 fmol/tube (95% confidence). High concentrations of immunoreactive PHI were found in the human intestine, exclusively localized in the nonendocrine gut layers, suggesting a possible neuroendocrinological or neurotransmitter role for PHI. abstract_id: PUBMED:15577749 Neuroprotective role of PACAP, VIP, and PHI in the central nervous system Pituitary adenylate cyclase-activating polypeptide (PACAP), vasoactive intestinal peptide (VIP), and peptide histidine-isoleucine (PHI) belong to a structurally related family of polypeptides present in many regions of the central and peripheral nervous system. The neuroprotective potential of PACAP, VIP, and PHI has become a matter of intensive investigation in many animal models. In vitro studies revealed that PACAP protects neurons against apoptosis occurring naturally during CNS development and apoptosis induced by a series of neurotoxins, such as ethanol, hydrogen peroxide (H2O2), prion protein, beta-amyloid, HIV envelope glycoprotein (gp120), potassium ion deficit, and high glutamate concentrations. Similarly, in vivo investigations conducted in models of ischemia and Parkinson's disease confirmed the neuroprotective properties of PACAP. It was revealed that the anti-apoptotic action of PACAP can be directly associated with the activation of signal transduction pathways preventing apoptosis in neurons, or can involve glial cells capable of releasing other neuroprotective factors affecting neurons. In contrast to PACAP, the neuroprotective action of VIP depends mainly on stimulation of astrocytes to produce and secrete factors of extremely high neuroprotective potential, including activity-dependent neurotrophic factor (ADNF) and activity-dependent neuroprotective protein (ADNP). It was shown that ADNF and ADNP, as well as their shortened derivatives ADNF-9 and NAP, protect neurons from electrical blockade, excitotoxicity, apoE deficiency, glucose deficit, ischemia, and the toxic action of ethanol, beta-amyloid, and gp120.
The neuroprotective potential of PHI has not yet been as thoroughly investigated, but recent data have confirmed that this peptide can also function as a neuroprotectant. It is thought that PACAP, VIP, and possibly PHI may serve as targets of modern therapeutic strategies in various neurodegenerative disorders. abstract_id: PUBMED:36893880 Do PHI and PHI density improve detection of clinically significant prostate cancer only in the PSA gray zone? Objectives: Prostate health index (PHI) is a predictive biomarker of positive prostate biopsy. The majority of evidence refers to its use in the PSA gray zone (4-10 ng/mL) and negative digital rectal exam (DRE). We aim to evaluate and compare the predictive accuracy of PHI and PHI density (PHId) with PSA, percentage of free PSA, and PSA density, in a wider range of patients, for the detection of clinically significant prostate cancer (csPCa). Methods: Multicenter prospective study that included patients suspected of harboring prostate cancer. Non-probabilistic convenience sampling was used, in which men who attended the urology consultation were tested for PHI before prostate biopsy. To evaluate and compare diagnostic accuracy, AUC and decision curve analysis (DCA) were calculated. All these procedures were performed for the overall sample and the following subsamples: PSA < 4 ng/ml; PSA 4-10 ng/ml; PSA 4-10 ng/ml plus negative DRE; and PSA > 10 ng/ml. Results: Among the 559 men included, 194 (34.7%) were diagnosed with csPCa. PHI and PHId outperformed PSA in all subgroups. The best diagnostic performance of PHI was found in PSA 4-10 ng/ml with negative DRE (sensitivity 93.33%, NPV 96.04%). Regarding AUC, significant differences were found between PHId and PSA in the subgroup of PSA 4-10 ng/ml, regardless of DRE status. In DCA, PHI density showed the highest net benefit. Conclusions: PHI and PHId outperform PSA in csPCa detection, not only in the PSA grey zone with negative DRE, but also in a wider range of PSA values. There is an urgent need for prospective studies to establish a validated threshold and to incorporate it into risk calculators. abstract_id: PUBMED:6687824 Ontogeny of PHI in the rat brain. The regional distribution of immunoreactive PHI (IR-PHI) was investigated in rat brain between postcoitum (pc) and day 60 postpartum (pp). IR-PHI was undetectable in all regions of the foetal brain, and only very small amounts were found at day 7 pp. However, there was a dramatic increase thereafter, reaching a peak at day 20 pp (e.g. in the hippocampus there was a 12-fold increase in the PHI concentration). The highest concentrations were found in the cortex (40 ± 5 pmol/g) and hippocampus (35 ± 8 pmol/g), with lower concentrations in the diencephalon (11 ± 4 pmol/g) and mesencephalon (10 ± 3 pmol/g). The brainstem and cerebellum contained very low amounts of IR-PHI. Permeation analysis of brain extracts, on Sephadex G50-superfine, indicated the presence of one major form of IR-PHI which eluted in a similar position to pure intestinal porcine PHI and human intestinal PHI.
The rank order of potency in relaxing the precontracted fundus tissues was VIP > rat PHI > PHM > PHV > PHI-Gly > porcine PHI, rat PHI being only 2 times less potent than VIP. In the presence of antioxidants, the potency and efficacy of porcine PHI increased, but the peptide was still the least potent of the series tested. The results illustrate the importance of using species-related peptides and are compatible with a cotransmitter role of rat PHI in nonadrenergic noncholinergic neurotransmission of the rat gastric fundus. abstract_id: PUBMED:2983161 Interaction of PHM, PHI and 24-glutamine PHI with human VIP receptors from colonic epithelium: comparison with rat intestinal receptors. PHM, the human counterpart of porcine Peptide Histidine Isoleucine amide (PHI), is shown to be a VIP agonist with low potency on human VIP receptors located in colonic epithelial cell membranes. Its potency is identical to that of PHI but three orders of magnitude lower than that of VIP itself in inhibiting 125I-VIP binding and in stimulating adenylate cyclase activity. This contrasts markedly with the behaviour of PHI on rat VIP receptors located in intestinal epithelial cell membranes, where PHI is a potent agonist with a potency that is 1/5 that of VIP. In addition, we show that 24-glutamine PHI has the same affinity as 24-glutamic acid PHI (the natural peptide) for rat or human VIP receptors. These results indicate that while PHI may exert some physiological function through its interaction with VIP receptors in rodents, its human counterpart PHM is a very poor agonist of VIP in humans. Furthermore, they show that the drastic change in position 24 of PHI (neutral versus acid residue) does not affect the activity of PHI, at least on VIP receptors. abstract_id: PUBMED:35569207 Valuing non-marginal changes in mortality and morbidity risk. Many stated-preference studies that seek to estimate the marginal willingness-to-pay (WTP) for reductions in mortality or morbidity risk suffer from inadequate scope sensitivity. One possible reason is that the risk reductions presented to respondents are too small to be meaningful. Survey responses may thus not accurately reflect respondents' preferences for health and safety. In this paper we propose a novel approach to estimating the value per statistical life (VSL) or the value per statistical case (VSC) based on larger risk reductions measurable as percent changes. While such non-marginal risk reductions are easier to understand, they introduce well-known biases. We propose a methodology to de-bias VSL and VSC estimates derived from the evaluation of non-marginal risk reductions and present a proof of concept using simulated stated-preference data. abstract_id: PUBMED:25110355 Estimating Marginal Returns to Education. This paper estimates marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and local instrumental variables estimators to estimate the effect of marginal policy changes. Our empirical analysis shows that returns are higher for individuals with values of unobservables that make them more likely to attend college. We contrast our estimates with IV estimates of the return to schooling.
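As a purely illustrative aside on the VSL arithmetic underlying the stated-preference abstract above (PUBMED:35569207), the sketch below shows the first-order calculation; the WTP figure and risk reductions are invented, not taken from the study.

```python
# Toy illustration of the marginal VSL calculation discussed above.
# All numbers are hypothetical.
mean_wtp = 500.0        # stated willingness to pay, in dollars
risk_reduction = 1e-4   # a 1-in-10,000 reduction in annual mortality risk

vsl = mean_wtp / risk_reduction
print(f"Implied VSL: ${vsl:,.0f}")  # $5,000,000

# Applying the same WTP / absolute-risk-change formula to a non-marginal
# change (e.g. a 20% cut from a 0.5% baseline risk) is exactly where the
# biases the authors aim to correct can enter.
baseline, relative_cut = 0.005, 0.20
naive_vsl = mean_wtp / (baseline * relative_cut)
print(f"Naive VSL from a non-marginal change: ${naive_vsl:,.0f}")
```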
Answer: Marginal changes in Private Health Insurance (PHI) membership do not consistently predict marginal changes in PHI use in Western Australia. Between 1981 and 1991, the timing and direction of changes in PHI membership were found to be congruent with that of PHI use across all three clinical categories. However, from 1992 onwards, significant changes in the trend were observed in the surgical clinical category, which were inconsistent with those documented for PHI membership. Specifically, between 2000 and 2001, trends in PHI membership were only congruent with trends in PHI use in the surgical clinical category. This suggests that investigating marginal changes in PHI membership represents an incomplete method for assessing the effectiveness of policies aimed at reducing the pressure on the public system (PUBMED:16055226). Additionally, the overall likelihood of utilizing PHI versus Medicare for in-patient hospitalization, adjusted for all demographic characteristics, decreased between 1981 and 1998 but increased precipitously after 1999. The determinants of using PHI have also changed significantly across health policy eras, with the most significant changes occurring with respect to age and socio-economic status. This indicates that the introduction of recent health policies was associated with a change in both the age and socio-economic profile of individuals who utilize PHI (PUBMED:16831484). Therefore, while there may be some correlation between PHI membership and PHI use, the relationship is not straightforward and is influenced by various factors, including changes in health policy and demographic determinants.
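For readers unfamiliar with the rate-ratio modelling used in PUBMED:16055226, the following sketch shows how an adjusted incidence rate ratio of privately insured versus public hospitalisation might be estimated with Poisson regression. The data and variable names are synthetic stand-ins, not fields from the WA Data Linkage System.

```python
# Sketch of estimating an adjusted incidence rate ratio (IRR) with Poisson
# regression, in the spirit of PUBMED:16055226. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "insured": rng.integers(0, 2, n),               # 1 = privately insured
    "age_group": rng.choice(["0-39", "40-69", "70+"], n),
    "person_years": rng.uniform(50.0, 150.0, n),    # exposure per stratum
})
# Simulate admission counts with a higher rate among the insured.
rate = 0.05 * np.exp(0.4 * df["insured"])
df["admissions"] = rng.poisson(rate * df["person_years"])

model = smf.glm(
    "admissions ~ insured + C(age_group)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()

# exp(beta) for 'insured' is the age-adjusted IRR of private vs public use.
print("Adjusted IRR:", np.exp(model.params["insured"]))
```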
Instruction: Time course of recovery of erectile function after radical retropubic prostatectomy: does anyone recover after 2 years? Abstracts: abstract_id: PUBMED:20722784 Time course of recovery of erectile function after radical retropubic prostatectomy: does anyone recover after 2 years? Introduction: Given the paucity of literature on the time course of recovery of erectile function (EF) after radical prostatectomy (RP), many publications have led patients and clinicians to believe that erections are unlikely to recover beyond 2 years after RP. Aims: We sought to determine the time course of recovery of EF beyond 2 years after bilateral nerve sparing (BNS) RP and to determine factors predictive of continued improved recovery beyond 2 years. Methods: EF was assessed prospectively on a 5-point scale: (i) full erections; (ii) diminished erections routinely sufficient for intercourse; (iii) partial erections occasionally satisfactory for intercourse; (iv) partial erections unsatisfactory for intercourse; and (v) no erections. From 01/1999 to 01/2007, 136 preoperatively potent (levels 1-2) men who underwent BNS RP without prior treatment and who had not recovered consistently functional erections (levels 1-2) at 24 months had further follow-up regarding EF. Median follow-up after the 2-year visit was 36.0 months. Main Outcome Measures: Recovery of improved erections at a later date: recovery of EF level 1-2 in those with level 3 EF at 2 years and recovery of EF level 1-3 in those with level 4-5 EF at 2 years. Results: The actuarial rates of further improved recovery of EF to level 1-2 in those with level 3 EF at 2 years and to level 1-3 in those with level 4-5 EF at 2 years were 8%, 20%, and 23% at 3, 4, and 5 years postoperatively, and 5%, 17%, and 21% at 3, 4, and 5 years postoperatively, respectively. Younger age was predictive of a greater likelihood of recovery beyond 2 years. Conclusion: There is continued improvement in EF beyond 2 years after BNS RP. Discussion of this prolonged time course of recovery may allow patients to have a more realistic expectation. abstract_id: PUBMED:31521569 Development of Nomograms to Predict the Recovery of Erectile Function Following Radical Prostatectomy. Introduction: Given the number of confounders in predicting erectile function recovery after radical prostatectomy (RP), a nomogram predicting the chance of being functional after RP would be useful in patients' and clinicians' discussions. Aim: To develop preoperative and postoperative nomograms to aid in the prediction of erectile function recovery after RP. Main Outcome Measures: International Index of Erectile Function (IIEF) erectile function domain score-based erectile function. Methods: A prospective quality-of-life database was used to develop a series of nomograms using multivariable ordinal logistic regression models. Standard preoperative and postoperative factors were included. Main Outcome Measures: The nomograms predicted the probability of recovering functional erections (erectile function domain scores ≥24) and severe erectile dysfunction (≤10) 2 years after RP. Results: Three nomograms have been developed, including a preoperative, an early postoperative, and a 12-month postoperative version. The concordance indices for all three exceeded 0.78, and the calibration was good. Clinical Implications: These nomograms may aid clinicians in discussing erectile function recovery with patients undergoing RP.
Strengths & Limitations: Strengths of this study included a large population, validated instrument, nerve-sparing grading, and nomograms that are well calibrated with excellent discrimination ability. Limitations include current absence of external validation and an overall low comorbidity index. Conclusions: It is hoped that these nomograms will allow for a more accurate discussion between patients and clinicians regarding erectile function recovery after RP. Mulhall JP, Kattan MW, Bennett NE, et al. Development of Nomograms to Predict the Recovery of Erectile Function Following Radical Prostatectomy J Sex Med 2019;16:1796-1802. abstract_id: PUBMED:19091349 Changes in continence and erectile function between 2 and 4 years after radical prostatectomy. Purpose: There is a paucity of information on changes in continence and erectile function beyond 2 years after radical prostatectomy. We prospectively examined changes in continence and erectile function between 2 and 4 years after radical prostatectomy. Materials And Methods: Between October 2000 and August 2003, 731 consecutive men underwent open retropubic radical prostatectomy for clinically localized prostate cancer. Preoperative and postoperative continence, and erectile function were ascertained using the UCLA Prostate Cancer Index. The 48-month prospective self-assessment followup questionnaire captured changes in urinary control and erectile function between 24 and 48 months, including marked, moderate or slight improvement, no change or worsening. Results: Overall between 24 and 48 months after radical prostatectomy 23.4% and 42.3% of men showed any degree of improvement in continence and erectile function, and 12.2% and 19.8% showed marked and moderate improvement in continence and erectile function, respectively. The probability of experiencing any qualitative improvement in urinary continence was not significantly different in men who were continent or incontinent at 24 months. The likelihood of experiencing any qualitative improvement in erectile function was significantly greater in men who were potent at 24 months compared to those who were impotent. Conclusions: Our study provides compelling evidence that clinically significant improvements in urinary control and erectile function occur beyond 2 years after radical prostatectomy. These qualitative improvements are greatest for erectile function in men who were potent at 2 years. Therefore, men should not be counseled that maximal urinary continence or erectile function are achieved by 24 months after radical prostatectomy. abstract_id: PUBMED:27784540 Erectile Function Outcomes after Robot-Assisted Radical Prostatectomy: Is It Superior to Open Retropubic or Laparoscopic Approach? Introduction: Erectile dysfunction (ED) is one of the most commonly affected domains of health-related quality of life after prostate cancer therapy. Functional outcomes after radical prostatectomy (RP) have continued to improve through refinement of surgical techniques and development of several procedural modifications. In this context, it has been hypothesized that robotic technologies should simplify the preservation of the neurovascular bundle, thus possibly providing improved functional outcomes. Aim: To compare the prevalence of post-RP ED and identify whether recently developed robotic technologies are able to improve erectile function (EF) recovery after RP. Methods: Literature Review. 
Main Outcome Measure: To evaluate whether post-therapy ED rates after robotic surgery have shown improvement when compared with the other forms of nerve-sparing RP. Results: Previously published series have shown EF recovery rates after robot-assisted RP (RARP) ranging between 40% and 90% of patients at 12 months postoperatively. Some claim that the RARP procedure can also significantly shorten the time to recovery of EF when compared with open RP. On the other hand, some authors have reported that patients undergoing minimally invasive RP experienced even more ED in comparison. Conclusions: Although it has been widely promoted by the industry and hospitals, at the moment there are not enough evidence-based data to answer the question, "Does RARP surgery provide better EF outcomes?" Because of the current market trends and patient preferences, the perfect randomized study will probably never be performed, and thus the question of which procedure's results are superior will most likely remain unanswered. Isgoren AE, Saitz TR, and Serefoglu EC. Erectile function outcomes after robot-assisted radical prostatectomy: Is it superior to open retropubic or laparoscopic approach? Sex Med Rev 2014;2:10-23. abstract_id: PUBMED:28851580 Improved Recovery of Erectile Function in Younger Men after Radical Prostatectomy: Does it Justify Immediate Surgery in Low-risk Patients? Background: Although active surveillance is increasingly used for the management of low-risk prostate cancer, many eligible patients nonetheless undergo curative treatment. One argument for considering surgery rather than active surveillance is that the probability of postoperative recovery of erectile function is age dependent; that is, patients who delay surgery may lose the window of opportunity to recover erectile function after surgery. Objective: To model erectile function over a 10-yr period for immediate surgery versus active surveillance. Design, Setting, And Participants: Data from 1103 men who underwent radical prostatectomy at a tertiary referral center were used. Outcome Measurements And Statistical Analysis: Patients completed the International Index of Erectile Function (IIEF-6) pre- and postoperatively as a routine part of clinical care. Preoperative IIEF-6 scores were plotted against age to assess the natural rate of functional decline due to aging. Reported erectile scores in the 2-yr period following surgery were used to assess post-surgical recovery. Results And Limitations: Each one-year increase in patient age resulted in a 0.27-point reduction in IIEF scores. In addition to IIEF declining with increasing age, the amount of erectile function recovered from presurgery to 12 mo postsurgery also decreases (-0.16 IIEF points/yr, 95% confidence interval -0.27 to -0.05, p=0.006). However, delayed radical prostatectomy increased the mean IIEF-6 score over a 10-yr period compared with immediate surgery (p=0.001), even under the assumption that all men placed on active surveillance are treated within 5 yr. Conclusions: Small differences in erectile function recovery in younger men are offset by a longer period of time living with decreased postoperative function. Better erectile recovery in younger men should not be a factor used to recommend immediate surgery in patients suitable for active surveillance, even if crossover to surgery is predicted within a short period of time. Patient Summary: Younger men have better recovery of erectile function after surgery for prostate cancer.
This has led to the suggestion that delaying surgery for low-risk disease may lead patients to miss a window of opportunity to recover erectile function postoperatively. We conducted a modeling study and found that predicted erectile recovery was far superior with delayed treatment, because slightly better recovery in younger men is offset by a longer period of time living with poorer postoperative function in those choosing immediate surgery. abstract_id: PUBMED:27743754 High Chance of Late Recovery of Urinary and Erectile Function Beyond 12 Months After Radical Prostatectomy. Urinary incontinence (UI) and erectile dysfunction (ED) after radical prostatectomy (RP) can impose a strong burden. While most studies focus on certain time points after RP when analyzing functional outcome, there is a paucity of evidence on late functional recovery in patients with UI or ED at 12 mo after RP. Using longitudinal patient data from a large European single center, we show that the chance of regaining continence among patients (n=974) with UI (≥1 pad/24h) at 12 mo after RP was 38.6% after 24 mo and 49.7% after 36 mo. The corresponding rates for patients (n=1115) with ED (defined as International Index of Erectile Function-5 score <18) at 12 mo after RP were 30.8% at 24 mo and 36.5% at 36 mo after RP. Patients with postoperative UI or ED 12 mo after RP should be counseled about their good chance of achieving continence or potency in the course of time. Patient Summary: We analyzed the probability of functional recovery among patients with urinary incontinence (UI) and erectile dysfunction (ED) 12 mo after radical prostatectomy. We found that up to 49.7% (36.5%) of patients with UI (ED) regain function within the next 24 mo and should be informed about these encouraging numbers. abstract_id: PUBMED:24903070 Three-year outcomes of recovery of erectile function after open radical prostatectomy with sural nerve grafting. Introduction: Optimal oncologic control of higher stage prostate cancers often requires sacrificing the neurovascular bundles (NVB), with subsequent postoperative erectile dysfunction (ED), which can be treated with an interposition graft using the sural nerve. Aims: To examine the long-term outcome of sural nerve grafting (SNG) during radical retropubic prostatectomy (RRP) performed by a single surgeon. Methods: Sixty-six patients with clinically localized prostate cancer and a preoperative International Index of Erectile Function (IIEF) score >20 who underwent RRP were included. NVB excision was performed if the risk of side-specific extra-capsular extension (ECE) was >25% on Ohori's nomogram. SNG was harvested by a plastic surgeon contemporaneously, while the urologic surgeon performed the RRP. The IIEF questionnaire was used pre- and postoperatively and at follow-up. Main Outcome Measures: Postoperative IIEF score at three years in men undergoing RRP with SNG. Recovery of potency was defined as a postoperative IIEF-EF domain score >22. Results: There were 43 (65%) unilateral SNG and 23 (35%) bilateral SNG. Mean surgical time was 164 minutes (71 to 221 minutes). The mean preoperative IIEF score was 23.4 ± 1.6. With a mean follow-up of 35 months, 19 (28.8%) patients had an IIEF score >22. The IIEF-EF scores for those who had unilateral SNG and bilateral SNG were 12.9 ± 4.9 and 14.8 ± 5.3, respectively. History of diabetes (P=0.001) and age (P=0.007) negatively correlated with recovery of EF. 60% of patients used PDE5i and showed significantly higher EF recovery (43% vs. 17%, P=0.009).
Conclusions: SNG can potentially improve EF recovery for potent men with higher stage prostate cancer undergoing RP. The contemporaneous, multidisciplinary approach provides a good-quality graft and expedites the procedure without interrupting the workflow. abstract_id: PUBMED:18810650 Early recovery of urinary continence after laparoscopic versus retropubic radical prostatectomy: evaluation of preoperative erectile function and nerve-sparing procedure as predictors. The aim of this study was to evaluate preoperative erectile function and attempted nerve-sparing procedure as predictors for early recovery of urinary continence after retropubic and laparoscopic radical prostatectomy. Patients were divided into two groups according to surgical approach (retropubic or laparoscopic) and the learning curve for the laparoscopic approach: group 1, retropubic approach (37 patients operated on from April 2000 to June 2006), and group 2, laparoscopic approach (109 patients operated on from April 2003 to June 2006). We assessed the state of urinary continence at 1, 3, 6, and 12 months after removal of the urinary catheter. Overall rates of urinary continence were 18%, 49%, 68%, and 80% at 1, 3, 6, and 12 months, respectively. Between groups 1 and 2, no statistically significant differences in recovery of urinary continence were evident, being 27% versus 15% at 1 month, 54% versus 47% at 3 months, 77% versus 65% at 6 months, and 91% versus 77% at 12 months in groups 1 and 2, respectively. An attempted nerve-sparing procedure (one or both neurovascular bundles) was statistically associated with urinary continence at 3 months, and an International Index of Erectile Function-5 (IIEF-5) score ≥14 was identified as a significant factor predicting urinary continence at 6 months after laparoscopic radical prostatectomy. Younger age tended to result in early recovery of urinary continence after retropubic radical prostatectomy. abstract_id: PUBMED:11061884 Factors predicting recovery of erections after radical prostatectomy. Purpose: Because preservation of functioning penile erections is a major concern for many patients considering treatment for localized prostate cancer, we analyzed various factors determined before and after radical retropubic prostatectomy to identify those significantly associated with recovery of erectile function. Materials And Methods: Our prospective database of patients undergoing pelvic lymphadenectomy and radical retropubic prostatectomy was used to determine factors predictive of erection recovery after radical prostatectomy. The study included 314 consecutive men with prostate cancer treated with radical retropubic prostatectomy between November 1993 and December 1996. Preoperative potency satisfactory for intercourse and the degree of neurovascular bundle preservation during the operation were documented. Results: Patient age, preoperative potency status, and extent of neurovascular bundle preservation, but not pathological stage, were predictive of potency recovery after radical prostatectomy. At 3 years after the operation, 76% of men younger than age 60 years with full erections preoperatively who had bilateral neurovascular bundle preservation would be expected to regain erections sufficient for intercourse. Compared to the younger men, those 60 to 65 years old were only 56% (95% CI 37 to 84) and those older than 65 years were 47% (95% CI 30 to 73) as likely to recover potency.
Patients with recently diminished erections were only 63% (95% CI 38 to 100) as likely to recover potency as men with full erections preoperatively, and those with partial erections were only 47% (95% CI 23 to 96) as likely to recover potency. Resection of one neurovascular bundle reduced the chance of recovery to 25% (95% CI 10 to 61) compared to preserving both nerves. Conclusions: Knowledge of preoperative erectile function and patient age before the operation, and of the degree of neurovascular bundle preservation afterward, may aid in patient counseling regarding potency recovery after radical prostatectomy. abstract_id: PUBMED:15928287 Erectile dysfunction following radical prostatectomy. Erectile dysfunction following radical prostatectomy for clinically localized prostate cancer is a known potential complication of the surgery. Because prostate cancer is diagnosed today more frequently than in the past, and because the diagnosis is made in increasingly younger men, there is an urgent need to develop effective interventions that preserve erectile function after surgery. In this presentation, a 51-year-old man with adenocarcinoma of the prostate underwent a bilateral nerve-sparing radical prostatectomy, after which he lost natural erectile function for approximately 9 months. The case highlights the fact that following surgery in which the nerve-sparing radical prostatectomy technique is used, between 60% and 85% of men eventually recover erectile function. This constitutes a dramatic improvement over an earlier era, when postprostatectomy erectile dysfunction was the nearly universal rule. The case also emphasizes that despite expert application of the nerve-sparing prostatectomy technique, early recovery of natural erectile function is uncommon. Many patients experience erectile dysfunction for as long as 2 years after the procedure, requiring the use of erectile aids for sexual activity during this period until natural erections recover. Corrective, cause-specific advances such as neuromodulatory therapy offer valuable adjuncts to this surgery. Answer: The time course of recovery of erectile function (EF) after radical retropubic prostatectomy (RP) has been a subject of interest, and there is evidence that recovery can indeed continue beyond the 2-year mark. A study that followed 136 preoperatively potent men who underwent bilateral nerve-sparing RP and had not recovered consistently functional erections at 24 months found that there was continued improvement in EF beyond 2 years. The actuarial rates of further improved recovery of EF to level 1-2 in those with level 3 EF at 2 years and to level 1-3 in those with level 4-5 EF at 2 years were 8%, 20%, and 23% at 3, 4, and 5 years postoperatively, respectively. Younger age was predictive of a greater likelihood of recovery beyond 2 years (PUBMED:20722784). Another study examining changes in continence and erectile function between 2 and 4 years after RP reported that 23.4% and 42.3% of men showed any degree of improvement in continence and erectile function, respectively, between these time points. The likelihood of experiencing any qualitative improvement in erectile function was significantly greater in men who were potent at 24 months compared to those who were impotent (PUBMED:19091349). Furthermore, a study on late recovery of urinary and erectile function found that among patients with erectile dysfunction (ED) at 12 months after RP, 30.8% at 24 months and 36.5% at 36 months postoperatively regained erectile function.
This suggests that patients with postoperative ED at 12 months after RP should be counseled about their good chance of achieving potency over time (PUBMED:27743754). In summary, while many patients and clinicians may believe that recovery of erections is unlikely beyond 2 years after RP, evidence suggests that a significant proportion of men can experience continued improvement in erectile function even after this period. This information may help provide more realistic expectations for patients undergoing RP.
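The "actuarial rates" quoted in these studies are typically Kaplan-Meier-type time-to-event estimates. The sketch below shows one way such late-recovery probabilities can be computed; the follow-up durations and events are simulated, not the cohort data from PUBMED:20722784.

```python
# Illustrative actuarial (Kaplan-Meier) estimate of late EF recovery beyond
# the 24-month visit. All durations and events below are simulated.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
n = 136                                        # cohort size, as in the study
latent_recovery = rng.exponential(120.0, n)    # hypothetical time to recovery
follow_up = rng.uniform(12.0, 48.0, n)         # variable censoring times

observed = latent_recovery <= follow_up        # recovery seen before censoring
durations = np.minimum(latent_recovery, follow_up)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)

# Cumulative probability of recovery by t months of additional follow-up
# is one minus the Kaplan-Meier survival estimate at t.
for t in (12, 24, 36):
    print(f"{t} mo: {1 - kmf.predict(t):.2f}")
```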
Instruction: Do interventions that improve immunisation uptake also reduce social inequalities in uptake? Abstracts: abstract_id: PUBMED:8173457 Do interventions that improve immunisation uptake also reduce social inequalities in uptake? Objective: To investigate whether an intervention designed to improve overall immunisation uptake affected social inequalities in uptake. Design: Cross-sectional small area analyses measuring immunisation uptake in cohorts of children before and after intervention. Small areas were classified into five groups, from most deprived to most affluent, with the Townsend deprivation score of census enumeration districts. Setting: County of Northumberland. Subjects: All children born in the county in four birth cohorts (1981-2, 1985-6, 1987-8, and 1990-1) and still resident at the time of analysis. Main Outcome Measures: Overall uptake in each cohort of pertussis, diphtheria, and measles immunisation, difference in uptake between most deprived and most affluent areas, and odds ratio of uptake between deprived and affluent areas. Results: Coverage for pertussis immunisation rose from 53.4% in the first cohort to 91.1% in the final cohort. Coverage in the most deprived areas was lower than in the most affluent areas by 4.7%, 8.7%, 10.2%, and 7.0% respectively in successive cohorts, corresponding to an increase in the odds ratio of uptake between deprived and affluent areas from 1.2 to 1.6 to 1.9 to 2.3. Coverage for diphtheria immunisation rose from 70.0% to 93.8%; differences between deprived and affluent areas changed from 8.6% to 8.3% to 9.0% to 5.5%, corresponding to odds ratios of 1.5, 2.0, 2.5, and 2.6. Coverage for measles immunisation rose from 52.5% to 91.4%; differences between deprived and affluent areas changed from 9.1% to 5.7% to 8.2% to 3.6%, corresponding to odds ratios of 1.4, 1.4, 1.7, and 1.5. Conclusion: Despite a substantial increase in immunisation uptake, inequalities between deprived and affluent areas persisted or became wider. Any reduction in inequality occurred only after uptake in affluent areas approached 95%. Interventions that improve overall uptake of preventive measures are unlikely to reduce social inequalities in uptake. abstract_id: PUBMED:35243743 A systematic review of inequalities in the uptake of, adherence to, and effectiveness of behavioral weight management interventions in adults. The extent to which behavioral weight management interventions affect health inequalities is uncertain, as is whether trials of these interventions directly consider inequalities. We conducted a systematic review, synthesizing evidence on how different aspects of inequality impact uptake, adherence, and effectiveness in trials of behavioral weight management interventions. We included (cluster-) randomized controlled trials of primary care-applicable behavioral weight management interventions in adults with overweight or obesity published prior to March 2020. Data about trial uptake, intervention adherence, attrition, and weight change by PROGRESS-Plus criteria (place of residence, race/ethnicity, occupation, gender, religion, education, socioeconomic status, social capital, plus other discriminating factors) were extracted. Data were synthesized narratively and summarized in harvest plots. We identified 91 behavioral weight loss interventions and 12 behavioral weight loss maintenance interventions.
Fifty-six of the 103 trials considered inequalities in relation to at least one of intervention or trial uptake (n = 15), intervention adherence (n = 15), trial attrition (n = 32), or weight outcome (n = 34). Most trials found no inequalities gradient. Where a gradient was observed for trial uptake, intervention adherence, or trial attrition, those considered "more advantaged" did best. Alternative methods of data synthesis that enable data to be pooled and increase statistical power may enhance understanding of inequalities in behavioral weight management interventions. abstract_id: PUBMED:38162329 Immunisation services in North-Eastern Nigeria: Perspectives of critical stakeholders to improve uptake and service delivery. We investigated the perspectives of parents, health workers (HWs) and traditional medical practitioners (TMPs) on immunisation advocacy, knowledge, attitudes and immunisation practice, and on ways of improving immunisation uptake in Borno State, North-eastern Nigeria. This was a cross-sectional study analysing quantitative data from the three stakeholder categories, conducted across 18 local government areas of Borno State. A representative sample of 4288 stakeholders (n=1763 parents, n=1707 TMPs, and n=818 HWs) aged 20 to 59 years had complete data. The sample contained more males: 57.8% (parents), 71.8% (TMPs), and 57.3% (HWs). Awareness of the immunisation schedule among the stakeholders ranged from 87.2% to 93.4%. The study showed that 67.9% of parents and 57.1% of health workers had participated in immunisation, compared with only 27.8% of TMPs. Across the stakeholder categories, between 61.9% and 72.6% had children who had experienced an Adverse Event Following Immunisation (AEFI). The most common AEFI was fever. Safety concerns, preference for herbs and charms, culture and religion, and the perception of vaccination as a Western practice were the major barriers to immunisation uptake. While 63.6% to 95.7% of respondents indicated that community leaders, religious and spiritual leaders, and TMPs should be involved in immunisation advocacy, 56.9-70.4% of them reported that community leaders should be involved in immunisation policy. Upscaling the critical stakeholders' involvement in advocacy, policy development, and the implementation of immunization activities may improve acceptance, create demand, and engender ownership in vulnerable communities of Borno State, Nigeria. AEFI could be detrimental to immunisation access and utilization. Consequently, health education by health workers needs strengthening to minimise vaccine hesitancy. abstract_id: PUBMED:34320942 Factors influencing childhood immunisation uptake in Africa: a systematic review. Background: Vaccine-preventable diseases are still the most common cause of childhood mortality, with an estimated 3 million deaths every year, mainly in Africa and Asia. An estimated 29% of deaths among children aged 1-59 months were due to vaccine-preventable diseases. Despite the benefits of childhood immunisation, routine vaccination coverage for all recommended Expanded Programme on Immunization vaccines has remained poor in some African countries, such as Nigeria (31%), Ethiopia (43%), Uganda (55%) and Ghana (57%). The aim of this study is to collate evidence on the factors that influence childhood immunisation uptake in Africa, as well as to provide evidence for future researchers in developing, implementing and evaluating interventions among African populations that will improve childhood immunisation uptake.
Methods: We conducted a systematic review of articles on the factors influencing under-five childhood immunisation uptake in Africa. This was achieved by using various keywords and searching multiple databases (Medline, PubMed, CINAHL and Psychology & Behavioral Sciences Collection) from inception to 2020. Results: Out of 18,708 recorded citations retrieved, 10,396 titles were filtered, and 324 titles remained. These 324 abstracts were screened, leading to 51 included studies. Statistically significant factors found to influence childhood immunisation uptake were classified into modifiable and non-modifiable factors and were further categorised into different groups based on relevance. The modifiable factors included obstetric factors, maternal knowledge, maternal attitude, self-efficacy and maternal outcome expectation, whereas the non-modifiable factors were sociodemographic factors of parent and child, and logistic and administrative factors. Conclusion: Different factors were found to influence under-five childhood immunisation uptake among parents in Africa. Immunisation health education interventions among pregnant women, focusing on the significant findings from this systematic review, would hopefully improve childhood immunisation uptake in African countries with poor coverage rates. abstract_id: PUBMED:28343775 Lower vaccine uptake amongst older individuals living alone: A systematic review and meta-analysis of social determinants of vaccine uptake. Introduction: Vaccination is a key intervention to reduce infectious disease mortality and morbidity amongst older individuals. Identifying social factors for vaccine uptake enables targeted interventions to reduce health inequalities. Objective: To systematically appraise and quantify social factors associated with vaccine uptake amongst individuals aged ≥60 years from Europe. Methods: We searched Medline and Embase from inception to 24/02/2016. The association with vaccine uptake was examined for social factors relevant at an individual level, to provide insight into individuals' environment and enable the development of targeted interventions by healthcare providers to deliver equitable healthcare. Factors included: living alone, marital status, education, income, vaccination costs, area-level deprivation, social class, urban versus rural residence, immigration status and religion. Between-study heterogeneity for each factor was identified using I²-statistics and Q-statistics, and investigated by stratification and meta-regression analysis. Meta-analysis was conducted, when appropriate, using fixed- or random-effects models. Results: From 11,754 titles, 35 eligible studies were identified (uptake of: seasonal influenza vaccine (SIV) only (n=27) or including pneumococcal vaccine (PV) (n=5); herpes zoster vaccine (n=1); pandemic influenza vaccine (n=1); PV only (n=1)). Higher SIV uptake was reported for individuals not living alone (summary odds ratio (OR)=1.39, 95% confidence interval (CI): 1.16-1.68). Lower SIV uptake was observed in immigrants and in more deprived areas: summary OR=0.57 (95%CI: 0.47-0.68) and risk ratio=0.93 (95%CI: 0.92-0.94), respectively. Higher SIV uptake was associated with higher income (OR=1.26 (95%CI: 1.08-1.47)) and higher education (OR=1.05 (95%CI: 1-1.11)) in adequately adjusted studies.
Between-study heterogeneity did not appear to result from variation in the categorisation of social factors, but for education it was partly explained by varying vaccination costs (meta-regression analysis p<0.0001); individuals with higher education had higher vaccine uptake in countries without free vaccination. Conclusions: Quantification of associations between social factors and lower vaccine uptake, notably living alone (an overlooked factor in vaccination programmes), should enable health professionals to target specific social groups to tackle vaccine-related inequalities. abstract_id: PUBMED:33081730 Identifying interventions with Gypsies, Roma and Travellers to promote immunisation uptake: methodological approach and findings. Background: In the UK, Gypsy, Roma and Traveller (GRT) communities are generally considered to be at risk of low or variable immunisation uptake. Many strategies to increase uptake for the general population are relevant for GRT communities; however, additional approaches may also be required, and importantly one cannot assume that "one size fits all". Robust methods are needed to identify content and methods of delivery that are likely to be acceptable, feasible, effective and cost effective. In this paper, we describe the approach taken to identify potential interventions to increase uptake of immunisations in six GRT communities in four UK cities, and present the list of prioritised interventions that emerged. Methods: This work was conducted in three stages: (1) a modified intervention mapping process to identify ideas for potential interventions; (2) a two-step prioritisation activity at workshops with 51 GRTs and 25 Service Providers to agree a prioritised list of potentially feasible and acceptable interventions for each community; (3) cross-community synthesis to produce a final list of interventions. The theoretical framework underpinning the study was the Social Ecological Model. Results: Five priority interventions were agreed across communities and Service Providers to improve the uptake of immunisation amongst GRTs who are housed or settled on an authorised site. These interventions are all at the Institutional (e.g. cultural competence training) and Policy (e.g. protected funding) levels of the Social Ecological Model. Conclusions: The "upstream" nature of the five interventions reinforces the key role of GP practices, frontline workers and wider NHS systems in improving immunisation uptake. All five interventions have potentially broader applicability than GRTs. We believe that their impact would be enhanced if delivered as a combined package. The robust intervention development and co-production methods described could usefully be applied to other communities where poor uptake of immunisation is a concern. Study Registration: Current Controlled Trials ISRCTN20019630. Date of registration: 01-08-2013. Prospectively registered.
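The summary odds ratios and heterogeneity statistics reported in the meta-analysis above (PUBMED:28343775) reduce to a few lines of arithmetic. The sketch below shows inverse-variance (fixed-effect) pooling with Cochran's Q and I²; the study-level estimates are fabricated for illustration.

```python
# Sketch of inverse-variance pooling of odds ratios with Q and I^2, as in
# meta-analyses like PUBMED:28343775. Study-level inputs are made up.
import numpy as np

odds_ratios = np.array([1.20, 1.45, 1.33, 1.60])  # hypothetical study ORs
se_log_or = np.array([0.10, 0.15, 0.12, 0.20])    # SEs of the log-ORs

log_or = np.log(odds_ratios)
weights = 1.0 / se_log_or**2

pooled_log_or = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q and I^2 quantify between-study heterogeneity.
q_stat = np.sum(weights * (log_or - pooled_log_or) ** 2)
dof = len(odds_ratios) - 1
i_squared = 100.0 * max(0.0, (q_stat - dof) / q_stat)

lo, hi = (np.exp(pooled_log_or + s * 1.96 * pooled_se) for s in (-1, 1))
print(f"Pooled OR {np.exp(pooled_log_or):.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"Q = {q_stat:.2f}, I^2 = {i_squared:.1f}%")
```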
This umbrella systematic review aims to determine whether there is an association between socioeconomic inequalities and the rate of vaccine uptake globally. Specifically, the study aims to determine whether an individual's socioeconomic status, level of education, occupation, (un)employment, or place of residence affects the uptake rate of routine vaccines. The following databases will be searched from 2011 to the present day: Medline (Ovid), Embase (Ovid), CINAHL (EBSCO), Cochrane CENTRAL, Science Citation Index (Web of Science), DARE, SCOPUS (Elsevier), and ASSIA (ProQuest). Systematic reviews will be either included or excluded based on a priori established eligibility criteria. The relevant data will then be extracted, quality appraised, and narratively synthesised. The synthesis will be guided by the theoretical framework developed for this review. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Equity extension (PRISMA-E) guidance will be followed. This protocol has been registered on PROSPERO, ID: CRD42022334223. abstract_id: PUBMED:31402236 Identifying inequalities in childhood immunisation uptake and timeliness in southeast Scotland, 2008-2018: A retrospective cohort study. Background: In 2018, there was a record incidence of measles and other vaccine-preventable diseases across developed countries. Declining childhood immunisation uptake in southeast Scotland-an area with a large, highly mobile, and socioeconomically diverse population-threatens regional herd immunity and warrants investigation of suboptimal coverage. As deprivation of social and material resources increases risk of non-vaccination, we examined here the relationship between deprivation, uptake, and timeliness for four routine childhood vaccines and identified trends over the past decade. Methods: This retrospective cohort study analysed immunisation data from the Scottish Immunisation Recall System (SIRS) for four routine childhood vaccines in the UK: the third dose of the primary vaccine (TPV), both doses of measles, mumps, rubella (MMR 1 and MMR 2), and the preschool booster (PSB). Immunisations (N = 329,897) were administered between 2008 and 2018. Deprivation was measured via the Scottish Index of Multiple Deprivation (SIMD), ranking postcodes by deprivation decile. Chi-squared tests and Cox proportional hazards models assessed the relationship between uptake, timeliness, and deprivation. Results: There is strong evidence for an association between deprivation, uptake, and timeliness. Uptake for all childhood immunisations is very high, especially for TPV and MMR 1 (>98.0%), though certain deprivation deciles exhibit increased risks of non-vaccination for all vaccines. Delay was pronounced for the 40% most deprived population and for immunisations scheduled at later ages. Absolute PSB and MMR 2 uptake has improved since 2008; however, disparities in uptake have increased for all vaccines since the 2006 birth cohort. Conclusion: Both timeliness and uptake are strongly associated with deprivation. While absolute uptake was high for all vaccines, relative uptake and timeliness have been worsening for most groups; the reason for this decline is unclear. Here we identified subgroups that may require targeted interventions to facilitate uptake and timeliness for essential childhood vaccines. abstract_id: PUBMED:33965331 Uptake of vaccination in pregnancy.
Maternal immunisation is a public health strategy that aims to provide protection against certain infections to both the mother and her foetus or newborn child. Vaccination of pregnant women induces vaccine-specific antibodies that lead to the subsequent transfer of these antibodies across the placenta or through breastfeeding to the offspring. At present, vaccinations in pregnancy are limited to pertussis, tetanus, diphtheria, polio, and the seasonal influenza vaccine. Recently, some countries have incorporated routine antenatal vaccinations in their national immunisation programmes. Future vaccines targeted at pregnant women, such as respiratory syncytial virus (RSV) and Group B streptococcus (GBS) vaccines, are under development. The recently approved COVID-19 vaccines have no safety data for use in pregnancy at present, but have been considered in the UK in extremely vulnerable pregnant women or pregnant frontline health and social care workers. In this article, we review the evidence supporting maternal immunisation and discuss the uptake of vaccines in pregnant women, challenges of recording the data on vaccine coverage, and consider reasons behind the present levels of uptake and strategies for future improvements. abstract_id: PUBMED:32295233 Inequalities in Rotavirus Vaccine Uptake in Ethiopia: A Decomposition Analysis. A previous study in Ethiopia reported significant variation in rotavirus vaccine uptake across socioeconomic strata. This study aims to quantify socioeconomic inequality of rotavirus vaccine uptake in Ethiopia and to identify the contributing factors for the inequality. The concentration curve (CC) and the Erreygers Normalized Concentration Index (ECI) were used to assess the socioeconomic-related inequality in rotavirus vaccine uptake using data from the 2016 Ethiopian Demographic and Health Survey. Decomposition analysis was conducted to identify the drivers of inequalities. The CC for rotavirus vaccine uptake lay below the line of equality and the ECI was 0.270 (p < 0.001), indicating that uptake of rotavirus vaccine in Ethiopia was significantly concentrated among children from families with better socioeconomic status. The decomposition analysis showed that underlying inequalities in maternal health care services utilization, including antenatal care use (18.4%) and institutional delivery (8.1%), exposure to media (12.8%), and maternal educational level (9.7%) were responsible for the majority of observed inequalities in the uptake of rotavirus vaccine. The findings suggested that there is significant socioeconomic inequality in rotavirus vaccine uptake in Ethiopia. Multi-sectoral actions are required to reduce the inequalities, including increasing maternal health care services and educational attainment among economically disadvantaged mothers. Answer: Interventions designed to improve immunisation uptake do not necessarily reduce social inequalities in uptake. For instance, a study in Northumberland found that despite a substantial increase in immunisation uptake, inequalities between deprived and affluent areas persisted or even became wider. It was observed that any reduction in inequality only occurred after uptake in affluent areas approached 95% (PUBMED:8173457). Similarly, a systematic review of behavioral weight management interventions, which can be considered analogous to immunisation interventions in terms of public health measures, indicated that most trials found no inequalities gradient.
However, when a gradient was observed for trial uptake, intervention adherence, and trial attrition, those considered "more advantaged" did best (PUBMED:35243743). In the context of North-Eastern Nigeria, safety concerns, preference for herbs and charms, cultural and religious beliefs, and the perception of vaccination as a Western practice were major barriers to immunisation uptake. The study suggested that upscaling the involvement of critical stakeholders in advocacy, policy development, and implementation of immunization activities may improve acceptance and create demand in vulnerable communities (PUBMED:38162329). A systematic review of factors influencing childhood immunisation uptake in Africa found that various modifiable and non-modifiable factors influenced uptake. The review suggested that immunisation health education interventions among pregnant women could improve childhood immunisation uptake in African countries with poor coverage rates (PUBMED:34320942). Research on older individuals living alone in Europe found that living alone was associated with lower vaccine uptake. The study highlighted the need for targeted interventions to reduce health inequalities and improve vaccine uptake among specific social groups (PUBMED:28343775). A study on Gypsies, Roma, and Travellers in the UK identified that interventions at the institutional and policy levels of the Social Ecological Model could improve immunisation uptake. The "upstream" nature of these interventions underscores the importance of GP practices, frontline workers, and wider NHS systems in improving immunisation uptake (PUBMED:33081730). In southeast Scotland, an association between deprivation, uptake, and timeliness for routine childhood vaccines was observed. The study identified subgroups that may require targeted interventions to facilitate uptake and timeliness for essential childhood vaccines (PUBMED:31402236). In summary, while interventions can improve overall immunisation uptake, they do not automatically reduce social inequalities in uptake. Targeted strategies that address specific barriers faced by disadvantaged groups are necessary to reduce these inequalities (PUBMED:8173457, PUBMED:35243743, PUBMED:38162329, PUBMED:34320942, PUBMED:28343775, PUBMED:33081730, PUBMED:31402236).
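The summary effect sizes cited above (e.g., the summary OR of 1.39 for not living alone) come from pooling study-level estimates. As a minimal sketch of how such a figure is produced, the fixed-effect inverse-variance calculation is shown below; the per-study values are invented for illustration and are not the actual data behind the cited meta-analysis (PUBMED:28343775).

```python
import math

# Hypothetical per-study odds ratios with 95% CIs (illustrative only).
studies = [(1.25, 0.95, 1.64), (1.50, 1.10, 2.05), (1.40, 1.05, 1.87)]

weights, log_ors = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    weights.append(1 / se ** 2)                      # inverse-variance weight
    log_ors.append(math.log(or_))

pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"summary OR = {math.exp(pooled):.2f} "
      f"(95% CI: {math.exp(pooled - 1.96 * se_pooled):.2f}"
      f"-{math.exp(pooled + 1.96 * se_pooled):.2f})")
```

A random-effects model, as the cited review used "when appropriate", would additionally inflate each study's variance by a between-study variance term before weighting.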
Instruction: Should the extrahepatic bile duct be resected for locally advanced gallbladder cancer? Abstracts: abstract_id: PUBMED:25663014 Combined extrahepatic bile duct resection for locally advanced gallbladder carcinoma: does it work? Background: Prophylactic combined extrahepatic bile duct resection remains controversial for locally advanced gallbladder carcinoma without extrahepatic bile duct invasion. The aim of this study was to resolve this issue and establish an appropriate surgery for locally advanced gallbladder carcinoma. Methods: A total of 52 patients underwent surgical resection combined with extrahepatic bile duct resection for locally advanced gallbladder carcinoma without extrahepatic bile duct invasion, and their medical records were retrospectively reviewed for microvessel invasion (MVI), including lymphatic, venous, and/or perineural invasion, around the extrahepatic bile duct. Results: Of the 52 patients, 8 (15%) had MVI around the extrahepatic bile duct. All of the 8 patients had Stage IV disease. According to a survival analysis of the 50 patients who tolerated surgery, MVI around the extrahepatic bile duct and distant metastasis were identified as independent prognostic factors. Survival for patients with MVI around the extrahepatic bile duct was dismal, with a lack of 2-year survivors. Conclusions: MVI around the extrahepatic bile duct is a sign of extremely locally advanced gallbladder carcinoma; therefore, prophylactic combined bile duct resection has no survival impact for patients without extrahepatic bile duct invasion. abstract_id: PUBMED:15523394 Should the extrahepatic bile duct be resected for locally advanced gallbladder cancer? Background: The incidence and mode of spread of carcinoma of the gallbladder into the hepatoduodenal ligament have not been well described pathologically for gallbladder carcinoma extending into the subserosa and beyond. Methods: Between 1985 and 2002, 50 consecutive patients with gallbladder carcinoma extending into the subserosa or beyond underwent radical surgery, including extrahepatic bile duct resection. Serial sections of specimens of the resected extrahepatic bile ducts were examined to determine the incidence and the pattern of invasion of the hepatoduodenal ligament from the primary cancer. Results: Invasion of the hepatoduodenal ligament was present in 30 of the 50 specimens. Of these, 9 showed direct extramural spread (type I), 4 showed continuous intramural spread (type II), 5 showed distant spread separated from the primary tumor (type III), and 4 showed spread of cancer cells from metastatic lymph nodes (type IV). The remaining 8 patients had more than 1 type: 1 patient had types I + III; 3 had types I + III + IV; and 4 had types III + IV. Invasion of the hepatoduodenal ligament was present in 24 of 44 patients without preoperative obstructive jaundice and in 2 of 13 patients with stage IB disease. Patients with types II, III, and IV spread into the hepatoduodenal ligament had significantly better survival than those with type I spread. Conclusions: Gallbladder carcinoma extending into the subserosa or beyond invades the hepatoduodenal ligament with relatively high frequency. Preoperative diagnosis of this invasion is difficult; therefore, strong consideration should be given to resection of the extrahepatic bile ducts and lymph nodes. abstract_id: PUBMED:10022655 An extrahepatic bile duct metastasis from a gallbladder cancer mimicking Mirizzi's syndrome.
We report a case of an extrahepatic bile duct metastasis from a gallbladder cancer that mimicked Mirizzi's syndrome on cholangiography. A 67-year-old woman was admitted to our hospital with a diagnosis of acute calculous cholecystitis. As obstructive jaundice developed after admission, percutaneous transhepatic biliary drainage was performed to ameliorate the jaundice and to evaluate the biliary system. Tube cholangiography revealed bile duct obstruction at the hepatic hilus, and extrinsic compression of the lateral aspect of the common hepatic duct, with nonvisualization of the gallbladder. No impacted cystic duct stone was visualized on CT or ultrasonography. Laparotomy revealed a gallbladder tumor as well as an extrahepatic bile duct tumor. We diagnosed that the latter was a metastasis from the gallbladder cancer, based on the histopathological features. This case is unique in that the extrahepatic bile duct metastasis obstructed both the common hepatic duct and the cystic duct, giving the appearance of Mirizzi's syndrome on cholangiography. Metastatic bile duct tumors that mimic Mirizzi's syndrome have not been previously reported. The presence of this condition should be suspected in patients with the cholangiographic features of Mirizzi's syndrome, when the CT or ultrasonographic findings fail to demonstrate an impacted cystic duct stone. abstract_id: PUBMED:19779773 Should the extrahepatic bile duct be resected or preserved in R0 radical surgery for advanced gallbladder carcinoma? Results of a Japanese Society of Biliary Surgery Survey: a multicenter study. Purpose: We assessed the significance of an extrahepatic bile duct resection by comparing the survival of patients with advanced gallbladder carcinoma who had resected bile ducts with those who had preserved bile ducts. A radical cholecystectomy that includes extrahepatic bile duct resection has been performed without any clear evidence of whether such a resection is preventive or curative. Methods: We conducted a questionnaire survey among clinicians who belonged to the 114 member institutions of the Japanese Society of Biliary Surgery. The questionnaires included questions on the preoperative diagnosis, complications, treatment, surgical treatment, resection procedures, surgical results, pathological and histological findings, mode and site of recurrence, and the need for additional postoperative treatment. A total of 4243 patients who had gallbladder carcinoma and were treated from January 1, 1994 to December 31, 2003 were identified. The final analysis included 838 R0 patients with pT2, pT3, or pT4 advanced carcinoma of the gallbladder without cancer invasion of the hepatoduodenal ligament or cystic duct. Results: The 5-year cumulative survival, postoperative complications, postoperative lymph node metastasis, and local recurrence along the hepatoduodenal ligament were not substantially different between the resected bile duct and the preserved bile duct groups. Conclusions: Our retrospective questionnaire survey showed that an extrahepatic bile duct resection had no preventive value in some patients with advanced gallbladder carcinoma in comparison to similar patients who had no such bile duct resection. An extrahepatic bile duct resection may therefore be unnecessary in advanced gallbladder carcinoma without a direct infiltration of the hepatoduodenal ligament and the cystic duct. abstract_id: PUBMED:29470881 Synchronous carcinosarcoma of the extrahepatic bile duct and gallbladder.
Carcinosarcoma is a malignant neoplasm characterized by intermingled epithelial and mesenchymal components. Case Report: A suspected preoperative diagnosis allows radical therapy, potentially avoiding a very poor prognosis. We report on a male patient who was operated on in our service with a diagnosis of synchronous carcinosarcoma of the gallbladder and extrahepatic bile duct, together with a review of the medical literature. Discussion: A gallbladder carcinosarcoma showing extension into the common bile duct is very rare, a carcinosarcoma of the bile duct is exceptional, and a synchronous carcinosarcoma of the bile duct and gallbladder has not been reported previously. abstract_id: PUBMED:33400191 Adenomyomatous hyperplasia of the extrahepatic bile duct: a systematic review of a rare lesion mimicking bile duct carcinoma. Adenomyomatous hyperplasia (AH) is a tumor-like inflammatory hyperplastic lesion. In the biliary system, AH commonly arises in the gallbladder, but AH of the extrahepatic bile duct is extremely rare. AH usually develops and is found with symptoms related to biliary stenosis or obstruction, but there are few disease-specific manifestations. It is difficult to make a definitive diagnosis by imaging or cytopathological examination; thus, surgical resections were performed in all previously reported cases. The pathophysiological etiology of AH is unknown, but it is considered to be associated with chronic inflammation. According to the epidemiological findings of cases reported to date, the possibility of malignant transformation is considered to be negative. However, the symptoms and imaging findings of AH are difficult to distinguish from those of early-stage bile duct carcinoma. In the current review, we discuss the epidemiology, pathophysiology, diagnosis, and management of AH of the bile duct. abstract_id: PUBMED:7520348 An immunohistochemical study of p53 protein in gallbladder and extrahepatic bile duct/ampullary carcinomas. Background: p53 mutations are known to occur frequently in human cancers, including gallbladder carcinoma. However, there has been no study of p53 expression in extrahepatic bile duct/ampullary carcinoma. Furthermore, gallbladder carcinoma is associated with cholelithiasis, whereas no such association is known for extrahepatic bile duct carcinoma, suggesting that they could arise from different pathogenetic mechanisms. Methods: Twenty-four gallbladder carcinomas and 35 extrahepatic bile duct/ampullary carcinomas were stained with an anti-human p53 protein monoclonal antibody by the streptavidin-biotin immunoperoxidase method. Both the extent and intensity of p53 protein staining were noted. Results: Ninety-two percent of the gallbladder carcinomas stained for p53 protein compared with only 66% of the extrahepatic bile duct/ampullary carcinomas. The statistical significance was maintained even when the comparison was restricted to strong p53 staining in moderately to poorly differentiated adenocarcinomas (P < 0.05). Of the gallbladder carcinomas, poorly differentiated adenocarcinomas stained more strongly than well to moderately differentiated adenocarcinomas; the converse was true for extrahepatic bile duct/ampullary adenocarcinomas. Conclusion: The majority of gallbladder and extrahepatic bile duct/ampullary carcinomas stain for p53 protein. The incidence and pattern of staining are different, however, and support the contention that these could be different tumors with differing etiologies and pathogenetic mechanisms.
abstract_id: PUBMED:28090223 To Resect or Not to Resect Extrahepatic Bile Duct in Gallbladder Cancer? The indications for and limitations of extrahepatic bile duct resection (EHBDR) in the context of gallbladder (GB) cancer are unclear. The purpose of this review was to examine the current literature to determine the impact of EHBDR on loco-regional recurrence and survival in GB cancer. The EMBASE and Medline databases were searched up to February 2016 using the terms: extrahepatic bile duct resection and gallbladder cancer. Studies published in the last 20 years were eligible for inclusion. Given the heterogeneity of the population and the study methodologies employed, quantitative data synthesis in the form of meta-analysis was deemed implausible. Twenty-four studies fulfilled the inclusion criteria. The selected studies include 6,722 (55%) EHBDRs in a total of 12,251 GB cancer operations. These studies were categorized into seven groups: 1) cancer survival all stages; 2) hepatoduodenal ligament invasion; 3) outcome in EHBDR and EHBDNR; 4) pT1b tumors; 5) pT2 tumors; 6) pT3/T4 tumors; and 7) incidental GB cancer. Radical cholecystectomy with EHBDR should be used as a standard operation for tumors involving the neck or the cystic duct of the GB (either macroscopically or microscopically). In all other cases, operative strategy should be individualized to the patient. abstract_id: PUBMED:12485219 Gall bladder cancer, extrahepatic bile duct cancer and ampullary carcinoma in New Zealand: Demographics, pathology and survival. Introduction: The aim of the present paper was to document the incidence of gall bladder cancer, cancer of the extrahepatic bile ducts and ampullary carcinoma in New Zealand. Methods: Data were collected from the New Zealand Cancer Registry from 1980 to 1997 and combined with national census statistics to give crude and age standardized incidence rates. Results: Over the 18-year study period, 226 carcinomas of the ampulla of Vater, 608 gall bladder cancers, and 486 extrahepatic cholangiocarcinomas were registered. The age standardized incidence rates for gall bladder carcinoma in all New Zealanders were 0.41/100 000 in men and 0.74/100 000 in women. The age standardized incidence rates for gall bladder cancer in Maori were 1.49/100 000 in men and 1.59/100 000 in women. The corresponding age standardized incidence rates for extrahepatic bile duct cancers were 0.67/100 000 in men and 0.45/100 000 in women. There were insufficient cases to calculate an age standardized incidence in Maori or Pacific Islanders. For carcinoma of the ampulla, the age standardized rates were 0.34/100 000 in men and 0.25/100 000 in women. There were insufficient cases to calculate an age standardized incidence rate for Maori or Pacific Islanders. When histology was defined, adenocarcinoma was the most common form of cancer, occurring in 66% of gall bladder cancers, 91% of extrahepatic bile duct cancers and 70% of ampullary cancers. Most tumours were advanced at presentation, with regional or distant metastases present in 72% of gall bladder cancers, 63% of extrahepatic bile duct cancers and 69% of ampullary tumours at diagnosis. Survival was poor, with median survivals of 86 days, 151 days and 440 days recorded for gall bladder cancer, extrahepatic bile duct cancer and ampullary cancer, respectively.
Conclusions: The demographic profile, pathology and survival of patients with gall bladder cancer, extrahepatic bile duct cancer and ampullary carcinoma are similar in New Zealand to those of other Western countries. However, New Zealand Maori have a relatively high incidence of gall bladder cancer, and the incidence is equal in both Maori men and women, while cancers of the extrahepatic bile duct and ampulla of Vater are rare in Maori. In comparison, cancers of the gall bladder, extrahepatic bile ducts and ampulla are rare in Pacific Islanders. abstract_id: PUBMED:20572800 Primary hematolymphoid malignancies involving the extrahepatic bile duct or gallbladder. Primary hematolymphoid malignancies of the extrahepatic biliary tract are rare tumors. We report five cases of primary hematolymphoid malignancies involving the extrahepatic biliary tract. One is a granulocytic sarcoma of the extrahepatic bile duct, another is an extramedullary plasmacytoma of the gallbladder, and the others are two non-Hodgkin lymphomas of the extrahepatic bile duct and one of the gallbladder. The clinical presentations, radiographic studies, and gross findings at surgery have not been a significant help in differential diagnosis. Although a preoperative diagnosis of primary hematolymphoid malignancy is very difficult to reach because of the rarity of this disease, it should be considered, because, if an accurate diagnosis is made before surgical intervention, chemotherapy is the most appropriate treatment. In limited cases mimicking cholangiocarcinoma and gallbladder cancer, surgical resection followed by chemotherapy has a valid role as a reasonable treatment for patients. Answer: The decision to resect the extrahepatic bile duct for locally advanced gallbladder cancer is complex and depends on various factors, including the extent of the disease and the potential for improved survival outcomes. Some studies suggest that resection of the extrahepatic bile duct should be considered in cases where there is invasion of the hepatoduodenal ligament, as gallbladder carcinoma extending into the subserosa or beyond invades the hepatoduodenal ligament with relatively high frequency (PUBMED:15523394). However, the survival benefit of this approach is not clear-cut. For instance, a study found that microvessel invasion (MVI) around the extrahepatic bile duct is a sign of extremely locally advanced gallbladder carcinoma, and prophylactic combined bile duct resection has no survival impact for patients without extrahepatic bile duct invasion (PUBMED:25663014).
It is not universally recommended, especially in the absence of direct invasion into the hepatoduodenal ligament or cystic duct.
Instruction: Can proton MR spectroscopic and perfusion imaging differentiate between neoplastic and nonneoplastic brain lesions in adults? Abstracts: abstract_id: PUBMED:18055564 Can proton MR spectroscopic and perfusion imaging differentiate between neoplastic and nonneoplastic brain lesions in adults? Background And Purpose: Noninvasive diagnosis of brain lesions is important for the correct choice of treatment. Our aims were to investigate whether 1) proton MR spectroscopic imaging ((1)H-MRSI) can aid in differentiating between tumors and nonneoplastic brain lesions, and 2) perfusion MR imaging can improve the classification. Materials And Methods: We retrospectively examined 69 adults with untreated primary brain lesions (brain tumors, n = 36; benign lesions, n = 10; stroke, n = 4; demyelination, n = 10; and stable lesions not confirmed on pathologic examination, n = 9). MR imaging and (1)H-MRSI were performed at 1.5T before biopsy or treatment. Concentrations of N-acetylaspartate (NAA), creatine (Cr), and choline (Cho) in the lesion were expressed as metabolite ratios and were normalized to the contralateral hemisphere. Dynamic susceptibility contrast-enhanced perfusion MR imaging was performed in a subset of patients (n = 32); relative cerebral blood volume (rCBV) was evaluated. Discriminant function analysis was used to identify variables that can predict inclusion in the neoplastic or nonneoplastic lesion groups. Receiver operator characteristic (ROC) analysis was used to compare the discriminatory capability of (1)H-MRSI and perfusion MR imaging. Results: The discriminant function analysis correctly classified 84.2% of original grouped cases (P &lt; .0001), on the basis of NAA/Cho, Cho(norm), NAA(norm), and NAA/Cr ratios. MRSI and perfusion MR imaging had similar discriminatory capabilities in differentiating tumors from nonneoplastic lesions. With cutoff points of NAA/Cho &lt; or =0.61 and rCBV &gt; or =1.50 (corresponding to diagnosis of the tumors), a sensitivity of 72.2% and specificity of 91.7% in differentiating tumors from nonneoplastic lesions were achieved. Conclusion: These results suggest a promising role for (1)H-MRSI and perfusion MR imaging in the distinction between brain tumors and nonneoplastic lesions in adults. abstract_id: PUBMED:9367317 Accuracy of single-voxel proton MR spectroscopy in distinguishing neoplastic from nonneoplastic brain lesions. Purpose: To measure the accuracy of single-voxel, image-guided proton MR spectroscopy in distinguishing normal from abnormal brain tissue and neoplastic from nonneoplastic brain disease. Methods: MR spectroscopy was performed at 0.5 T with the point-resolved spectroscopic pulse sequence and conventional postprocessing techniques. Subjects consisted of a consecutive series of patients with suspected brain neoplasms or recurrent neoplasia and 10 healthy adult volunteers. Fifty-five lesions in 53 patients with subsequently verified final diagnoses were included. Spectra were interpreted qualitatively by visual inspection by nonblinded readers (prospectively) with the benefit of prior clinical data and imaging studies, and by blinded readers (retrospectively). The nonblinded readers interpreted the spectra as diagnostic or not, and, if diagnostic, as neoplastic or nonneoplastic. The blinded readers classified the spectra as diagnostic or not, and, if diagnostic, as normal or abnormal and as neoplastic or nonneoplastic (when abnormal). 
The sensitivity, specificity, positive and negative predictive values, and accuracy were calculated from blinded and nonblinded MR spectroscopy interpretations. A receiver operator characteristic (ROC) curve analysis was performed on blinded MR spectroscopy interpretations. Results: The diagnostic accuracy averaged across four blinded readers in differentiating patients from control subjects was .96, while the area under the aggregate (pooled interpretations) ROC curve approached unity. Accuracy in the nonblinded and blinded discrimination of neoplastic from nonneoplastic disease was .96 and .83, respectively. The area under the aggregate ROC curve in the blinded discrimination of neoplasm from nonneoplasm was .89. Conclusions: Image-guided proton spectra obtained at 0.5 T from patients with suspected neoplasia can be distinguished from spectra in healthy control subjects, and neoplastic spectra can be distinguished from nonneoplastic spectra with a high degree of diagnostic accuracy. abstract_id: PUBMED:16374884 Proton magnetic resonance spectroscopic imaging to differentiate between nonneoplastic lesions and brain tumors in children. Purpose: To investigate whether in vivo proton magnetic resonance spectroscopic imaging (MRSI) can differentiate between 1) tumors and nonneoplastic brain lesions, and 2) high- and low-grade tumors in children. Materials And Methods: Thirty-two children (20 males and 12 females, mean age = 10 ± 5 years) with primary brain lesions were evaluated retrospectively. Nineteen patients had a neuropathologically confirmed brain tumor, and 13 patients had a benign lesion. Multislice proton MRSI was performed at TE = 280 msec. Ratios of N-acetyl aspartate/choline (NAA/Cho), NAA/creatine (Cr), and Cho/Cr were evaluated in the lesion and the contralateral hemisphere. Normalized lesion peak areas (Cho(norm), Cr(norm), and NAA(norm)) expressed relative to the contralateral hemisphere were also calculated. Discriminant function analysis was used for statistical evaluation. Results: Considering all possible combinations of metabolite ratios, the best discriminant function to differentiate between nonneoplastic lesions and brain tumors was found to include only the ratio of Cho/Cr (Wilks' lambda, P = 0.012; 78.1% of original grouped cases correctly classified). The best discriminant function to differentiate between high- and low-grade tumors included the ratios of NAA/Cr and Cho(norm) (Wilks' lambda, P = 0.001; 89.5% of original grouped cases correctly classified). Cr levels in low-grade tumors were slightly lower than or comparable to control regions and ranged from 53% to 165% of the control values in high-grade tumors. Conclusion: Proton MRSI may have a promising role in differentiating pediatric brain lesions, and an important diagnostic value, particularly for inoperable or inaccessible lesions. abstract_id: PUBMED:11498420 Proton MR spectroscopic evaluation of suspicious brain lesions after stereotactic radiotherapy. Background And Purpose: The radiologic assessment of suspicious brain lesions after stereotactic radiotherapy of brain tumors is difficult. The purpose of our study was to define parameters from single-voxel proton MR spectroscopy that provide a probability measure for differentiating neoplastic from radiation-induced, nonneoplastic lesions. Methods: Seventy-two lesions in 56 patients were examined using a combined MR imaging and MR spectroscopy protocol (point-resolved spectroscopy, TE = 135 ms).
Signal intensities of cholines, creatines, N-acetyl aspartate, and the presence of lactate and lipid resonances were correlated to final diagnoses established by clinical and MR imaging follow-up, positron emission tomography studies, or biopsy/surgery. Statistical analysis was performed using the t test, linear discriminant analysis, and k nearest-neighbor method. Results: Significantly increased signal intensity ratios I(tCho)/I(tCr) (P < .0001) and I(tCho)/I(NAA) (P < .0001) were observed in neoplastic (n = 34) compared with nonneoplastic lesions (n = 32) and contralateral normal brain (n = 33). Analysis of I(tCho)/I(tCr) and I(tCho)/I(NAA) data yielded correct retrospective classification as neoplastic and nonneoplastic in 82% and 81% of the lesions, respectively. Neither I(NAA)/I(tCr) nor signal intensities of lactate or lipids were useful for differential diagnosis. Conclusion: Metabolic information provided by proton MR spectroscopy is useful for the differentiation of neoplastic and nonneoplastic brain lesions after stereotactic radiotherapy of brain tumors. abstract_id: PUBMED:9802493 Single-voxel proton MR spectroscopy of nonneoplastic brain lesions suggestive of a neoplasm. Background And Purpose: MR spectroscopy is used to characterize biochemical components of normal and abnormal brain tissue. We sought to evaluate common histologic findings in a diverse group of nonneoplastic diseases in patients with in vivo MR spectroscopic profiles suggestive of a CNS neoplasm. Methods: During a 2-year period, 241 patients with suspected neoplastic CNS lesions detected on MR images were studied with MR spectroscopy. Of these, five patients with a nonneoplastic diagnosis were identified retrospectively; a sixth patient without tissue diagnosis was added. MR spectroscopic findings consistent with a neoplasm included elevated choline and decreased N-acetylaspartate and creatine, with or without detectable mobile lipid and lactate peaks. Results: The histologic specimens in all five patients for whom tissue diagnoses were available showed significant WBC infiltrates, with both interstitial and perivascular accumulations of lymphocytes, macrophages, histiocytes, and (in one case) plasma cells. Reactive astrogliosis was also prominent in most tissue samples. This cellular immune response was an integral component of the underlying disorder in these patients, including fulminant demyelination in two patients, human herpesvirus 6 encephalitis in one patient, organizing hematoma from a small arteriovenous malformation in one patient, and inflammatory pseudotumor in one patient. Although no histologic data were available in the sixth patient, neoplasm was considered unlikely on the basis of ongoing clinical and neuroradiologic improvement without specific therapy. Conclusion: Nonneoplastic disease processes in the CNS may elicit a reactive proliferation of cellular elements of the immune system and of glial tissue that is associated with MR spectroscopic profiles indistinguishable from CNS neoplasms with current in vivo MR spectroscopic techniques. Such false-positive findings substantiate the need for histologic examination of tissue as the standard of reference for the diagnosis of intracranial mass lesions. abstract_id: PUBMED:24324699 The role of dynamic susceptibility contrast-enhanced perfusion MR imaging in differentiating between infectious and neoplastic focal brain lesions: results from a cohort of 100 consecutive patients.
Background And Purpose: Differentiating between infectious and neoplastic focal brain lesions that are detected by conventional structural magnetic resonance imaging (MRI) may be a challenge in routine practice. Brain perfusion-weighted MRI (PWI) may be employed as a complementary non-invasive tool, providing relevant data on hemodynamic parameters, such as the degree of angiogenesis of lesions. We aimed to employ dynamic susceptibility contrast-enhanced perfusion MR imaging (DSC-MRI) to differentiate between infectious and neoplastic brain lesions by investigating brain microcirculation changes. Materials And Methods: DSC-MRI perfusion studies of one hundred consecutive patients with non-cortical neoplastic (n = 54) and infectious (n = 46) lesions were retrospectively assessed. MRI examinations were performed using a 1.5-T scanner. A preload of paramagnetic contrast agent (gadolinium) was administered 30 seconds before acquisition of dynamic images, followed by a standard dose 10 seconds after starting imaging acquisitions. The relative cerebral blood volume (rCBV) values were determined by calculating the regional cerebral blood volume in the solid areas of lesions, normalized to that of the contralateral normal-appearing white matter. Discriminant analyses were performed to determine the cutoff point of rCBV values that would allow the differentiation of neoplastic from infectious lesions and to assess the corresponding diagnostic performance of rCBV when using this cutoff value. Results: Neoplastic lesions had higher rCBV values (4.28±2.11) than infectious lesions (0.63±0.49) (p < 0.001). When using an rCBV value <1.3 as the parameter to define infectious lesions, the sensitivity of the method was 97.8% and the specificity was 92.6%, with a positive predictive value of 91.8%, a negative predictive value of 98.0%, and an accuracy of 95.0%. Conclusion: PWI is a useful complementary tool in distinguishing between infectious and neoplastic brain lesions; an elevated discriminatory value for diagnosis of infectious brain lesions was observed in this sample of patients when the rCBV cutoff value was set to 1.3. abstract_id: PUBMED:29890916 Arterial spin labeling perfusion: Prospective MR imaging in differentiating neoplastic from non-neoplastic intra-axial brain lesions. Purpose: The purpose of this article is to assess the diagnostic performance of arterial spin-labeling (ASL) magnetic resonance perfusion imaging to differentiate neoplastic from non-neoplastic brain lesions. Material And Methods: This prospective study included 60 consecutive, newly diagnosed, untreated patients with intra-axial lesions with perilesional edema (PE) who underwent clinical magnetic resonance imaging including ASL sequences at 3T. Region of interest analysis was performed to obtain mean cerebral blood flow (CBF) values from lesion (L), PE and normal contralateral white matter (CWM). Normalized (n) CBF ratio was obtained by dividing the mean CBF value of L and PE by mean CBF value of CWM. Discriminant analyses were performed to determine the best cutoff value of nCBFL and nCBFPE in differentiating neoplastic from non-neoplastic lesions. Results: Thirty patients were in the neoplastic group (15 high-grade gliomas (HGGs), 15 metastases) and 30 in the non-neoplastic group (12 tuberculomas, 10 neurocysticercosis, four abscesses, two fungal granulomas and two tumefactive demyelinating lesions) based on final histopathology and clinicoradiological diagnosis.
We found higher nCBFL (6.65 ± 4.07 vs 1.68 ± 0.80, p < 0.001) and nCBFPE (1.86 ± 1.43 vs 0.74 ± 0.21, p < 0.001) values in the neoplastic group than in the non-neoplastic group. For predicting neoplastic lesions, we found an nCBFL cutoff value of 1.89 (AUC 0.917; 95% CI 0.854 to 0.980; sensitivity 90%; specificity 73%) and nCBFPE value of 0.76 (AUC 0.783; 95% CI 0.675 to 0.891; sensitivity 80%; specificity 58%). Mean nCBFL was higher in HGGs (8.70 ± 4.16) compared to tuberculomas (1.98 ± 0.87), and nCBFPE was higher in HGGs (3.06 ± 1.53) compared to metastases (0.86 ± 0.34) and tuberculomas (0.73 ± 0.22) (p < 0.001). Conclusion: ASL perfusion may help in distinguishing neoplastic from non-neoplastic brain lesions. abstract_id: PUBMED:9769815 Focal brain lesions: effect of single-voxel proton MR spectroscopic findings on treatment decisions. Purpose: To determine the influence of single-voxel proton magnetic resonance (MR) spectroscopic findings on the treatment of patients suspected of having a brain tumor. Materials And Methods: Medical records were reviewed in 78 patients who underwent MR spectroscopy for evaluation of a focal brain mass suspected of being neoplastic. MR spectroscopic findings were positive for neoplasm in 49 patients and negative in 29. Treatment with or without performance of biopsy was noted. In patients with positive findings who underwent irradiation or chemotherapy without biopsy and in patients with negative findings who were treated medically or followed up for interval changes, MR spectroscopy was classified as having a potential positive influence on treatment. In patients with positive findings with subsequently proved nonneoplastic lesions and in patients with negative findings with subsequently proved tumors, MR spectroscopy was classified as having a potential negative influence. Results: MR spectroscopy in eight (16%) patients with positive findings and in 15 (52%) patients with negative findings had a potential positive influence on treatment. In two (3%) patients, MR spectroscopy had a potential negative influence. Conclusion: MR spectroscopy may play a beneficial role in the management of suspected brain tumors. Prospective studies are needed to test the effect of MR spectroscopy on clinical practice and to measure costs and benefits. abstract_id: PUBMED:24467341 Evaluation of standard magnetic resonance characteristics used to differentiate neoplastic, inflammatory, and vascular brain lesions in dogs. Magnetic resonance (MR) imaging characteristics are commonly used to help predict intracranial disease categories in dogs; however, few large studies have objectively evaluated these characteristics. The purpose of this retrospective study was to evaluate MR characteristics that have been used to differentiate neoplastic, inflammatory, and vascular intracranial diseases in a large, multi-institutional population of dogs. Medical records from three veterinary teaching hospitals were searched over a 6-year period for dogs that had diagnostic quality brain MR scans and histologically confirmed intracranial disease. Three examiners who were unaware of histologic diagnosis independently evaluated 19 MR lesion characteristics totaling 57 possible responses. A total of 75 dogs with histologically confirmed intracranial disease were included in analyses: 51 with neoplasia, 18 with inflammatory disease, and six with cerebrovascular disease. Only strong contrast enhancement was more common in neoplasia than other disease categories.
A multivariable statistical model suggested that extra-axial origin, T2-FLAIR mixed intensity, and defined lesion margins were also predictive of neoplasia. Meningeal enhancement, irregular lesion shape, and multifocal location distinguished inflammatory diseases from the other disease categories. No MR characteristics distinguished vascular lesions and these appeared most similar to neoplasia. These results differed from a previous report describing seven MR characteristics that were predictive of neoplasia in dogs and cats. Findings from the current study indicated that the high performance of MR for diagnosing canine intracranial diseases might be due to evaluator recognition of combinations of MR characteristics vs. relying on any one MR characteristic alone. abstract_id: PUBMED:22016411 MR imaging of neoplastic central nervous system lesions: review and recommendations for current practice. MR imaging is the preferred technique for the diagnosis, treatment planning, and monitoring of patients with neoplastic CNS lesions. Conventional MR imaging, with gadolinium-based contrast enhancement, is increasingly combined with advanced, functional MR imaging techniques to offer morphologic, metabolic, and physiologic information. This article provides updated recommendations to neuroradiologists, neuro-oncologists, neurosurgeons, and radiation oncologists on the practical applications of MR imaging of neoplastic CNS lesions in adults, with particular focus on gliomas, based on a review of the clinical trial evidence and personal experiences shared at a recent international meeting of experts in neuroradiology, neuro-oncology, neurosurgery, and radio-oncology. Answer: Yes, proton MR spectroscopic (1H-MRSI) and perfusion imaging can differentiate between neoplastic and nonneoplastic brain lesions in adults. Studies have shown that these imaging techniques can aid in the noninvasive diagnosis of brain lesions, which is crucial for determining the appropriate treatment strategy. For instance, a study involving 69 adults with untreated primary brain lesions demonstrated that discriminant function analysis based on metabolite ratios from 1H-MRSI correctly classified 84.2% of the cases as either neoplastic or nonneoplastic. The metabolite ratios used included N-acetylaspartate (NAA) to choline (Cho), normalized Cho (Cho(norm)), normalized NAA (NAA(norm)), and NAA to creatine (Cr). Additionally, perfusion MR imaging, specifically relative cerebral blood volume (rCBV), was evaluated and found to have similar discriminatory capabilities to 1H-MRSI. Using cutoff points of NAA/Cho ≤0.61 and rCBV ≥1.50, a sensitivity of 72.2% and specificity of 91.7% were achieved in differentiating tumors from nonneoplastic lesions (PUBMED:18055564). Another study using single-voxel proton MR spectroscopy at 0.5 T reported high diagnostic accuracy in distinguishing neoplastic from nonneoplastic brain disease, with an accuracy of .96 for nonblinded interpretations and .83 for blinded interpretations (PUBMED:9367317). Furthermore, proton MRSI has been found to have a promising role in differentiating pediatric brain lesions, which suggests its utility may extend to various age groups (PUBMED:16374884). In the context of post-treatment evaluation, proton MR spectroscopy has been useful in differentiating neoplastic from radiation-induced, nonneoplastic lesions after stereotactic radiotherapy of brain tumors (PUBMED:11498420). 
However, it is important to note that nonneoplastic disease processes in the CNS can sometimes elicit MR spectroscopic profiles similar to CNS neoplasms, which can lead to false-positive findings. This underscores the need for histologic examination of tissue as the standard reference for the diagnosis of intracranial mass lesions (PUBMED:9802493). In summary, proton MR spectroscopic and perfusion imaging have shown promise in differentiating between neoplastic and nonneoplastic brain lesions in adults, contributing to the decision-making process for treatment. However, the interpretation of these imaging modalities should be done with caution, and in some cases, histologic confirmation may still be necessary.
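To make the cutoff-based classification in the cited studies concrete, the sketch below computes sensitivity and specificity for the reported decision rule (call a lesion a tumor when NAA/Cho ≤0.61 and rCBV ≥1.50, per PUBMED:18055564). The lesion values are invented for illustration and are not patient data from any of the cited cohorts.

```python
# Hypothetical lesion measurements (illustrative only).
lesions = [
    {"naa_cho": 0.40, "rcbv": 2.1, "tumor": True},
    {"naa_cho": 0.55, "rcbv": 1.8, "tumor": True},
    {"naa_cho": 0.90, "rcbv": 1.1, "tumor": False},
    {"naa_cho": 0.75, "rcbv": 0.9, "tumor": False},
    {"naa_cho": 0.58, "rcbv": 1.2, "tumor": True},  # a tumor the rule misses
]

tp = fp = tn = fn = 0
for lesion in lesions:
    predicted = lesion["naa_cho"] <= 0.61 and lesion["rcbv"] >= 1.50
    if predicted and lesion["tumor"]:
        tp += 1
    elif predicted:
        fp += 1
    elif lesion["tumor"]:
        fn += 1
    else:
        tn += 1

sensitivity = tp / (tp + fn)  # fraction of true tumors flagged by the rule
specificity = tn / (tn + fp)  # fraction of non-tumors correctly cleared
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

Sliding the thresholds and recomputing these two rates at each setting is what the ROC analyses in the abstracts above summarize as an area under the curve.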
Instruction: Cardiovascular and metabolic responses to tap water ingestion in young humans: does the water temperature matter? Abstracts: abstract_id: PUBMED:24684853 Cardiovascular and metabolic responses to tap water ingestion in young humans: does the water temperature matter? Aim: Drinking water induces short-term cardiovascular and metabolic changes. These effects are considered to be triggered by gastric distension and osmotic factors, but little is known about the influence of water temperature. Methods: We determined, in a randomized crossover study, the acute cardiovascular and metabolic responses to 500 mL of tap water at 3 °C (cold), 22 °C (room) and 37 °C (body) in 12 young humans to ascertain an effect of water temperature. We measured continuous beat-to-beat haemodynamics, skin blood flux with laser-Doppler flowmetry and resting energy expenditure by indirect calorimetry, starting with a 30-min baseline followed by a 4-min drink period and a subsequent 90-min post-drink observation. Results: Ingestion of cold- and room-tempered water led to decreased heart rate (P < 0.01) and double product (P < 0.01), and increased stroke volume (P < 0.05); these effects were not observed with body-tempered water. Drinking cold- and room-, but not body-tempered water, led to increased high frequency power of heart rate variability (P < 0.05) and baroreflex sensitivity (P < 0.05). Cold- and room-tempered water increased energy expenditure over 90 min by 2.9% (P < 0.05) and 2.3% (ns), respectively, accompanied by a diminished skin blood flux (P < 0.01), thereby suggesting that both small increases in heat production together with decreased heat loss contribute to warming up the ingested water to intra-abdominal temperature levels. Conclusions: Overall, ingestion of cold- and room-, but not body-tempered water reduced the workload on the heart through a reduction in heart rate and double product, which could be mediated by an augmented cardiac vagal tone. abstract_id: PUBMED:29681860 Cardiovascular and Metabolic Responses to the Ingestion of Caffeinated Herbal Tea: Drink It Hot or Cold? Aim: Tea is usually consumed at two temperatures (as hot tea or as iced tea). However, the importance of drink temperature on the cardiovascular system and on metabolism has not been thoroughly investigated. The purpose of this study was to compare the cardiovascular, metabolic and cutaneous responses to the ingestion of caffeinated herbal tea (Yerba Mate) at cold or hot temperature in healthy young subjects. We hypothesized that ingestion of cold tea induces a higher increase in energy expenditure than hot tea without eliciting any negative effects on the cardiovascular system. Methods: Cardiovascular, metabolic and cutaneous responses were analyzed in 23 healthy subjects (12 men and 11 women) sitting comfortably during a 30-min baseline and 90 min following the ingestion of 500 mL of an unsweetened Yerba Mate tea ingested over 5 min either at cold (~3°C) or hot (~55°C) temperature, according to a randomized cross-over design. Results: Averaged over the 90 min post-drink ingestion and compared to hot tea, cold tea induced (1) a decrease in heart rate (cold tea: -5 ± 1 beats.min-1; hot tea: -1 ± 1 beats.min-1, p < 0.05), double product, skin blood flow and hand temperature and (2) an increase in baroreflex sensitivity, fat oxidation and energy expenditure (cold tea: +8.3%; hot tea: +3.7%, p < 0.05).
Averaged over the 90 min post-drink ingestion, we observed no differences between tea temperatures in cardiac output, cardiac work, or mean blood pressure responses. Conclusion: Ingestion of an unsweetened caffeinated herbal tea at cold temperature induced a greater stimulation of thermogenesis and fat oxidation than hot tea while decreasing cardiac load, as suggested by the decrease in the double product. Further experiments are needed to evaluate the clinical impact of unsweetened caffeinated herbal tea at a cold temperature for weight control. abstract_id: PUBMED:35570674 Perceptions of tap water associated with low-income Michigan mothers' and young children's beverage intake. Objective: To quantify perceptions of tap water among low-income mothers with young children residing in Michigan and examine associations between perceptions of tap water, mothers' and young children's beverage intake, and mothers' infant feeding practices. Design: Cross-sectional study. Setting: Online survey. Participants: Medicaid-insured individuals who had given birth at a large Midwestern US hospital between fall 2016 and fall 2020 were invited by email to complete a survey in winter 2020 (N 3881); 15·6 % (N 606) completed eligibility screening, 550 (90·8 %) were eligible to participate, and 500 (90·9 %) provided valid survey data regarding perceptions of tap water, self and child beverage intake, and infant feeding practices. Results: Two-thirds (66·2 %) of mothers reported that their home tap water was safe to drink without a filter, while 21·6 % were unsure about the safety of their home tap water. Mothers' perceptions of their home tap water were associated with their own tap and bottled water intake and their young children's tap water and bottled water intake. Mothers with more negative perceptions of tap water in general, independent of their perceptions about their home tap water, consumed more bottled water and sugar-sweetened beverages, and their young children drank bottled water and fruit drinks more frequently. Few associations were observed between mothers' perceptions of tap water and infant feeding practices. Conclusions: Uncertainty about tap water safety and negative perceptions of tap water are common among low-income Michigan mothers. These beliefs may contribute to less healthful and more costly beverage intake among mothers and their young children. abstract_id: PUBMED:30319445 Early and Late Cardiovascular and Metabolic Responses to Mixed Wine: Effect of Drink Temperature. Aim: Red wine is usually ingested as an unmixed drink. However, mixtures of wine with juices and/or sucrose (mixed wine) are becoming more and more popular and could be ingested at either cold or hot temperature. Although the temperature effects on the cardiovascular system have been described for water and tea, with greater energy expenditure (EE) and lower cardiac workload with a colder drink, little information is available on the impact of temperature of alcoholic beverages on alcoholemia and cardiometabolic parameters. The purpose of the present study was to compare the acute cardiovascular and metabolic changes in response to mixed wine ingested at a cold or at a hot temperature. Methods: In a randomized crossover design, 14 healthy young adults (seven men and seven women) were assigned to cold or hot mixed wine ingestion.
Continuous cardiovascular, metabolic, and cutaneous monitoring was performed in a comfortable sitting position during a 30-min baseline and for 120 min after ingesting 400 ml of mixed wine, with the alcohol content adjusted to provide 0.4 g ethanol/kg of body weight and drunk at either cold (3°C) or hot (55°C) temperature. Breath alcohol concentration was measured intermittently throughout the study. Results: Overall, alcoholemia was not altered by drink temperature, with a tendency toward greater values in women compared to men. Early responses to mixed wine ingestion (0-20 min) indicated that the cold drink transiently increased mean blood pressure (BP) and cardiac vagal tone and decreased skin blood flow (SkBf), whereas the hot drink did not change BP, decreased vagal tone, and increased SkBf. Both cold and hot mixed wine led to increases in EE and reductions in respiratory quotient. Late responses (60-120 min) led to similar cardiovascular and metabolic changes at both drink temperatures. Conclusion: The magnitude and/or the directional change of most of the study variables differed during the first 20 min following ingestion and may be related to drink temperature. By contrast, late changes in cardiometabolic outcomes were similar between cold and hot wine ingestion, underscoring the typical effect of alcohol and sugar intake on the cardiovascular system. abstract_id: PUBMED:30617417 The effects of water temperature on gastric motility and energy intake in healthy young men. Purpose: Although immediate pre-meal water ingestion has been shown to reduce energy intake in healthy young men, no studies are available regarding potential mechanisms underlying the effect on energy intake in response to different temperatures of pre-meal water ingestion. This study examined the effects of consuming different temperatures of water on gastric motility and energy intake in healthy young men. Methods: Eleven young men completed three 1-day trials in a random order. Subjects visited the laboratory after a 10-h overnight fast and consumed 500 mL of water at 2 °C, 37 °C, or 60 °C in 5 min. Then, subjects sat on a chair for 1 h to measure the cross-sectional gastric antral area and gastric contractions using ultrasound imaging systems. Thereafter, subjects consumed a test meal until they felt completely full. Energy intake was calculated from the amount of food consumed. Results: Energy intake in the 2 °C (6.7 ± 1.8 MJ) trial was 19% and 26% lower than the 37 °C (7.9 ± 2.3 MJ, p = 0.039) and 60 °C (8.5 ± 3.2 MJ, p = 0.025) trials, respectively. The frequency of gastric contractions over the 1 h after consuming water was lower in the 2 °C trial than in the 60 °C trial (trial-time interaction, p = 0.020). The frequency of gastric contractions was positively related to energy intake (r = 0.365, p = 0.037). Conclusions: These findings demonstrate that consuming water at 2 °C reduces energy intake, and this reduction may be related to the modulation of gastric motility. abstract_id: PUBMED:34789317 Comparison of low-concentration carbon dioxide-enriched and tap water immersion on body temperature after passive heating. Background: Because carbon dioxide (CO2)-enriched water causes cutaneous vasodilation, immersion in CO2-enriched water facilitates heat transfer from the body to the water or from the water to the body. Consequently, immersion in CO2-enriched water raises or reduces body temperature faster than immersion in fresh water.
However, it takes time to dissolve CO2 in tap water, and because the dissolved CO2 concentration decreases over time, the actual CO2 concentration is likely lower than the stated target concentration. It is also unclear whether water containing a lower CO2 concentration would cool the body faster than fresh water after body temperature had been increased. Methods: Ten healthy males (mean age = 20 ± 1 years) participated in the study. Participants were first immersed for 15 min in a tap water bath at 40 °C to raise body temperature. They then moved to a tap water or CO2-enriched water bath at 30 °C to reduce body temperature. The CO2 concentration was set at 500 ppm. The present study measured cooling time and cooling rate (slope of the regression line relating auditory canal temperature (Tac) to cooling time) to assess the cooling effect of CO2-enriched water immersion. Results: Immersion in 40 °C tap water caused Tac to rise 0.64 ± 0.25 °C in the tap water session and 0.62 ± 0.27 °C in the CO2-enriched water session (P > 0.05). During the 30 °C water immersion, Tac declined to the baseline within 13 ± 6 min in tap water and 10 ± 6 min in CO2-enriched water (P > 0.05). Cooling rates were 0.08 ± 0.06 °C/min in tap water and 0.08 ± 0.04 °C/min in CO2-enriched water (P > 0.05). Conclusions: CO2-enriched water containing 500 ppm CO2 did not cool faster than tap water immersion. This suggests that when the water temperature is 30 °C, a CO2 concentration of 500 ppm is insufficient to obtain the advantageous cooling effect during water immersion after body temperature has been increased. abstract_id: PUBMED:33548702 A year-long cyclic pattern of dissolved organic matter in the tap water of a metropolitan city revealed by fluorescence spectroscopy. Delivering drinking water with stable quality in metropolitan cities is a big challenge. This study investigated the year-long dynamics of dissolved organic matter (DOM) in the tap water and source water of a metropolitan city in southern China using fluorescence spectroscopy. The DOM detected in the tap water and source water of Shenzhen city was season- and location-dependent. A year-long cyclic trend of DOM was found, with predominant protein-like fluorescence in the dry season compared to the humic-like enriched DOM in the wet season. A general DOM pattern was estimated by measuring the shift in dominant fluorescence regions on the excitation-emission matrix (EEM). The difference in fluorescent DOM (FDOM) composition (in terms of the ratio of protein-like to humic-like fluorescence) was above 200% between wet and dry seasons. The taps associated with reservoirs receiving water from the eastern tributary of the Dongjiang River showed greater changes in protein-like content than the taps with source water originating from the western part of the river. This study highlights the importance of optimizing drinking water treatment plants' operational conditions after considering seasonal changes and source water characteristics. abstract_id: PUBMED:31143736 Parental perception of fluoridated tap water. Purpose: The purpose of this study was to investigate parental knowledge and preference of tap water in a country where faucet water is fluoridated according to international standards and where the average percentage of dental caries in young children reaches up to approximately 73%.
Materials And Methods: A prospective cross-sectional study was conducted at Hamad Medical Corporation, the only tertiary care and academic hospital in the state of Qatar. Parents of children older than 1 year of age were offered an interview survey. Results: A total of 200 questionnaires were completed (response rate = 100%). The mean age of participant children was 6 ± 4 years. One of the main findings in our study was that primary care physicians never discussed the topic of the best water choice for children in our community, as expressed by more than 86% of parents. More than two-thirds of parents used bottled water. The main reasons why parents did not allow their children to drink tap water were taste (8.94%), smell (9.76%), concerns about toxin content (32.52%), and concerns that tap water might cause unspecified sickness (52.03%). After participants were told that our tap water is safe and that fluoride can prevent dental caries, 33% of parents said they would use tap water due to its fluoride content. The study also showed that 65% of parents would allow their children to drink tap water if it is free from any toxic ingredients. Conclusion: Actions to augment fluoridated water acceptability in the developing world, such as focusing on safety and benefits, could be important in the disseminated implementation of the use of faucet water. Ultimately, a slump in the prevalence of dental caries among children will depend on the ability of pediatricians and dental professionals to institute an evidence-based, preventive approach that can benefit oral health in childhood. These data will also allow us to propose the safe use of tap water in young children in the state of Qatar while simultaneously advocating awareness of oral health. abstract_id: PUBMED:19963769 Spontaneous variability analysis for characterizing cardiovascular responses to water ingestion. This paper examines the effect of water ingestion on the cardiovascular system, utilizing advanced fluctuation analysis. The ingestion of water has been known to significantly raise blood pressure in subjects with autonomic disorders, with the effect of preventing syncope occurrences. For precise characterization of the effect of water ingestion, head-up tilt experiments at 80 degrees were conducted in fourteen healthy subjects, ranging in age from 16 to 24. Systolic/diastolic blood pressures (sBP/dBP), total peripheral resistance index (TPRI), and ECG RR intervals (RRIs) were measured for thirty minutes before and after isotonic water ingestion of 340 ml. sBP (2.8%), dBP (3.6%), and TPRI (5.3%) showed statistically significant increases after the water ingestion. RRIs also tended to increase (2.3%), although the change was not statistically significant. The data analysis confirmed that water ingestion of 340 ml has an acute protective effect against syncope occurrences, mainly due to the increase in TPRI. Then, heart rate (HR) spectral analysis with the derivative of the cubic spline interpolation (DCSI) method and a closed-loop system identification technique, which associates fluctuations in sBP and HR, are utilized for further precise characterization of the change in the recorded physiologic quantities. abstract_id: PUBMED:35964802 Ingestion of carbonated water increases middle cerebral artery blood velocity and improves mood states in resting humans exposed to ambient heat stress. Sugar-free carbonated water is consumed worldwide.
The consumption of carbonated water is high in summer, when the heat loss responses of sweating and skin vasodilation are activated, and thermal perceptions (thermal sensation and comfort) and mood states are negatively modulated. However, whether ingesting carbonated water under ambient heat exposure modulates the cerebral blood flow index, heat loss responses, thermal perceptions, and mood states remains to be determined. In this study, 17 healthy, habitually active young adults (eight women) ingested 4 °C noncarbonated or carbonated water under 37 °C ambient heat-stressed resting conditions. Both drinks increased the middle cerebral artery mean blood velocity, an index of cerebral blood flow, and mean arterial pressure, with carbonated water exhibiting higher elevations than noncarbonated water (P < 0.05). However, the heart rate, sweat rate, and skin blood flow during and after drinking remained unchanged between the two conditions (P > 0.05). Thermal sensation and comfort after drinking also remained unchanged between the two conditions (P > 0.05), but the drink-induced reduction in sleepiness and the drink-induced elevations in motivation and exhilaration were greater after ingesting carbonated water than after ingesting noncarbonated water (P < 0.05). The analyses suggest that in humans under ambient heat-stressed resting conditions, ingestion of cold carbonated water increases the cerebral blood flow index, blood pressure, motivation, and exhilaration, whereas it decreases sleepiness relative to ingestion of noncarbonated cold water. However, ingestion of cold carbonated water fails to modulate thermoregulatory responses and thermal perception as opposed to noncarbonated cold water. Answer: Yes, the water temperature does matter when it comes to cardiovascular and metabolic responses to tap water ingestion in young humans. Studies have shown that the ingestion of cold and room-tempered water, as opposed to body-tempered water, can lead to a variety of physiological changes. Ingestion of cold (3°C) and room-tempered (22°C) water has been associated with decreased heart rate and double product, as well as increased stroke volume, high-frequency power of heart rate variability, and baroreflex sensitivity. These effects were not observed with body-tempered water (37°C). Additionally, cold and room-tempered water increased energy expenditure over 90 minutes, suggesting that both small increases in heat production and decreased heat loss contribute to warming up the ingested water to intra-abdominal temperature levels (PUBMED:24684853). Furthermore, the ingestion of cold caffeinated herbal tea (Yerba Mate) led to a decrease in heart rate, double product, skin blood flow, and hand temperature, while increasing baroreflex sensitivity, fat oxidation, and energy expenditure compared to hot tea. This indicates that cold tea induces a higher increase in energy expenditure than hot tea without eliciting negative effects on the cardiovascular system (PUBMED:29681860). Another study found that consuming water at 2°C reduced energy intake in healthy young men, which may be related to the modulation of gastric motility (PUBMED:30617417). In summary, the temperature of ingested water does have significant effects on cardiovascular and metabolic responses in young humans, with colder temperatures generally leading to increased energy expenditure, changes in heart rate variability, and potentially reduced energy intake.
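As a rough illustration of the energy-expenditure point made above, the cost of warming an ingested drink to body temperature can be estimated from the specific heat of water. The sketch below is a back-of-envelope calculation only; the volume and temperatures echo the studies cited, but the function name and constants are illustrative and not taken from any of the abstracts.

```python
# Back-of-envelope estimate of the energy needed to warm an ingested drink
# to body temperature. Assumes water's specific heat c = 4.186 J/(g*degC)
# and density ~1 g/mL; 1 kcal = 4.186 kJ. Illustrative only.

def warming_cost_kcal(volume_ml, drink_temp_c, body_temp_c=37.0):
    """Energy (kcal) required to bring a drink up to body temperature."""
    joules = volume_ml * 4.186 * (body_temp_c - drink_temp_c)
    return joules / 4186.0  # J -> kcal

for temp_c in (2, 22, 37):
    kcal = warming_cost_kcal(500, temp_c)
    print(f"500 mL at {temp_c:>2} degC -> ~{kcal:4.1f} kcal to reach 37 degC")
```

At 2°C this comes to roughly 17-18 kcal per 500 mL, which is consistent in scale with the modest, transient rise in energy expenditure reported after cold-drink ingestion.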
Instruction: Diminished perception of ambient light: a symptom of clinical depression? Abstracts: abstract_id: PUBMED:11099749 Diminished perception of ambient light: a symptom of clinical depression? Objective: In a non-randomized, uncontrolled pilot study, the authors investigated whether depressed patients were more likely to perceive the lighting in their environment as being dimmer than usual. Method: 120 patients (46 males, 74 females) who presented for possible admission for depression at a psychiatric facility were administered a Diagnostic and Statistical Manual of Mental Disorders (DSM-IV)-based questionnaire and underwent psychiatric evaluation. A question asking whether 'the lights in my surroundings seem dimmer than usual' was included in the 15-question survey. Statistical analyses were performed to determine whether an affirmative response to this dimness question was correlated with the depth of depression (mild, moderate, severe) and also whether a significant correlation was present between the percentage of patients answering yes to the dimness question and the number of yes responses to the core symptoms of depression. Results: Two-thirds of the patients categorized as severely depressed responded that their ambient environment appeared dimmer than usual, compared to 21% of moderately and 14% of mildly depressed patients. This difference was statistically significant (P < 0.05). The degree of depression as determined by the number of core questions answered affirmatively and the presence of this 'dimness' symptom were highly correlated (P = 0.002, R = 0.87). Limitations: The specificity of the finding has not been tested in reference to non-affective psychiatric patient groups. Conclusion: A patient's perception of the ambient light in the environment being dimmer than usual may be an important symptom of a major depressive disorder. Further replication and objective testing of visual function in depressed patients appear warranted. abstract_id: PUBMED:27194618 Influence of Ambient Odors on Time Perception in a Retrospective Paradigm. Environmental stimuli, including sensory stimulations, can influence time perception. Among them, odors are known to modulate emotion, attention, behavior, or performance, but few studies have investigated the possible effects of ambient odors on time perception. Thus, the present study aimed to compare, in a retrospective paradigm, time estimation in three conditions: with phenyl ethyl alcohol as a pleasant odor, pyridine as an unpleasant odor, and a control condition without ambient odor. A total of 90 participants (M age = 23 years, 10 months) took part in three different tasks, i.e., an aesthetic classification task, a sensorimotor checking task, and a mathematical operations task. Results showed better accuracy of time estimation in the odor conditions, (1) independently of the characteristics of the odorants and (2) limited to tasks with low cognitive involvement. These findings are discussed in relation to the possible role of attention and arousal in the modulation of time perception by ambient odors. abstract_id: PUBMED:18055020 Diminished perception of light as a symptom of depression: further studies. Background: In a previous preliminary report, the perception of a decrease in ambient light intensity appeared to be correlated with depression. We prospectively studied this potential link in a controlled study.
Methods: The question, "I've noticed that the lights in my surroundings seem dimmer than usual", was added to the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire and administered prospectively to 213 subjects 50-80 years of age participating in the Age-Related Eye Disease Study (AREDS). All had visual acuity of 20/32 or better in at least one eye. Main outcome measures were the relationship between the dimness question answer and severity of depression, and the likelihood that patients reporting dimness were depressed. Results: Subjects endorsing their surroundings as being dimmer than usual at least some of the time had a mean CES-D score of 10.6 (SD = 7.0) compared to a mean of 5.5 (SD = 5.4) for subjects who never noted dimness (t = -4.22, p = .0001). Depressed individuals (CES-D ≥ 16) were significantly more likely to report dimness than non-depressed (CES-D < 16) subjects (χ² = 15.6, p < 0.0001). The total CES-D score and the degree of reported dimness (0-3) were significantly associated (r = 0.31, p < .0001). Using a stepwise regression analysis, subjects who reported any dimness were more likely to be depressed. Limitations: A relatively small number of subjects, 38 (18%), reported dimness, requiring us to dichotomize their dimness level in some analyses. Conclusions: Perceived dimness of one's ambient surroundings and clinical depression are linked. Health care professionals should inquire about this symptom in potentially depressed patients. abstract_id: PUBMED:36498345 Perception of Current Educational Environment, Clinical Competency, and Depression among Malaysian Medical Students in Clinical Clerkship: A Cross-Sectional Study. The COVID-19 pandemic has altered the educational environment of medical students in clinical clerkship, with potential impacts on clinical competency and reported increased prevalence of depression. This study aimed to determine the relationship between the perception of the educational environment, self-perceived clinical competency, and depression among them. Subjects (N = 196) at the National University of Malaysia participated through convenience sampling in an online survey including sociodemographic data, COVID-19-related stressors, the Dundee Ready Education Environment Measure (DREEM), self-perceived clinical competency, and the Patient Health Questionnaire (PHQ-9). The cut-off point for depression was a PHQ-9 score ≥ 15. Multiple logistic regression followed bivariate analyses to identify factors for depression. The participants (mean age: 23.2 years, SD = 0.98 years) were mainly female (71.9%) and Malay (59.2%). The prevalence of depression was 17.4% (95% CI: 12.3-23.4%). Most participants perceived the educational environment positively. In logistic regression, ethnicity (Adjusted OR = 3.1, 95% CI: 1.2-8.1) and DREEM score were significantly associated with depression, whereas self-perceived clinical competency was not. A higher DREEM score indicating a better perception of the educational environment was linked to a lower likelihood of depression (p = 0.046). Besides ethnicity, perception of the educational environment emerged as a factor associated with depression. This relationship between the educational environment and mental well-being warrants further exploration. abstract_id: PUBMED:31894712 Association of depression and obesity is mediated by weight perception.
This study investigates whether the association between obesity and depression is mediated by the perception of body weight and verifies the combined effect of being obese and having a self-perception of being fat on depression in a population-based sample of 1238 individuals. Weight perception mediated the association between depression and obesity in 39.3 percent of participants. In stratified analysis, mediation occurred in the following groups: non-single, those with more schooling, non-alcohol abusers, non-smokers, and those who did not engage in physical activity. Being obese and having a self-perception of being fat produced a potentiating effect, significantly increasing the likelihood of depression. abstract_id: PUBMED:35935461 The Relationship Between Corruption Perception and Depression: A Multiple Mediation Model. Background: Corruption perception is an important risk factor for depression. On the psychological level, corruption perception causes negative emotions in individuals. On the physiological level, higher corruption perception may mean a more unfair social environment, which is not conducive to individuals' health. However, the mechanism linking corruption perception and depression has not been fully understood. Objective: To investigate how corruption perception affects depression, this study used trust in government and online news consumption as mediators to construct a multiple mediation model. Methods: The data used in this study were derived from the 2016 and 2018 waves of the China Family Panel Studies (CFPS). After eliminating samples with missing values, this study finally included 7845 samples. This study used Stata version 16.0 and a longitudinal research design to investigate the relationship between corruption perception and depression. Results: The results revealed that an increase in corruption perception could aggravate depression (β = 0.037, p < 0.05). Meanwhile, trust in government partially mediated the effect of corruption perception on depression (indirect effect = 0.030, p < 0.001). Notably, online news consumption partially masked the effect of corruption perception on depression (indirect effect = -0.003, p < 0.01). Conclusion: Trust in government and online news consumption may be two important mediators between corruption perception and depression. More attention should be paid to the relationship between corruption perception and depression, and mental health promotion interventions could be tailored to alleviate depression in the future. abstract_id: PUBMED:34204130 Effects of Environmental Quality Perception on Depression: Subjective Social Class as a Mediator. Although the relationship between environment and public depression has aroused heated debate, the empirical research on the relationship between environmental quality perception and public depression is still relatively insufficient. This paper aims to explore the influence of environmental quality perception on public depression and the mediating role of subjective social class between environmental quality perception and public depression. Using the China Family Panel Studies data of 2016 for empirical analysis, this study's results show that environmental quality perception has a significant effect on public depression, and subjective social class also has a significant effect on public depression.
In addition, we found that subjective social class can play a partial mediating role between environmental quality perception and public depression, and that the mediating effect comes only from the contribution of the perception of living environment quality, not the perception of overall environmental quality. That is to say, the perception of living environment quality deeply affects subjective social class, which in turn induces public depression. In order to alleviate the relationship between environmental quality and public depression, it is recommended that the state environmental protection and civil affairs departments strengthen the improvement of the public living environment so as to promote individual subjective social class and reduce the risk of public depression. Moreover, it is suggested that research with longitudinal designs and comprehensive indicators be undertaken in the future. abstract_id: PUBMED:29495214 A review on the research progress related to ambient air pollution and depression. It is reported that depression has caused a heavy disease burden across the world, with a possible association between ambient air pollution and depressive symptoms. In this paper, we reviewed the relevant literature in this field and summarized the research evidence on the association between ambient air pollution and depression, both in China and abroad, and found that the results of existing studies were inconsistent, with most studies showing a positive correlation between exposure to air pollution and depression, but a few studies showing a negative correlation or no correlation between the two. abstract_id: PUBMED:32090767 The accuracy of depression risk perception in high risk Canadians. Background: Prevention and early detection of depression is a top public health priority. Accurate perception of depression risk may play an important role in health behavior change and prevention of depression. However, the way in which people in the community perceive their risk of developing depression is currently unknown. Methods: We analyzed the baseline data from a randomized controlled trial in 358 men and 356 women who are at high risk of having a major depressive episode (MDE). The predicted risk was assessed by sex-specific multivariable risk predictive algorithms for MDE. We compared participants' perceived risk and their predicted risk. Accurate risk perception was defined as perceived risk within the range of predicted risk ± 10%. Results: In men, 29.7% perceived their risk accurately; 47.5% overestimated their risk; 22.8% underestimated their risk. In women, the proportions were 21.7%, 59.6%, and 18.7%, respectively. Compared to men, women were more likely to overestimate their risk and less likely to be accurate. Regression modeling revealed that poor self-rated health and higher predicted depression risk were associated with inaccuracy of risk perception in men; a family history of MDE, higher psychological distress, and lower predicted risk were associated with inaccuracy of risk perception in women. Conclusions: Individuals who are at high risk of developing depression tend to overestimate their risk, especially women. Inaccurate depression risk perception is related to people's health status. Educational interventions are needed to enhance the accuracy of risk perception to encourage positive behavior change and uptake of preventive strategies.
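The accuracy definition used in PUBMED:32090767 is easy to make concrete. The sketch below classifies a perceived risk against a model-predicted risk using the ± 10% band described in the abstract (read here as percentage points); the function name and example values are illustrative, not taken from the study data.

```python
# Classification rule from PUBMED:32090767: perception is "accurate" when it
# falls within predicted risk +/- 10% (interpreted here as percentage points).
# Example values are hypothetical.

def classify_risk_perception(perceived, predicted, band=0.10):
    """Label a perceived risk as accurate, an overestimate, or an underestimate."""
    if perceived > predicted + band:
        return "overestimate"
    if perceived < predicted - band:
        return "underestimate"
    return "accurate"

print(classify_risk_perception(perceived=0.40, predicted=0.15))  # overestimate
print(classify_risk_perception(perceived=0.20, predicted=0.18))  # accurate
```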
abstract_id: PUBMED:36058137 Time perception at resting state and during active motion: The role of anxiety and depression. Background: Time perception and motion intensity are interrelated factors that may influence symptom expression and severity in case of various psychiatric conditions, including anxiety and depression. Aims: The present study aimed to 1) explore the associations between the intensity of physical activity, time perception, impulsivity, anxiety and depressive symptoms, and to 2) investigate the extent to which resting state motion intensity can be used to identify the assessed psychiatric conditions. Methods: 20 healthy controls and 20 psychiatric patients (with either anxiety or depression-related diagnoses) were included in the study and filled out a questionnaire consisting of validated anxiety, depression and impulsivity measures. Time perception was measured by a computerized time production task, whereas motion intensity was analyzed by a motion capture and analysis software. Respondents were randomly assigned to an experimental (with active motion task) and non-experimental group (resting state conditions). Both subgroups were repeatedly assessed, in order to explore changes in motion intensity, time perception and psychiatric symptom levels. Results: Random forest regression analysis identified the level of impulsivity, depression and anxiety as the strongest predictors of resting state motion intensity, while a path analysis model indicated that controls and psychiatric patients show different pathways regarding the connection between motion intensity changes, time production ratio alterations and symptom reduction. Conclusions: Our study implies the importance of distinguishing between clinical and subclinical severity of psychiatric symptoms when considering the association between motion intensity, time perception, anxiety and depression. Potential transdiagnostic relevance of resting state motion intensity is also addressed. Answer: The perception of ambient light being dimmer than usual may indeed be a symptom of clinical depression. In a pilot study, it was found that two-thirds of patients categorized as severely depressed responded that their ambient environment appeared dimmer than usual compared to only 21% of moderately and 14% of mildly depressed patients. This difference was statistically significant, and there was a high correlation between the degree of depression and the presence of this 'dimness' symptom (PUBMED:11099749). Further studies have supported this finding, showing that subjects who reported their surroundings as being dimmer than usual had significantly higher scores on the Center for Epidemiologic Studies Depression Scale (CES-D) compared to those who did not report dimness. Depressed individuals were also significantly more likely to report dimness than non-depressed subjects (PUBMED:18055020). These findings suggest that health care professionals should inquire about this symptom in potentially depressed patients, as perceived dimness of one's ambient surroundings and clinical depression appear to be linked.
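As a brief methodological aside before turning to the follow-up studies: the severity-by-dimness pattern in that pilot study is the kind of result that would be checked with a chi-square test of independence. The sketch below shows such a test; the counts are hypothetical stand-ins invented only to mirror the reported percentages (about 67% vs 21% vs 14% endorsing dimness), since the paper itself reports percentages rather than a full table.

```python
# Illustrative chi-square test of a dimness-by-severity table. Counts are
# hypothetical, chosen only to mirror the percentages in PUBMED:11099749.
from scipy.stats import chi2_contingency

#             dim  not dim
table = [[20, 10],   # severe   (~67% endorsing dimness)
         [ 8, 30],   # moderate (~21%)
         [ 7, 45]]   # mild     (~13%)

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
```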
Instruction: Tuberculosis epidemics driven by HIV: is prevention better than cure? Abstracts: abstract_id: PUBMED:37217850 The estimation of long and short term survival time and associated factors of HIV patients using mixture cure rate models. Background: HIV is one of the deadliest epidemics and one of the most critical global public health issues. Some are susceptible to die among people living with HIV and some survive longer. The aim of the present study is to use mixture cure models to estimate factors affecting short- and long-term survival of HIV patients. Methods: The total sample size was 2170 HIV-infected people referred to the disease counseling centers in Kermanshah Province, in the west of Iran, from 1998 to 2019. A Semiparametric PH mixture cure model and a mixture cure frailty model were fitted to the data. Also, a comparison between these two models was performed. Results: Based on the results of the mixture cure frailty model, antiretroviral therapy, tuberculosis infection, history of imprisonment, and mode of HIV transmission influenced short-term survival time (p-value &lt; 0.05). On the other hand, prison history, antiretroviral therapy, mode of HIV transmission, age, marital status, gender, and education were significantly associated with long-term survival (p-value &lt; 0.05). The concordance criteria (K-index) value for the mixture cure frailty model was 0.65 whereas for the semiparametric PH mixture cure model was 0.62. Conclusion: This study showed that the frailty mixture cure models is more suitable in the situation where the studied population consisted of two groups, susceptible and non-susceptible to the event of death. The people with a prison history, who received ART treatment, and contracted HIV through injection drug users survive longer. Health professionals should pay more attention to these findings in HIV prevention and treatment. abstract_id: PUBMED:14600522 Tuberculosis epidemics driven by HIV: is prevention better than cure? Objective: To compare the benefits of tuberculosis (TB) treatment with TB and HIV prevention for the control of TB in regions with high HIV prevalence. Design And Methods: A compartmental difference equation model of TB and HIV has been developed and fitted to time series and other published data using Bayesian methods. The model is used to compare the effectiveness of TB chemotherapy with three strategies for prevention: highly active antiretroviral therapy (HAART), the treatment of latent TB infection (TLTI) and the reduction of HIV transmission. Results: Even where the prevalence of HIV infection is high, finding and curing active TB is the most effective way to minimize the number of TB cases and deaths over the next 10 years. HAART can be as effective, but only with very high levels of coverage and compliance. TLTI is comparatively ineffective over all time scales. Reducing HIV incidence is relatively ineffective in preventing TB and TB deaths over 10 years but is much more effective over 20 years. Conclusions: In countries where the spread of HIV has led to a substantial increase in the incidence of TB, TB control programmes should maintain a strong emphasis on the treatment of active TB. To ensure effective control of TB in the longer term, methods of TB prevention should be carried out in addition to, but not as a substitute for, treating active cases. abstract_id: PUBMED:25069354 Nurses in the face of the Great War's epidemics In 1914, nurses were still considered as volunteers. 
By 1918, given more efficient training, they had acquired legitimacy among the French public. Their skills and their professionalism were appreciated and recognised, notably thanks to the crucial role they played in the fight against the tuberculosis and Spanish flu epidemics. abstract_id: PUBMED:20343357 The principles of a cure for tuberculosis patients. N/A abstract_id: PUBMED:38235116 Stevens' Cure (Umckaloabo): the vindication of a patent medicine. Stevens' Cure (Umckaloabo) emerged as a patent medicine claiming to treat tuberculosis in the United Kingdom at the beginning of the 20th century. However, due to its identity being shrouded in secrecy, it was never truly accepted by the medical community. It was "rediscovered" in the 1970s and subsequently developed into a very popular and successful phytopharmaceutical for the treatment of upper respiratory tract infections. Whether Stevens' Cure contained the same ingredient(s) as the modern Umckaloabo has not yet been demonstrated. We attempted to elucidate for the first time the identity of the original ingredient by comparative analysis of historical product samples. Three historical samples of Stevens' Cure were compared with Pelargonium sidoides DC. and P. reniforme Curt. root by UPLC-MS analysis. We confirm that the ingredient (P. sidoides DC.) is indeed the same as that used in modern phytotherapy. We also attribute the first ethnopharmacological record of P. sidoides DC. being used for the treatment of tuberculosis to C. H. Stevens, the "creator" of Umckaloabo. abstract_id: PUBMED:36964130 Transmission modeling to infer tuberculosis incidence, prevalence, and mortality in settings with generalized HIV epidemics. Tuberculosis (TB) killed more people globally than any other single pathogen over the past decade. Where surveillance is weak, estimating TB burden relies on modeling. In many African countries, increases in HIV prevalence and antiretroviral therapy have driven dynamic TB epidemics, complicating estimation of burden, trends, and potential intervention impact. We therefore develop a novel age-structured TB transmission model incorporating evolving demographic, HIV, and antiretroviral therapy effects, and calibrate it to TB prevalence and notification data from 12 African countries. We use Bayesian methods to include uncertainty for all TB model parameters, and estimate age-specific annual risks of TB infection, finding up to 16.0%/year in adults, and the proportion of TB incidence from recent (re)infection, finding a mean across countries of 34%. Rapid reduction of the unacceptably high burden of TB in high HIV prevalence settings will require interventions addressing progression as well as transmission. abstract_id: PUBMED:22083439 A worldwide investigation of tuberculosis epidemics. We analyse the tuberculosis (TB) epidemics of 211 countries with a view to proposing more efficient and targeted TB control strategies. Countries are classified by how their TB case notification rates have evolved over time and the age distribution of those suffering from active TB disease in 2008. Further analysis of key statistics associated with each of the countries shows the impact of different indicators. As expected, HIV is a key driver of TB epidemics and affects their age distribution and their scale. The level of development of a country and its wealth also vary with the shape and scale of a country's TB epidemic.
Immigration has an influence on the shape of TB epidemics, which is particularly pronounced in highly developed countries with low levels of TB disease in the native population. We conclude by proposing how the TB control programme in each country analysed should prioritise its efforts. abstract_id: PUBMED:33374751 Bayesian Spatial Survival Analysis of Duration to Cure among New Smear-Positive Pulmonary Tuberculosis (PTB) Patients in Iran, during 2011-2018. Mycobacterium tuberculosis is the causative agent of tuberculosis (TB), and pulmonary TB is the most prevalent form of the disease worldwide. One of the most concrete actions to ensure an effective TB control program is monitoring TB treatment outcomes, particularly duration to cure; however, there is no strong evidence in this respect. Thus, the primary aim of this study was to examine possible spatial variations in duration to cure and its associated factors in Iran using a Bayesian spatial survival model. All new smear-positive PTB patients diagnosed from March 2011 to March 2018 were included in the study. Out of 34,744 patients, 27,752 (79.90%) were cured and 6992 (20.10%) cases were censored. For inferential purposes, Markov chain Monte Carlo algorithms were applied in a Bayesian framework. According to the Bayesian estimates of the regression parameters in the proposed model, a Bayesian spatial log-logistic model, the variables gender (male vs. female, TR = 1.09), altitude (>750 m vs. ≤750 m, TR = 1.05), bacilli density in the initial smear (3+ and 2+ vs. 1-9 bacilli & 1+, TR = 1.09 and TR = 1.02, respectively), delayed diagnosis (>3 months vs. <1 month, TR = 1.02), nationality (Iranian vs. other, TR = 1.02), and location (urban vs. rural, TR = 1.02) had a significant influence on prolonging the duration to cure. Conversely, pretreatment weight (TR = 0.99) was substantially associated with a shorter duration to cure. In summary, the spatial log-logistic model with convolution prior showed better performance in analyzing the duration to cure of PTB patients. Also, our results provide valuable information on critical determinants of duration to cure. Prolonged duration to cure was observed in provinces with low TB incidence and high average altitude as well. Accordingly, it is essential to pay special attention to such provinces and monitor them carefully to reduce the duration to cure while maintaining a focus on high-risk provinces in terms of TB prevalence. abstract_id: PUBMED:26835156 Tuberculosis and Cardiovascular Disease: Linking the Epidemics. The burden of tuberculosis and cardiovascular disease (CVD) is enormous worldwide. CVD rates are rapidly increasing in low- and middle-income countries. Public health programs have been challenged with the overlapping tuberculosis and CVD epidemics. Monocytes/macrophages, lymphocytes, and cytokines involved in cell-mediated immune responses against Mycobacterium tuberculosis are also main drivers of atherogenesis, suggesting a potential pathogenic role of tuberculosis in CVD via mechanisms that have been described for other pathogens that establish chronic infection and latency. Studies have shown a pro-atherogenic effect of antibody-mediated responses against mycobacterial heat shock protein-65 through cross-reaction with self-antigens in human vessels. Furthermore, subsets of mycobacteria actively replicate during latent tuberculosis infection (LTBI), and recent studies suggest that LTBI is associated with persistent chronic inflammation that may lead to CVD.
Recent epidemiologic work has shown that the risk of CVD in persons who develop tuberculosis is higher than in persons without a history of tuberculosis, even several years after recovery from tuberculosis. Together, these data suggest that tuberculosis may play a role in the pathogenesis of CVD. Further research to investigate a potential link between tuberculosis and CVD is warranted. abstract_id: PUBMED:36258908 Evaluations of Factors Affecting Short- and Long-Time Occurrence of Disease Relapse in Patients with Tuberculosis Using Parametric Mixture Cure Model: A Cohort Study. Background: The success of treatment strategies to control disease relapse requires determining the factors affecting short- and long-term occurrence of relapse. Therefore, this study aimed to identify the factors affecting short- and long-term occurrence of disease relapse in patients with tuberculosis (TB) using a parametric mixture cure model. Materials And Methods: In this historical cohort study, data were collected from 4564 patients with TB who were referred to the Tuberculosis and Lung Diseases Research Center of Dr. Masih Daneshvari Hospital from 2005 to 2015. In order to evaluate the factors affecting short- and long-term occurrence of disease relapse, a parametric mixture cure model was used.
Instruction: A prospective randomized comparison of unsedated ultrathin versus standard esophagogastroduodenoscopy in routine outpatient gastroenterology practice: does it work better through the nose? Abstracts: abstract_id: PUBMED:12929058 A prospective randomized comparison of unsedated ultrathin versus standard esophagogastroduodenoscopy in routine outpatient gastroenterology practice: does it work better through the nose? Background And Study Aims: In an outpatient gastroenterological practice setting, highly effective diagnostic procedures and patient satisfaction play an important role. Ultrathin endoscopy in unsedated patients has been shown to be more cost-effective and time-efficient in comparison with standard endoscopy. A prospective randomized study was carried out in unsedated patients to compare performance, feasibility, safety, and patient tolerance between ultrathin transnasal (UT), ultrathin oral (UO), and standard (SO) esophagogastroduodenoscopy (EGD). Patients And Methods: A total of 200 of 600 eligible patients consented to participate in the study, and were randomly assigned to undergo UT, UO, or SO. Patients reported their tolerance of the procedure (anxiety, pain, gagging, and overall satisfaction; Likert scale 1-10), and the endoscopists reported the effectiveness of the procedure (handling, picture quality, and overall performance; Likert scale 1-10). Statistics were calculated using the Kruskal-Wallis test. Results: After randomization, 65, 67, and 68 patients were allocated to the UT, UO, and SO groups, respectively. Failure to achieve complete EGD by the intended route occurred in 14 patients (22 %) in the UT group. Compared to the SO group, patients in the UT and UO groups rated anxiety before the procedure as being more intense - median score (10 % quantile estimate; 90 % quantile estimate): UT, 2.0 (1.0; 4.0); UO, 2.0 (1.0; 4.0); SO, 0.0 (0.0; 2.0); p &lt; 0.0001), whereas SO patients experienced a higher level of anxiety during the procedure ( P &lt; 0.0001). Pain during insertion of the endoscope was the least intense in the UO group: UT, 2.0 (1.0; 5.0); UO, 1.0 (1.0; 3.0); SO, 2.0 (1.0; 4.0); P &lt; 0.001). Gagging during insertion was more pronounced in the UO group: UT, 2.0 (1.0; 4.0); UO, 3.0 (1.0; 7.0); SO, 2.0 (1.0; 5.0); P &lt; 0.01). The patients' score for the overall assessment was better in the SO group ( P &lt; 0.0001). The endoscopists' overall assessment for ultrathin EGD was poorer than for standard EGD: UT, 3.0 (2.0; 5.0); UO, 3.0 (2.0; 5.0); SO, 2.0 (1.0; 3.0); P &lt; 0.0001). Conclusions: Ultrathin endoscopy through both the transnasal and oral routes has limited use in routine outpatient practice. Techniques for reducing pain and gagging may improve patient tolerance. Further technical improvements are needed to allow routine implementation. abstract_id: PUBMED:23858379 Ultrathin endoscope flexibility can predict discomfort associated with unsedated transnasal esophagogastroduodenoscopy. Aim: To evaluate the effects of choice of insertion route and ultrathin endoscope types. Methods: This prospective study (January-June 2012) included 882 consecutive patients who underwent annual health checkups. Transnasal esophagogastroduodenoscopy (EGD) was performed in 503 patients and transoral EGD in 235 patients using six types of ultrathin endoscopes. Patients were given a choice of insertion route, either transoral or transnasal, prior to EGD examination. 
For transoral insertion, the endoscope was equipped with a thin-type mouthpiece and tongue depressor. Conscious sedation was not used for any patient. EGD-associated discomfort was assessed using a visual analog scale (VAS; no discomfort 0 to maximum discomfort 10). Results: Rates of preference for transnasal insertion were significantly higher in male (male/female 299/204 vs 118/117) and younger patients (56.8 ± 11.2 years vs 61.3 ± 13.0 years), although no significant difference was found in VAS scores between transoral and transnasal insertion (3.9 ± 2.3 vs 4.1 ± 2.5). Multivariate analysis revealed that gender, age, operator, and endoscope were independent significant predictors of VAS for transnasal insertion, although gender, age, and endoscope were those for transoral insertion. Further analysis revealed only the endoscopic flexibility index (EFI) as an independent significant predictor of VAS for transnasal insertion. Both EFI and tip diameter were independent significant predictors of VAS for transoral insertion. Conclusion: Flexibility of ultrathin endoscopes can be a predictor of EGD-associated discomfort, especially in transnasal insertion. abstract_id: PUBMED:19691797 Prospective comparison between sedated high-definition oral and unsedated ultrathin transnasal esophagogastroduodenoscopy in the same subjects: pilot study. Background: Recently, quality as well as acceptability has been a concern regarding endoscopy. The aim of the present study was to compare the acceptability and quality of sedated high-definition esophagogastroduodenoscopy (sHD-EGD) using a newly developed high-definition videoscope with those of unsedated ultrathin esophagogastroduodenoscopy (uUT-EGD) using a 5.2 mm videoscope. Methods: Twenty-two volunteers underwent both peroral sHD-EGD and transnasal uUT-EGD on the same day. Sedation consisted of 40 mg of propofol i.v. Both endoscopist and subject satisfaction levels were assessed using a 10 cm visual analogue scale. Results: All 22 subjects completed the sHD-EGD and 21 subjects completed the uUT-EGD. The endoscopist and subject satisfaction levels of sHD-EGD were significantly better than those of uUT-EGD (overall endoscopist satisfaction: 9 vs 4, P < 0.0001; overall subject satisfaction: 9 vs 3, P < 0.0001). The optical quality of the endoscopic images of sHD-EGD was significantly higher than that of uUT-EGD except in the duodenal bulb (overall quality: 8 vs 7, P < 0.0001). The interobserver agreement for EGD findings in sHD-EGD was better than with uUT-EGD, although the EGD findings in both sHD-EGD and uUT-EGD were similar. After undergoing both procedures, 91% were willing to have sHD-EGD again compared to 9% with uUT-EGD. Conclusions: The endoscopist and subject satisfaction levels and image quality of sHD-EGD were better than those of uUT-EGD. The routine use of high-definition videoscopes would be expected to provide better acceptability than that obtained with unsedated endoscopy. abstract_id: PUBMED:12160501 Comparison of thin versus standard esophagogastroduodenoscopy. Objective: To compare the tolerance, feasibility, and safety of ultrathin esophagogastroduodenoscopy (EGD) in unsedated patients with conventional EGD in sedated patients. Study Design: This was an unblinded, randomized controlled trial. Population: Diagnostic EGD was performed on 72 adult outpatients at a US Air Force community hospital residency. Patients were randomized to either ultrathin or conventional EGD (n = 33 and 39, respectively).
Outcomes Measured: Patients reported their tolerance of the procedure (pain, choking, gagging, and anxiety; scale 0-10), and the endoscopist reported the effectiveness of the procedure (successful intubation, reaching the duodenum, retroflexion, and duration of examination and recovery) and safety (complications). Results: No statistically significant difference was noted between the 2 groups in mean procedure time or pain during the procedure. Mean (standard error) recovery time was approximately halved in the ultrathin group vs the conventional group (21.5 ± 2.3 min vs 55.4 ± 2.3 min, P < .0001). Although patients undergoing ultrathin EGD had higher mean gagging and choking scores, they had lower mean anxiety scores. Of 33 patients randomized to the unsedated ultrathin EGD procedure, 29 completed the protocol. The retroflexion maneuver was completed in 85% of patients in the ultrathin EGD group and 100% of patients in the conventional EGD group (P = .017). No statistically significant difference was noted between groups as to the likelihood of reaching the second portion of the duodenum (97% vs 100%). Conclusions: Most patients tolerate ultrathin EGD with significantly shorter recovery time and less overall anxiety than with the conventional procedure. Techniques to reduce gagging and choking associated with ultrathin EGD may improve patient acceptance and tolerability. Adoption of ultrathin EGD by primary care physicians may decrease cost, time, and inconvenience while increasing access to EGD for many patients. abstract_id: PUBMED:34079324 Comparison of Lidocaine Spray and Lidocaine Ice Popsicle in Patients Undergoing Unsedated Esophagogastroduodenoscopy: A Single Center Prospective Randomized Controlled Trial. Purpose: Esophagogastroduodenoscopy (EGD) under topical pharyngeal anesthesia has the advantage of avoiding the unwanted cardiopulmonary adverse events experienced following intravenous sedation. Lidocaine spray is a common anesthetic option and is safe for unsedated EGD. Although several studies have compared different topical anesthetic agents, their formulations, and delivery techniques, questions still remain concerning the optimal mode of administration. We have designed a lidocaine formulation in the form of an ice popsicle and compared its effectiveness and tolerability with lidocaine spray in patients undergoing unsedated EGD. Methods: This was a single-center prospective randomized controlled trial. Unsedated EGD patients were randomly allocated to the lidocaine spray [Group (Gp) A] or lidocaine ice popsicle (Gp B) formulation. Results: In total, 204 unsedated EGD patients were evaluated. Compared to the spray, the lidocaine ice popsicle group showed better scores for endoscopist satisfaction (Gp A, 7.28 ± 1.44; Gp B, 7.8 ± 0.89; p = 0.0022), gag reflex (Gp A, 1.3 ± 0.66; Gp B, 1.02 ± 0.61; p = 0.0016), patient satisfaction (Gp A, 7.74 ± 0.82; Gp B, 8.08 ± 0.82; p = 0.0039), discomfort (Gp A, 6.54 ± 1.34; Gp B, 5.95 ± 1.21; p = 0.0012), and pain (Gp A, 5.38 ± 1.85; Gp B, 4.51 ± 2.01; p = 0.0015). Conclusion: Both the lidocaine spray and ice popsicle formulations are safe, effective options for diagnostic EGD, with the ice popsicle exhibiting better performance. We propose the lidocaine ice popsicle formulation for topical pharyngeal anesthesia in patients undergoing unsedated diagnostic EGD and suggest it may be a suitable option during the COVID-19 pandemic. Clinical Trial Register: Thai Clinical Trials Registry (TCTR) number TCTR20190502001.
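Because the lidocaine trial reports group means, standard deviations, and group-level p-values, its headline comparison can be re-derived from summary statistics alone. The sketch below does this for the endoscopist-satisfaction scores using Welch's t-test; the equal 102/102 split of the 204 patients is an assumption, and the authors may have used a different test.

```python
# Re-deriving a two-group comparison from the summary statistics reported in
# PUBMED:34079324: endoscopist satisfaction, spray 7.28 +/- 1.44 (Gp A) vs
# ice popsicle 7.80 +/- 0.89 (Gp B). Group sizes assumed equal (102/102).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=7.28, std1=1.44, nobs1=102,
                            mean2=7.80, std2=0.89, nobs2=102,
                            equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```

Under these assumptions the result lands close to the reported p = 0.0022, a useful sanity check when reading summary-only abstracts.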
abstract_id: PUBMED:17376037 Prospective randomized trial of transnasal versus peroral endoscopy using an ultrathin videoendoscope in unsedated patients. Aim: The aim of this study was to compare the acceptance and tolerance of transnasal and peroral esophagogastroduodenoscopy (EGD) using an ultrathin videoendoscope in unsedated patients. Methods: A total of 124 patients referred for diagnostic endoscopy were assigned randomly to have an unsedated transnasal EGD (n = 64) or peroral EGD (n = 60) with local anesthesia. An ultrathin videoendoscope with a diameter of 5.9 mm was used in this study. A questionnaire for tolerance was completed by the patient (a validated 0-10 scale where '0' represents no discomfort/well tolerated and '10' represents severe discomfort/poorly tolerated). Results: Of the 64 transnasal EGD patients, 60 patients (94%) had a complete examination. Four transnasal EGD examinations failed for anatomical reasons; all four patients were successfully examined when switched to peroral EGD. All 60 peroral EGD patients had a complete examination. Between the transnasal and peroral groups, there was a statistically significant difference in scores for discomfort during local anesthesia (1.5 ± 0.2 vs 2.6 ± 0.3, P = 0.003), discomfort during insertion (2.3 ± 0.3 vs 4.3 ± 0.3, P = 0.001), and overall tolerance during the procedure (1.6 ± 0.2 vs 3.8 ± 0.2, P = 0.001). In all, 95% of transnasal EGD patients and 75% of peroral EGD patients (P = 0.002) were willing to undergo the same procedure in the future. Four patients in the transnasal EGD group experienced mild epistaxis. Conclusion: For unsedated endoscopy using an ultrathin videoendoscope, transnasal EGD is well tolerated and considerably reduces patient discomfort compared with peroral EGD. abstract_id: PUBMED:14724812 Unsedated ultrathin EGD is well accepted when compared with conventional sedated EGD: a multicenter randomized trial. Background & Aims: In the United States, upper gastrointestinal endoscopy is usually performed using intravenous sedation. Sedation increases both the rate of complications and the costs of endoscopy. Unsedated esophagogastroduodenoscopy (EGD) using conventional 8-11-mm endoscopes is an alternative to sedated endoscopy but is generally perceived as unacceptable to many American patients. Unsedated EGD using ultrathin 5-6-mm endoscopes is better tolerated. A randomized trial comparing unsedated ultrathin EGD (UT-EGD) with sedated conventional EGD (C-EGD) in a diverse American population is needed. Methods: In this multicenter, randomized, controlled trial, 80 patients scheduled to undergo elective outpatient EGD were randomized to unsedated UT-EGD or sedated C-EGD. The study was carried out at San Francisco General Hospital, San Francisco Veterans Affairs Medical Center, and the Liver and Digestive Health Medical Clinic, San Jose. Results: Baseline characteristics of patients randomized to unsedated UT-EGD and sedated C-EGD were similar. Moreover, there were no significant differences in overall patient satisfaction and willingness to repeat endoscopy in the same manner between the 2 study groups. There was, however, a significant difference in median total procedure time between the 2 study groups of 1.5 hours (P < 0.0001). The mean (± SD) total procedure cost was 512.4 US dollars (± 100.8 US dollars) for sedated C-EGD and 328.6 US dollars (± 70.3 US dollars) for unsedated UT-EGD (P < 0.0001).
Conclusions: Patients undergoing unsedated UT-EGD are as satisfied as patients undergoing sedated C-EGD and are just as willing to repeat an unsedated UT-EGD. Unsedated UT-EGD was also faster and less costly, and it may allow greater accessibility to this procedure. abstract_id: PUBMED:10855118 Unsedated transnasal esophagogastroduodenoscopy: a view of the future. Unsedated transnasal esophagogastroduodenoscopy (EGD) is a unique approach to viewing the gastrointestinal tract. A significant portion of the expense and complications of conventional EGD is related to the use of conscious sedation. Transnasal EGD can be performed with a topical anesthetic alone and can be safely executed in inpatient and outpatient settings. abstract_id: PUBMED:24146101 Is the transnasal access for esophagogastroduodenoscopy in routine use equal to the transoral route? A prospective, randomized trial. Background And Study Aims: Routine esophagogastroduodenoscopy (EGD) is increasingly performed without sedation. Transoral (TO) and transnasal (TN) EGD differ in patient comfort and complication profiles. Patients And Methods: For a controlled, randomized, clinical trial comparing TN-EGD with TO-EGD without sedation, patients were assigned to TN-EGD using a thin endoscope (group 1, 93 patients) or TO-EGD using a standard endoscope (group 2, 90 patients). Physician-rated procedural time and complications as well as patient-rated side effects and preferences were compared. In group 3, 118 patients who had previously undergone TO-EGD underwent TN-EGD. Results: Between groups 1 and 2, there was no significant difference in procedural time. Nausea (p = 0.047) and epistaxis (p < 0.001) were significantly more frequent for TN-EGD. The conversion rate from TN- to TO-EGD was low, at 4.3%. For TN-EGD, patients' tolerance was better (p < 0.001) and gagging was less frequent (p < 0.001). In case of a future EGD, patients who know both procedures (group 3) strongly vote for TN-EGD (80%). All groups vote against sedation for future procedures (90%/90%/89%). Conclusions: Epistaxis can be relevant after TN-EGD but can mostly be managed conservatively. TN-EGD is superior to TO-EGD regarding subjective and objective gagging as well as procedural tolerance. Patients who have experienced both access routes prefer TN-EGD. TN-EGD without sedation should be aspired to for patient comfort and is recommended for routine use. abstract_id: PUBMED:23368664 Classification of patients who experience a higher distress level to transoral esophagogastroduodenoscopy than to transnasal esophagogastroduodenoscopy. Background: In Japanese routine clinical practice, endoscopy is generally carried out without sedation. The present study aimed to identify the factors essential for appropriate selection of transnasal esophagogastroduodenoscopy (TN-EGD) as an alternative to unsedated transoral esophagogastroduodenoscopy (TO-EGD). Patients And Methods: Subjects in this prospective cohort study comprised consecutive outpatients who underwent EGD at a single center. Factors predicting TO-EGD-induced distress were evaluated on a visual analog scale (VAS) and analyzed. Patients were classified into a two-layered system on the basis of these predictive factors, and the severity of distress between the TN-EGD and TO-EGD groups was compared using the VAS and the change in the rate-pressure product as subjective and objective indices, respectively.
Results: In total, 728 outpatients (390 male, 338 female; mean age, 63.1 ± 0.5 years; TO-EGD group, 630; TN-EGD group, 98) met the inclusion criteria. Multivariate logistic regression analysis confirmed that age <65 years (P < 0.01; odds ratio [OR], 1.69; 95% confidence interval [CI], 1.14-2.52), gender (female; P < 0.01; OR, 1.97; 95% CI, 1.34-2.91), marital status (single; P < 0.01; OR, 1.96; 95% CI, 1.18-3.27), and anxiety towards TO-EGD (P < 0.001; OR, 3.62; 95% CI, 2.44-5.37) were independently associated with intolerance. Both indices were significantly higher in the TO-EGD subgroup than in the TN-EGD subgroup in the high predictive class, but not in the low predictive class. Conclusion: Predictive factors for detecting intolerance to unsedated TO-EGD may be useful for appropriately selecting patients who should be switched from unsedated TO-EGD to TN-EGD. Answer: The prospective randomized comparison of unsedated ultrathin versus standard esophagogastroduodenoscopy (EGD) in routine outpatient gastroenterology practice aimed to compare the performance, feasibility, safety, and patient tolerance of ultrathin transnasal (UT), ultrathin oral (UO), and standard (SO) EGD. The study found that ultrathin endoscopy through both the transnasal and oral routes has limited use in routine outpatient practice. Patients in the UT and UO groups reported more intense anxiety before the procedure compared to the SO group, while SO patients experienced higher levels of anxiety during the procedure. Pain during insertion of the endoscope was least intense in the UO group, but gagging during insertion was more pronounced in the UO group compared to the UT and SO groups. The overall assessment by patients was better in the SO group, and the endoscopists' overall assessment for ultrathin EGD was poorer than for standard EGD. The study concluded that techniques for reducing pain and gagging may improve patient tolerance and that further technical improvements are needed to allow routine implementation of ultrathin endoscopy (PUBMED:12929058). Additional studies have explored various aspects of unsedated transnasal versus transoral EGD. For instance, one study found that transnasal EGD is well tolerated and considerably reduces patient discomfort compared with peroral EGD, with a high percentage of patients willing to undergo the same procedure in the future (PUBMED:17376037). Another study concluded that unsedated ultrathin EGD is well accepted when compared with conventional sedated EGD, with similar levels of patient satisfaction and willingness to repeat the procedure, while also being faster and less costly (PUBMED:14724812). Furthermore, a study comparing TN-EGD with TO-EGD without sedation found that TN-EGD is superior regarding subjective and objective gagging as well as procedural tolerance, with patients who experienced both access routes preferring TN-EGD (PUBMED:24146101). In summary, while ultrathin endoscopy through the transnasal route has shown some benefits in terms of patient discomfort and tolerance, it still faces limitations in routine outpatient practice, and further improvements are needed to enhance its feasibility and acceptance among patients and endoscopists. The reported odds ratios and confidence intervals from the patient-selection study above are revisited in the short sketch that follows.
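Since several of the abstracts above report multivariate results only as odds ratios with confidence intervals, a quick consistency check can be done on the log scale. The sketch below (Python; a generic illustration, not code from any of the cited studies) assumes the intervals are standard Wald-type 95% CIs and rebuilds the interval for the anxiety predictor reported in PUBMED:23368664 (OR 3.62, 95% CI 2.44-5.37).

```python
import math

def or_ci_check(or_point, ci_low, ci_high, z=1.96):
    """Recover the log-odds coefficient and its standard error from a
    reported odds ratio with an (assumed Wald-type) 95% confidence
    interval, then rebuild the interval as a consistency check."""
    beta = math.log(or_point)                               # log-odds coefficient
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)   # CI is symmetric on the log scale
    rebuilt = (math.exp(beta - z * se), math.exp(beta + z * se))
    return beta, se, rebuilt

# Anxiety towards TO-EGD, as reported in PUBMED:23368664: OR 3.62, 95% CI 2.44-5.37
beta, se, ci = or_ci_check(3.62, 2.44, 5.37)
print(f"log-odds = {beta:.3f}, SE = {se:.3f}")          # ~1.286 and ~0.201
print(f"rebuilt 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")   # ~(2.44, 5.37), matching the report
```

If the rebuilt interval does not roughly match the published one, the interval was probably not Wald-type (or the OR was adjusted separately), which is worth knowing before comparing effect sizes across studies.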
Instruction: Complete dislocation of the ulnar nerve at the elbow: a protective effect against neuropathy? Abstracts: abstract_id: PUBMED:27859367 Complete dislocation of the ulnar nerve at the elbow: a protective effect against neuropathy? Introduction: Recurrent complete ulnar nerve dislocation has been perceived as a risk factor for development of ulnar neuropathy at the elbow (UNE). However, the role of dislocation in the pathogenesis of UNE remains uncertain. Methods: We studied 133 patients with complete ulnar nerve dislocation to determine whether this condition is a risk factor for UNE. In all, the nerve was palpated as it rolled over the medial epicondyle during elbow flexion. Results: Of 56 elbows with unilateral dislocation, UNE localized contralaterally in 17 elbows (30.4%) and ipsilaterally in 10 elbows (17.9%). Of 154 elbows with bilateral dislocation, 26 had UNE (16.9%). Complete dislocation decreased the odds of having UNE by 44% (odds ratio = 0.475; P = 0.028), and was associated with less severe UNE (P = 0.045). Conclusions: UNE occurs less frequently and is less severe on the side of complete dislocation. Complete dislocation may have a protective effect on the ulnar nerve. abstract_id: PUBMED:26228078 Does ulnar nerve dislocation at the elbow cause neuropathy? Introduction: The role of ulnar nerve dislocation in the pathogenesis of ulnar neuropathy at the elbow (UNE) is not clear. Data exist for and against a causal relationship. Methods: We studied UNE patients and controls divided into 4 groups consisting of 203 UNE patient arms (185 with abnormal and 18 with normal diagnostic studies) and 49 controls (10 with abnormal and 39 with normal studies). In all arms we performed neurologic examination, short-segment nerve conduction studies (SSNCS), and ultrasonography (US). The frequency of partial and complete nerve dislocation was calculated in each group. Results: Dislocation tended to be more common in controls compared with UNE patients (P = 0.056). It was particularly common in controls with subclinical UNE and patients with UNE symptoms but normal diagnostic studies. Conclusion: Our data speak against a causal relationship between ulnar nerve dislocation and UNE. However, the findings also suggest that dislocation may cause mild ulnar nerve damage. abstract_id: PUBMED:38333238 Post-traumatic recurrent ulnar nerve dislocation at the elbow: a rare case report. Introduction And Importance: Several authors have also made reference to a less prevalent condition known as ulnar nerve subluxation at the elbow. However, this particular condition tends to manifest primarily in young individuals who engage in professional sports or activities involving extensive use of the forearm. A more severe form of ulnar nerve subluxation, ulnar nerve dislocation, gives rise to a characteristic dislocation and relocation of the nerve at the elbow during flexion and extension of the forearm. Due to the rarity of this condition in clinical settings and its predominant occurrence as subluxation in younger patients, there are instances where traumatic ulnar nerve dislocation can be overlooked and misdiagnosed as one of two commonly encountered pathological conditions: ulnar nerve entrapment or medial epicondylitis. Case Presentation: The authors present a 51-year-old male with chronic pain when moving his right forearm following a fall that caused a direct force injury to his elbow.
The patient was misdiagnosed with, and treated for, medial epicondylitis and early-stage ulnar nerve entrapment. However, the symptoms did not improve for a long time. The authors performed anterior transposition of the ulnar nerve using the subcutaneous technique, with a very good result and no residual pain. Clinical Discussion: The ulnar nerve can naturally be subluxed or dislocated if Osborne's ligament is loose or when there are anatomical variations in the medial epicondyle. In some cases, this ligament can be ruptured by trauma. The symptoms of ulnar instability are caused by friction neuritis. Dynamic ultrasound of the ulnar nerve in two positions clearly demonstrates this condition. Conclusion: Post-traumatic ulnar nerve dislocation is a rare condition, and its recurrent nature leads to neuritis or neuropathy. The condition can be overlooked or misdiagnosed as medial epicondylitis or early-stage ulnar nerve entrapment. Nerve transposition surgery gives good results. abstract_id: PUBMED:32338326 Ulnar nerve subluxation and dislocation: a review of the literature. The pathogenesis of ulnar nerve subluxation and dislocation is widely debated. Upon elbow flexion, the ulnar nerve slips out of the groove for the ulnar nerve, relocates medial or anterior to the medial epicondyle, and returns to its correct anatomical position upon extension. This chronic condition can cause neuritis or neuropathy; however, it has also been suggested that it protects against neuropathy by reducing tension along the nerve. This article reviews the extant literature with the aim of bringing knowledge of the topic into perspective and standardizing terminology. abstract_id: PUBMED:38356457 Ultrasonographic evaluation of ulnar nerve morphology in patients with ulnar nerve instability. Introduction/aims: Ulnar nerve instability (UNI) in the retroepicondylar groove is described as nerve subluxation or dislocation. In this study, considering that instability may cause chronic ulnar nerve damage by increasing the friction risk, we aimed to examine the effects of UNI on nerve morphology ultrasonographically. Methods: Asymptomatic patients with clinical suspicion of UNI were referred for further clinical and ultrasonographic examination. Based on ulnar nerve mobility on ultrasound, the patients were first divided into two groups: stable and unstable. The unstable group was further divided into two subgroups: subluxation and dislocation. The cross-sectional area (CSA) of the nerve was measured in three regions relative to the medial epicondyle (ME). Results: In the ultrasonographic evaluation, UNI was identified in 59.1% (52) of the 88 elbows. UNI was bilateral in 50% (22) of the 44 patients. Mean CSA was not significantly different between groups. A statistically significant difference in ulnar nerve mobility was found between the group with CSA of <10 versus ≥10 mm2 (p = .027). Nerve instability was found in 85.7% of elbows with an ulnar nerve CSA value of ≥10 mm2 at the ME level. Discussion: The probability of developing neuropathy in patients with UNI may be higher than in those with normal nerve mobility. Further prospective studies are required to elucidate whether asymptomatic individuals with UNI and increased CSA may be at risk for developing symptomatic ulnar neuropathy at the elbow. abstract_id: PUBMED:30864003 Ulnar nerve instability in the cubital tunnel of asymptomatic volunteers.
Purpose: Ulnar nerve instability (UNI) in the cubital tunnel is defined as ulnar nerve subluxation or dislocation. It is a common disorder that may be noted in patients with neuropathy or in asymptomatic individuals. Our prospective, single-site study utilized high-resolution ultrasonography (US) to evaluate the ulnar nerve for cross-sectional area (CSA) and measures of shear-wave elastography (SWE). Mechanical algometry was obtained from the ulnar nerve in the cubital tunnel to assess pressure pain threshold (PPT). Methods: Forty-two asymptomatic subjects (n = 84 elbows) (25 males, 17 females) aged 22-40 were evaluated. Two chiropractic radiologists, both with 4 years of ultrasound experience, performed the evaluation. Ulnar nerves in the cubital tunnel were sampled bilaterally in three different elbow positions utilizing US, SWE, and algometry. Descriptive statistics, two-way ANOVA, and rater reliability were utilized for data analysis with p ≤ 0.05. Results: Fifty-six percent of our subjects demonstrated UNI. There was a significant increase in CSA in subjects with UNI (subluxation: 0.066 mm2 ± 0.024, p = 0.027; dislocation: 0.067 mm2 ± 0.024, p = 0.003) compared to controls (0.057 mm2 ± 0.017) in all three elbow positions. There were no significant group differences in SWE or algometry. Inter- and intra-observer agreements for CSA of the ulnar nerves within the cubital tunnel were assessed using the intraclass correlation coefficient (ICC) and demonstrated moderate (ICC 0.54) and excellent (ICC 0.94) reliability. Conclusions: Most of the asymptomatic volunteers demonstrated UNI. There was a significant increase in CSA associated with UNI, implicating it as a risk factor for ulnar neuropathy in the cubital tunnel. There were no significant changes in ulnar nerve SWE and PPT. Intra-rater agreement was excellent for the CSA assessment of the ulnar nerve in the cubital tunnel. High-resolution US could be utilized to assess UNI and monitor for progression to ulnar neuropathy.
Combined greater arc and inferior arc injuries are also rare. We report an unusual PLFD pattern with associated inferior arc injury and acute ulnar nerve compression. Case Report: A 34-year-old male sustained a wrist injury after a motorcycle collision. Computed tomography scan revealed a trans-scaphoid, transcapitate, perilunate fracture-dislocation, and a distal radius lunate facet volar rim fracture with radiocarpal subluxation. Examination revealed acute ulnar neuropathy without median neuropathy. He underwent urgent nerve decompression and closed reduction, followed by open reduction internal fixation the next day. He recovered without complication. Conclusion: This case emphasizes the importance of a thorough neurovascular examination to rule out less commonly seen neuropathies. With up to 25% of perilunate injuries misdiagnosed, surgeons should have a low threshold for advanced imaging in high-energy injuries. abstract_id: PUBMED:24668453 Postoperative ulnar neuropathy is not necessarily iatrogenic: a prospective study on dynamic ulnar nerve dislocation at the elbow. Background: Patients who undergo surgery may develop ulnar neuropathy. Although the mechanism of ulnar neuropathy is still not clear, ulnar neuropathies are common causes of successful lawsuits against surgeons. Recently, the concept developed that endogenous patient factors can lead to postoperative peripheral neuropathies. We hypothesize that dynamic ulnar nerve dislocation at the elbow (DUNDE) may be a predisposing factor for ulnar irritation (i.e., neuropathy) in normal subjects. Methods: In a prospective investigation, patients aged 20 years and older presenting in our emergency department were asked to participate. Three physicians examined both elbows of subjects included in our study for evidence of DUNDE (through clinical and sonographic examination) and for clinical symptoms related to ulnar neuropathy. Results: Dynamic ulnar nerve dislocation was observed in 29.3% of examined subjects. No significant difference in its occurrence was observed in relation to gender or dominant side. Physical examination with provocation tests demonstrated significantly more positive Tinel tests and spontaneous signs of neuropathy in patients with dynamic dislocating ulnar nerves (14.7% vs. 1.1%). Conclusion: Dynamic ulnar nerve dislocation may be linked to ulnar nerve irritability (i.e., ulnar neuropathy) in normal subjects without history of trauma, surgical procedure, or anesthesia. Considering the high incidence of this variant in the general population, our study supports previous investigations suggesting that many postoperative ulnar nerve deficits are traceable to chronic patient conditions. Our study suggests that dynamic ulnar nerve dislocation is a predisposing factor in the development of ulnar neuropathy in the postoperative period. Notes: (1) neuropathy should be understood broadly here, as signs of nerve irritation/inflammation, independent of the pathophysiology and etiology; (2) because no specific term exists in the international anatomic nomenclature (Nomina Anatomica) to designate this variant, several synonyms have been used in the literature, leading to confusion and misleading conclusions concerning its traumatic etiologies and their consequences: (a) recurrent or habitual ulnar nerve luxation (or subluxation); (b) recurrent or habitual ulnar nerve dislocation; (c) ulnar nerve instability; (d) laxity of the ulnar nerve; and (e) ulnar nerve hypermobility.
abstract_id: PUBMED:16831653 Tardy ulnar nerve palsy in cubitus varus deformity associated with ulnar nerve dislocation in adults. Seven patients with tardy ulnar nerve palsy from a posttraumatic cubitus varus deformity were reviewed retrospectively. The severity of symptoms was grade I in 3 patients and grade II in 4 patients according to McGowan's classification. The mean internal rotation angle was 30.7 degrees (range, 25-45 degrees). The most prominent feature was dislocation of the nerve anterior to the medial epicondyle and entrapment of the nerve by the fibrous band of the flexor carpi ulnaris muscle. Of these 7 patients, 4 were treated by 3-dimensional osteotomy with ulnar nerve transposition, and 3 were treated by anterior transposition of the ulnar nerve. All patients improved clinically, and there was no significant difference between the group treated with osteotomy plus transposition and the group treated with anterior transposition alone. Ulnar nerve instability due to internal rotation deformity and distal entrapment was considered to be the main cause of neuropathy. Answer: The relationship between complete dislocation of the ulnar nerve at the elbow and ulnar neuropathy at the elbow (UNE) is complex and has been the subject of various studies. Some research suggests that complete dislocation of the ulnar nerve may actually have a protective effect against neuropathy. A study of 133 patients with complete ulnar nerve dislocation found that UNE occurred less frequently and was less severe on the side of complete dislocation, suggesting that complete dislocation may protect the ulnar nerve (PUBMED:27859367). This finding is supported by the observation that complete dislocation decreased the odds of having UNE by 44% and was associated with less severe UNE. However, other studies have presented different perspectives. For instance, a study involving UNE patients and controls found that dislocation tended to be more common in controls compared to UNE patients, which speaks against a causal relationship between ulnar nerve dislocation and UNE. Yet, the same study also suggested that dislocation may cause mild ulnar nerve damage (PUBMED:26228078). Another study on post-traumatic recurrent ulnar nerve dislocation at the elbow highlighted that this condition can lead to neuritis or neuropathy, and it can be overlooked or misdiagnosed as medial epicondylitis or early-stage ulnar nerve entrapment (PUBMED:38333238). A literature review also mentioned that chronic ulnar nerve subluxation or dislocation can cause neuritis or neuropathy, but it has been suggested that it might protect against neuropathy by reducing tension along the nerve (PUBMED:32338326). In summary, while some evidence indicates that complete dislocation of the ulnar nerve at the elbow may have a protective effect against neuropathy, other studies suggest that dislocation can still be associated with mild nerve damage or contribute to neuropathy under certain conditions. The exact role of dislocation in the pathogenesis of UNE remains uncertain, and further research is needed to fully understand this relationship. A rough arithmetic check on the unilateral-dislocation figures behind the reported odds ratio is sketched below.
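To make the "protective" direction of the PUBMED:27859367 result concrete, the following sketch rebuilds a crude odds ratio from the unilateral-dislocation counts quoted above (UNE ipsilateral to the dislocation in 10 of 56 elbows vs. contralateral in 17 of 56). This is only a back-of-the-envelope check on paired data; the published OR of 0.475 came from the authors' analysis over all elbows, so an exact match is not expected.

```python
def odds(events, total):
    """Odds of an outcome given an event count and group size."""
    p = events / total
    return p / (1 - p)

# Unilateral-dislocation elbows from PUBMED:27859367 (n = 56 patients):
# UNE on the dislocated (ipsilateral) side in 10, on the contralateral side in 17.
odds_dislocated = odds(10, 56)      # ~0.217
odds_non_dislocated = odds(17, 56)  # ~0.436

crude_or = odds_dislocated / odds_non_dislocated
print(f"crude OR (dislocated vs. non-dislocated side) = {crude_or:.2f}")
# ~0.50 -- same direction and similar magnitude as the published OR of 0.475,
# i.e., lower odds of UNE on the side with complete dislocation.
```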
Instruction: Is frozen section analysis of reexcision lumpectomy margins worthwhile? Abstracts: abstract_id: PUBMED:25319974 Is intraoperative frozen section analysis of reexcision specimens of value in preventing reoperation in breast-conserving therapy? Objectives: A prior study at our institution showed a marked reduction in reoperation for margin reexcision following the development of an intraoperative frozen section evaluation of margins (FSM) practice on lumpectomy specimens from patients undergoing breast-conserving therapy (BCT). This study aimed to examine the frequency of FSM utilization, FSM pathology performance, and outcomes for BCT patients undergoing margin reexcision only. Methods: Consecutive reexcision-only specimens were reviewed from a 40-month period following the development of the FSM practice. Clinicopathologic features and patient outcomes were assessed. Results: FSM was performed in 46 (30.7%) of 150 reexcision-only operations. Of the 46 operations with FSM, there were 28 (60.9%) true-negative, 12 (26.1%) true-positive, six (13.0%) false-negative, and no false-positive cases. There was no difference in further reexcision, total operations, or conversion to mastectomy among patients with and without FSM. Need for further reexcision was significantly associated with tumor multifocality (P = .008). Conclusions: Despite overall good pathology performance for FSM in reexcision-only specimens, use of FSM did not affect patient outcome. Rather, underlying disease biology appeared most significant in predicting whether adequate surgical margins could be attained. abstract_id: PUBMED:8174059 Is frozen section analysis of reexcision lumpectomy margins worthwhile? Margin analysis in breast reexcisions. Background: The authors performed reexcision lumpectomy on breast cancer patients whose tumor was at or close to the resection margin or whose margin status was unknown. Frozen section analysis (FSA) of reexcision lumpectomy margins was performed to allow additional excision of margins or mastectomy, saving the patient another operation or an additional radiation boost. Methods: The authors reviewed the accuracy of FSA of margins in 107 patients undergoing reexcision lumpectomy between 1987 and 1992. There were 359 frozen sections performed on 156 specimens. Sensitivity and specificity of FSA for each frozen section margin, specimen, and patient were evaluated, as was gross inspection of tumor involvement at the resection margins. The accuracy of each pathologist's use of FSA also was evaluated. Results: FSA sensitivity per frozen section margin, specimen, and patient was 0.90, 0.89, and 0.85, respectively. The specificity of gross inspection was 0.97, 0.96, and 0.96 (sensitivity, 0.44), which was significantly less accurate than that of FSA (P = 0.0015) or permanent section (P = 0.019). There was no significant discordance between FSA and permanent section. Of 19 pathologists performing FSA, 6 evaluated 10 or more specimens. The error rate ranged from 4% to 10% among pathologists with 10 or more readings, whereas 12 of 13 pathologists with fewer readings had no errors. The remaining pathologist had a 100% error rate, significantly worse (range, P = 0.0085-0.02) than any experienced pathologist. Thirty-four (32%) patients underwent additional excision (24 patients) or mastectomy (10 patients) based on the results of FSA, which saved the patients from undergoing another operation. No one required an additional operation or a mastectomy because of a false FSA result.
Conclusion: FSA is safe and accurate in evaluating reexcision lumpectomy margins. Gross inspection is not reliable in margin evaluation. FSA saved an additional operation 32% of the time. Obtaining clear margins during one procedure eliminates the necessity of an additional radiation boost and probably will improve cosmesis. abstract_id: PUBMED:25855313 Efficacy of intraoperative entire-circumferential frozen section analysis of lumpectomy margins during breast-conserving surgery for breast cancer. Background: Intraoperative frozen section analysis of the surgical margins during breast-conserving surgery (BCS) for breast cancer can reliably achieve clear surgical margins and prevent re-operations. The aim of this study was to assess intraoperative entire-circumferential frozen section analysis (IEFSA) of the lumpectomy margins during BCS. Methods: A total of 1029 patients who underwent BCS with IEFSA between June 2007 and July 2013 were available for assessment. The inner surfaces of the shaved lumpectomy margins were examined as frozen sections during BCS. The margins were defined as positive when cancer cells were present within 5 mm from the edge of the outermost margins of the specimens. Results: Out of 1029 patients, 312 patients (30.3%) had positive margins after the initial lumpectomy and underwent additional resections during BCS. Fourteen patients (1.4%) underwent mastectomy following the results of additional resections during the first surgery. Of 1015 patients who completed BCS, 60 patients (5.9%) were found to have positive margins in the final pathology. One patient (0.1%) underwent re-operation after BCS, while the residual disease of the other 59 patients was judged to be minimal. Of the 312 patients who were judged to have positive margins after the initial lumpectomy with IEFSA, 53 patients (16.9%) were found to have negative margins in the final pathology. At a median follow-up time of 54.1 months, one patient (0.1%) had a recurrence of breast cancer in the preserved breast. Conclusion: IEFSA is useful for preventing the need for re-operation and local recurrence after BCS. abstract_id: PUBMED:28690654 The Usefulness of Intraoperative Circumferential Frozen-Section Analysis of Lumpectomy Margins in Breast-Conserving Surgery. Purpose: Intraoperative frozen-section analysis of the lumpectomy margin during breast-conserving surgery (BCS) is an excellent method for obtaining a clear resection margin. This study aimed to investigate the usefulness of intraoperative circumferential frozen-section analysis (IOCFS) of the lumpectomy margin during BCS for breast cancer, and to find factors that increase the conversion to mastectomy. Methods: From 2007 to 2011, 509 patients with breast cancer underwent IOCFS during BCS. The outer surfaces of the shaved lumpectomy margins were evaluated. A negative margin was defined as no ink on the tumor. All margins were evaluated using permanent section analysis. Results: Among the 509 patients, 437 (85.9%) underwent BCS and 72 (14.1%) finally underwent mastectomy. Of the 483 pathologically confirmed patients, 338 (70.0%) were true-negative, 24 (5.0%) false-negative, 120 (24.8%) true-positive, and 1 (0.2%) false-positive. Twenty-four of the 509 patients (4.7%) had undetermined margins (atypical ductal hyperplasia or ductal carcinoma in situ) on the first IOCFS. The IOCFS has an accuracy of 94.8% with 83% sensitivity, 99.7% specificity, 93.4% negative predictive value, and 99.2% positive predictive value.
Sixty-three cases (12.4%) were converted to mastectomy during the first operation. Of the 446 (87.6%) patients who successfully underwent BCS, 64 patients received additional excisions and 32 were reoperated to achieve clear margins (reoperation rate, 6.3%). Twenty-three of the reoperated patients underwent re-excisions using a second intraoperative frozen section analysis, and achieved BCS. Nine cases were additionally converted to mastectomy. No significant differences in age, stage, and biological factors were found between the BCS and mastectomy cases. Factors such as invasive lobular carcinoma, multiple tumors, large tumors, and multiple excisions increased the conversion to mastectomy. Conclusion: The IOCFS analysis during BCS is useful in evaluating lumpectomy margins and preventing reoperation. abstract_id: PUBMED:21861234 Cost-effectiveness analysis of routine frozen-section analysis of breast margins compared with reoperation for positive margins. Background: Negative margins are associated with decreased local recurrence after lumpectomy for breast cancer. A 2nd operation for re-excision of positive margins is required at rates varying from 15% to 50%. At our institution we routinely use frozen-section analysis of all margins to minimize rates of 2nd operations. The aim of this study was to evaluate the cost/benefit of routine frozen-section analysis. Methods: A decision tree was built to compare 2 strategies: (A) lumpectomy without frozen section and a 2nd operation for positive margin(s) versus (B) lumpectomy with intraoperative frozen-section analysis and a 2nd operation for positive margin(s). For strategy A the rate of positive margins and reoperation was varied from 15 to 50%. For strategy B, a 2nd operation rate of 3% was used. Review of our institutional experience demonstrates intraoperative re-excision of at least 1 margin in 57% of cases performed with frozen-section support. Results: The cost to provider (i.e., institution) per patient resected to negative margins for strategy A ranged from $4835 to $6306. Average weighted cost of strategy B was $5708. Strategy B was less expensive when the reoperation rate was above 36%. The cost to payor (i.e., Medicare) for strategy A ranged from $3577 to $4665. Average weighted cost for strategy B was $3913. Strategy B was less expensive when the re-excision rate was above 26%. Conclusion: Routine use of frozen-section analysis of lumpectomy margins decreases reoperation rates for margin control; therefore, it can be cost effective for both provider and payor. abstract_id: PUBMED:9327150 The role of frozen section analysis of margins during breast conservation surgery. Purpose: The role of frozen section analysis during breast conservation surgery is undefined. Assessment of margins using permanent section evaluation is the standard method of ensuring complete tumor excision. If the margin is positive, however, surgical re-excision is necessary to reduce the likelihood of subsequent local recurrence. Therefore, biopsy of the surgical cavity with immediate pathological evaluation during lumpectomy was performed to evaluate the effect on local recurrence, the number of re-excisions, and cosmesis. Patients And Methods: One hundred sixty patients underwent attempted lumpectomy with frozen section margin determination. One hundred forty patients were available for long-term follow-up (mean = 57 months, median = 46 months).
All patients underwent attempted breast conservation surgery, which consisted of tumorectomy with excision of a greater than 1-cm rim of grossly normal tissue. Tumor margins were obtained by intraoperative biopsy with frozen section analysis of the lumpectomy cavity walls. Results: In 21 patients (15%), frozen section analyses (FSA) revealed positive margins, resulting in immediate re-excision. In seven of these patients (5%), margins were persistently positive, and these patients therefore underwent mastectomy. Fourteen patients were successfully re-excised to a negative margin. The sensitivity and specificity of FSA were 91% and 100%, respectively. Five percent of patients definitively managed by lumpectomy with FSA of margins recurred locally. The mean cosmesis score after radiotherapy was 7.0 out of a possible 10, correlating with a good to excellent result. Discussion: The accuracy of FSA, low recurrence rate, avoidance of reoperation, and good cosmesis indicate that intraoperative frozen section analysis should be adopted as a safe and effective method of margin analysis during breast conservation surgery. abstract_id: PUBMED:30535567 Malignant eyelid tumors: Are intra-operative rapid frozen section and permanent section diagnoses of surgical margins concordant? Purpose: To study the concordance between intra-operative rapid frozen section and permanent section diagnoses of surgical margins following wide surgical excisional biopsy of malignant eyelid tumors. Methods: This is a retrospective study of 120 cases and 429 frozen section slides. Results: Of 120 cases, 75 (63%) had sebaceous gland carcinoma, 34 (28%) had basal cell carcinoma, and 11 (9%) had squamous cell carcinoma. All cases with these malignant eyelid tumors underwent wide surgical excisional biopsy under frozen section control of surgical margins. A total of 429 frozen section slides were reviewed for rapid frozen section diagnosis. Eyelid reconstruction was performed in all cases after clearance was obtained by rapid frozen section diagnosis of surgical margins as negative for tumor infiltration. Permanent section diagnosis of surgical margins was positive for tumor infiltration in 5 (1%) slides, which were reported as negative on rapid frozen section diagnosis of surgical margins, and was negative for tumor infiltration in 3 (< 1%), which were reported as positive on initial rapid frozen section diagnosis of surgical margins. The sensitivity, specificity, and accuracy of intra-operative rapid frozen section diagnosis of surgical margins for malignant eyelid tumors were 89%, 99%, and 98%, respectively. Conclusion: The concordance between the intra-operative rapid frozen section and permanent section diagnoses of surgical margins following wide surgical excisional biopsy of malignant eyelid tumors is excellent at 98%. abstract_id: PUBMED:30950438 Frozen section is not cost beneficial for the assessment of margins in oral cancer. Background: Routine use of frozen section (FS) is costly, and the procedure is sparsely available in resource-poor countries. A proper cost-benefit analysis may help to reduce its routine use and would empower surgeons to perform oral cancer surgeries without an FS facility. FS is performed to identify microscopic spread beyond gross disease that cannot be assessed clinically. Objective: Our primary aim was to perform a cost-benefit analysis of FS in the assessment of margins in oral cavity squamous cell carcinoma (OSCC).
Materials And Methods: Retrospective study of prospectively collected data of 1311 consecutive patients who were operated on between January 2012 and October 2013. The gross and microscopic margin status of each patient was extracted from the patient's chart. Cost estimates were performed to calculate the financial burden of FS as well as expenses incurred on adjuvant treatment resulting from inadequate margins. Result: Microscopic spread changed the gross margin status in 5.2% (65/1237) of patients. Of this entire cohort of 1237 patients, FS helped 29 (2.3%) patients to achieve tumor-free margins, and it changed the adjuvant treatment plan in 9 (0.7%) patients. The cost of FS for each patient was INR 11052. The cost-benefit ratio of FS was 12:1. Gross examination alone could have identified the majority of the inadequate margins. Conclusion: Frozen section for assessment of margin status bears a poor cost-benefit ratio. Meticulous gross examination of the entire surgical specimen is sufficient to identify the majority of inadequate margins. abstract_id: PUBMED:37285683 Frozen Section Analysis in Head and Neck Surgical Pathology: A Narrative Review of the Past, Present, and Future of Intraoperative Pathologic Consultation. Frozen section has remained the diagnostic gold standard for intraoperative pathological evaluation of surgical margins for head and neck specimens. While achieving tumor-free margins is of utmost importance to all head and neck surgeons, in practice, there are numerous debates and a lack of standardization for the role and method of intraoperative pathologic consultation. This review serves as a summary guide to the historical and contemporary practice of frozen section analysis and margin mapping in head and neck cancer. In addition, this review discusses current challenges in head and neck surgical pathology, and introduces 3D scanning as a groundbreaking technology to bypass many of the pitfalls in the current frozen section workflow. The ultimate goal for all head and neck pathologists and surgeons should be to modernize practices and take advantage of new technology, such as virtual 3D specimen mapping techniques, that improve the workflow for intraoperative frozen section analysis. abstract_id: PUBMED:23561622 Role of frozen section analysis of surgical margins during robot-assisted laparoscopic radical prostatectomy: a 2608-case experience. It remains unanswered whether and how intraoperative frozen section analysis contributes to the surgical margin status on radical prostatectomy specimens. We aimed to determine whether frozen section analysis during radical prostatectomy reduces the incidence of positive surgical margins. We retrospectively analyzed a consecutive series of patients undergoing robot-assisted laparoscopic radical prostatectomy performed at our institution between 2004 and 2011. We identified 2608 cases, including 1128 (43.3%) where intraoperative frozen section analysis was performed to assess surgical margins. Of the cases with positive (n = 60; 5.3%)/negative (n = 1029; 91.2%)/atypical or indeterminate (n = 39; 3.5%) frozen section analyses, 22 (36.7%)/83 (8.1%)/4 (10.3%) were found to have positive surgical margins on radical prostatectomy specimens, respectively. Thus, 109 (9.7%) of 1128 cases with frozen section analysis had positive surgical margins, compared with 163 (11.0%) of 1480 cases with no frozen section analysis (P = .264).
When the patients were subgrouped by histopathologic characteristics, frozen section analysis led to a considerable reduction in the rate of positive surgical margins in cases with biopsy Gleason score 7 (12.4% → 8.7%; P = .087)/8 (28.6% → 16.3%; P = .048)/≥7 (15.3% → 10.1%; P = .012) tumor or pT3b (36.6% → 23.2%; P = .075)/≥pT3b (38.1% → 25.4%; P = .091) disease. Multivariate analysis further revealed that performing frozen section analysis in biopsy Gleason score 7 or higher tumors was an independent predictor of negative surgical margins (odds ratio, 0.61; P = .018). In addition, frozen section analysis of the distal urethra or apex of the prostate (7.5%, P = .035) as well as multiple negative frozen section analyses (≥2: 6.2%, P = .001; ≥4: 2.2%, P = .007) correlated with significantly lower rates of positive surgical margins, compared with no frozen section analysis. Overall, intraoperative frozen section analysis did not dramatically change the surgical margin status of radical prostatectomy specimens. Nonetheless, it could be useful in preventing incomplete tumor resection, especially in men with high-grade (Gleason score ≥7) tumor at the apex. Answer: The abstracts provided offer insights into the value of frozen section analysis (FSA) of reexcision lumpectomy margins in breast-conserving therapy (BCT) for breast cancer. The studies explore the efficacy, cost-effectiveness, and impact of FSA on reoperation rates and patient outcomes. Several studies support the use of FSA in this context. For instance, one study found that FSA is safe and accurate in evaluating reexcision lumpectomy margins, saving an additional operation 32% of the time (PUBMED:8174059). Another study reported that intraoperative entire-circumferential frozen section analysis (IEFSA) during BCS is useful for preventing the need for re-operation and local recurrence after BCS (PUBMED:25855313). Similarly, the use of intraoperative circumferential frozen-section analysis (IOCFS) during BCS was found to be useful in evaluating lumpectomy margins and preventing reoperation (PUBMED:28690654). Moreover, routine use of frozen-section analysis of lumpectomy margins was shown to decrease reoperation rates for margin control, potentially being cost-effective for both providers and payors (PUBMED:21861234). Another study highlighted the accuracy of FSA, low recurrence rate, avoidance of reoperation, and good cosmesis, suggesting that intraoperative frozen section analysis should be adopted as a safe and effective method of margin analysis during breast conservation surgery (PUBMED:9327150). However, not all findings were uniformly positive. One study found that despite good pathology performance for FSM in reexcision-only specimens, the use of FSM did not affect patient outcomes, and underlying disease biology was more significant in predicting whether adequate surgical margins could be attained (PUBMED:25319974). Additionally, a study on oral cancer suggested that frozen section for assessment of margin status bears a poor cost-benefit ratio, and meticulous gross examination of the entire surgical specimen is sufficient to identify the majority of inadequate margins (PUBMED:30950438). In conclusion, while there is evidence to suggest that FSA can be beneficial in reducing reoperation rates and improving patient outcomes in breast-conserving therapy, its value may depend on specific circumstances such as the underlying disease biology and the accuracy of gross examination.
The cost-effectiveness of FSA also varies, and it may not be universally beneficial across all types of cancer surgeries.
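Two sets of numbers in the abstracts above can be reproduced directly. The sketch below first rebuilds the diagnostic-accuracy metrics of PUBMED:28690654 from its reported 2x2 counts, then reproduces the break-even reoperation rates of PUBMED:21861234 under the simplifying assumption (ours, not the authors') that strategy A's cost scales linearly with the reoperation rate between the reported endpoints.

```python
# Diagnostic accuracy of IOCFS (PUBMED:28690654), from the reported
# 2x2 counts over 483 pathologically confirmed cases.
tp, fp, tn, fn = 120, 1, 338, 24

sensitivity = tp / (tp + fn)                 # 0.833 -> "83% sensitivity"
specificity = tn / (tn + fp)                 # 0.997 -> "99.7% specificity"
ppv = tp / (tp + fp)                         # 0.992 -> "99.2% positive predictive value"
npv = tn / (tn + fn)                         # 0.934 -> "93.4% negative predictive value"
accuracy = (tp + tn) / (tp + fp + tn + fn)   # 0.948 -> "94.8% accuracy"
print(f"{sensitivity:.3f} {specificity:.3f} {ppv:.3f} {npv:.3f} {accuracy:.3f}")

# Cost break-even (PUBMED:21861234). Reported provider costs for strategy A
# span $4835 (at a 15% reoperation rate) to $6306 (at 50%); strategy B's
# average weighted cost is $5708. Assuming linear scaling in between:
def break_even(cost_low, cost_high, rate_low, rate_high, cost_b):
    """Reoperation rate at which strategy A's cost reaches strategy B's."""
    slope = (cost_high - cost_low) / (rate_high - rate_low)
    return rate_low + (cost_b - cost_low) / slope

provider = break_even(4835, 6306, 0.15, 0.50, 5708)  # ~0.36 -> "above 36%"
payor = break_even(3577, 4665, 0.15, 0.50, 3913)     # ~0.26 -> "above 26%"
print(f"provider break-even ~ {provider:.0%}, payor break-even ~ {payor:.0%}")
```

Both computed break-even rates match the thresholds stated in the abstract, which suggests the published decision tree behaves approximately linearly over the reported range.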
Instruction: Is the physician's adherence to prescription guidelines associated with the patient's socio-economic position? Abstracts: abstract_id: PUBMED:19692716 Is the physician's adherence to prescription guidelines associated with the patient's socio-economic position? An analysis of statin prescription in South Sweden. Background: Knowledge about the social and economic determinants of prescription is relevant in healthcare systems like the Swedish one, which is based on the principle of equity, and which aims to allocate resources on the basis of need and not on criteria that are based on social constructs. We therefore investigated the association between patient and healthcare practice (HCP) characteristics on the one hand, and adherence to guidelines for statin prescription on the other, with a focus on social and economic conditions. Methods: The study included all patients in the Skåne region of Sweden who received a statin prescription between July 2005 and December 2005; 15 581 patients in 139 privately administered HCPs and 24 593 patients in 142 publicly administered HCPs. Socio-economic status was established using data from Longitudinal Multilevel Analysis in Skåne, and a stratified multilevel regression analysis was performed. Results: The proportion of patients receiving recommended statins was lower among privately administered HCPs than among publicly administered HCPs (65% vs 80%). Among men (but not women), low income (PR(private HCP) = 1.04 (1.01 to 1.09) and PR(public HCP) = 1.02 (0.99 to 1.07)) and cohabitation (PR(private HCP) = 1.04 (1.04 to 1.08) and PR(public HCP) = 1.03 (1.01 to 1.07)) were associated with a higher adherence to guidelines. Conclusion: The physician's decision to prescribe a recommended statin is conditioned by the socio-economic and demographic characteristics of the patient. Beyond individual characteristics, the contextual circumstances of the HCP were also associated with adherence to guidelines. An increased understanding of the connection between the patient's socio-economic status and the decisions made by the physician might be of relevance when planning interventions aimed at promoting efficient and evidence-based prescription. abstract_id: PUBMED:21566005 Physician adherence to the dyslipidemia guidelines is as challenging an issue as patient adherence. Background: A wide therapeutic gap exists between evidence-based guidelines and their practice in primary care, which is primarily attributed to physician and patient adherence. Objective: This study aims to differentiate physician and patient adherence to dyslipidemia secondary prevention guidelines and various factors affecting it. Methods: A post hoc analysis of data collected by a prospective cluster randomized trial with 7041 patients diagnosed with clinical atherosclerosis requiring secondary prevention of dyslipidemia and 127 primary care physicians over an 18-month period. Adherence was measured by physicians' and patients' actions taken according to the guidelines and correlated using multivariate logistic regressions. Results: Physician adherence was 36.9% for lipid profile screening, 27.6% for pharmacotherapy up-titration and 21.0% for pharmacotherapy initiation. Physician adherence was positively correlated with frequent patient visits (odds ratio [OR] = 1.304), having more dyslipidemic patients (OR = 1.304) and treating immigrants (OR = 1.268).
Patient adherence was 83.8%, 71.9% and 62.6% for medication up-titration, lipid profile screening and pharmacotherapy initiation, respectively. Patient adherence was affected by attending clinics with many dyslipidemic patients (OR = 1.542), being older (OR = 1.271) and being treated by a male physician (OR = 0.870). Conclusions: We learn from this study that (i) physician non-adherence was a major cause for the failure to follow guidelines, (ii) pharmacotherapy initiation was the most challenging issue to tackle and (iii) greater adherence occurred mainly in high-volume conditions (patients and visits). Practical implications include a designated focus on prevention of metabolic conditions in primary care, whether by cardiologists or by primary care clinics specializing in metabolic conditions, and the need to facilitate more frequent follow-up visits. abstract_id: PUBMED:31732873 Socio-economic and behavioral determinants of prescription and non-prescription medicine use: the case of Turkey. Background: Demographic and socio-economic factors determine pharmaceutical health care utilization for individuals. Prescription and non-prescription medicine use are expected to have different determinants. Even though prescription and non-prescription medicine use has been well researched in developed countries, there are only a few studies for developing countries. Objectives: This paper aims to analyze the socio-economic and individual characteristics that determine the use of prescription and non-prescription medicine. We examine the issue for the specific case of Turkey, since Turkey's health system has undergone significant changes in the last two decades, especially after 2003 with the "Health Transformation Programme". Methods: Data from the nationally representative "Health Survey" are used in the analysis. The data set covers the 2008-2016 period with two-year intervals. Pooled multivariate logistic regression is employed to identify the underlying determinants of prescription and non-prescription medicine use. Results: When compared to 2008, non-prescription medicine use decreases until 2012; however, an increasing trend appears after 2012. For prescription medicine use, a decreasing trend emerges after 2012. Findings from the marginal effects indicate that for non-prescription medicine use, the highest effect stems from health status. For prescription medicine use, the highest marginal effects arise from age, health, and employment status, indicating the importance of the need and predisposing factors. Conclusion: Decreasing non-prescription medicine use largely depends on easier access to health care service utilization. Although having health insurance has a positive relationship with prescription medicine use, there is still a problem for individuals living in a rural area and having a lower income level, since they are more likely to use non-prescription medicine. abstract_id: PUBMED:28362354 Socio-Economic Position and Suicidal Ideation in Men. People in low socio-economic positions are over-represented in suicide statistics and are at heightened risk for non-fatal suicidal thoughts and behaviours. Few studies have tried to tease out the relationship between individual-level and area-level socio-economic position, however. We used data from Ten to Men (the Australian Longitudinal Study on Male Health) to investigate the relationship between individual-level and area-level socio-economic position and suicidal thinking in 12,090 men.
We used a measure of unemployment/employment and occupational skill level as our individual-level indicator of socio-economic position. We used the Index of Relative Socio-Economic Disadvantage (a composite multidimensional construct created by the Australian Bureau of Statistics that combines information from a range of area-level variables, including the prevalence of unemployment and employment in low-skilled occupations) as our area-level indicator. We assessed suicidal thinking using the Patient Health Questionnaire (PHQ-9). We found that, even after controlling for common predictors of suicidal thinking, low individual-level and area-level socio-economic position heightened risk. Individual-level socio-economic position, however, appeared to exert the greater influence of the two. There is an onus on policy makers and planners from within and outside the mental health sector to take individual- and area-level socio-economic position into account when they are developing strategic initiatives. abstract_id: PUBMED:10090868 Compliance with mammography guidelines: physician recommendation and patient adherence. Background: Guidelines recommend that women ages 50-75 years receive screening mammography every 1-2 years. We related receipt of physician recommendations for mammography and patient adherence to such recommendations to several patient characteristics. Methods: We retrospectively reviewed medical records of 1,111 women ages 50-75 attending three clinics in an urban university medical center. We ascertained overall compliance with mammography guidelines and two components of compliance: receipt of a physician recommendation and adherence to a recommendation. Outcome measures were the proportion of patients demonstrating each type of compliance and adjusted odds ratios, according to several patient-related characteristics. Results: Overall, 66% of women received a recommendation. Of women receiving a documented recommendation, 75% adhered. Factors showing significant positive associations with receiving a recommendation included being a patient in the general internal medicine clinic, having private insurance, visiting the clinic more often, and having a recent Pap smear. Patient adherence was positively associated with private insurance and Pap smear history, negatively associated with internal medicine, and not associated with visit frequency. Conclusions: Patient factors influencing physician mammography recommendations may be different from those associated with patient adherence, except for having private health insurance, which was a predictor of both. abstract_id: PUBMED:26396585 Development of an international scale of socio-economic position based on household assets. Background: The importance of studying associations between socio-economic position and health has often been highlighted. Previous studies have linked the prevalence and severity of lung disease with national wealth and with socio-economic position within some countries, but there has been no systematic evaluation of the association between lung function and poverty at the individual level on a global scale. The BOLD study has collected data on lung function for individuals in a wide range of countries; however, a barrier to relating this to personal socio-economic position is the need for a suitable measure to compare individuals within and between countries.
In this paper we test a method for assessing socio-economic position based on the scalability of a set of durable assets (Mokken scaling), and compare its usefulness across countries of varying gross national income per capita. Results: Ten out of 15 candidate asset questions included in the questionnaire were found to form a Mokken-type scale closely associated with GNI per capita (Spearman's rank correlation rs = 0.91, p = 0.002). The same set of assets conformed to a scale in 7 out of the 8 countries, the remaining country being Saudi Arabia, where most respondents owned most of the assets. There was good consistency in the rank ordering of ownership of the assets in the different countries (Cronbach's alpha = 0.96). Scores on the Mokken scale were highly correlated with scores developed using principal component analysis (rs = 0.977). Conclusions: Mokken scaling is a potentially valuable tool for uncovering links between disease and socio-economic position within and between countries. It provides an alternative to currently used methods such as principal component analysis for combining personal asset data to give an indication of individuals' relative wealth. Relative strengths of the Mokken scale method were considered to be ease of interpretation, adaptability for comparison with other datasets, and reliability of imputation for even quite large proportions of missing values. abstract_id: PUBMED:29584742 Patient and physician characteristics affect adherence to screening mammography: A population-based cohort study. Background: Screening mammograms are widely recommended biennially for women between the ages of 50 and 74. Despite the benefits of screening mammograms, full adherence to recommendations falls below 75% in most developed countries. Many studies have identified individual (obesity, smoking, socio-economic status, and co-morbid conditions) and primary-care physician parameters (physician age, gender, clinic size and cost) that influence adherence, but little data exists from large population studies regarding the interaction of these individual factors. Methods: We performed a historical cohort study of 44,318 Israeli women age 56-74 using data captured from electronic medical records of a large Israeli health maintenance organization. Univariate analysis was used to examine the association between each factor and adherence (none, partial or full) with screening recommendations between 2008 and 2014. Multivariate analysis was used to examine the significance of these factors in combination, using binary and multinomial logistic regression. Results: Among 44,318 women, 42%, 43% and 15% were fully, partially and non-adherent to screening recommendations, respectively. Factors associated with inferior adherence identified in our population included smoking, obesity, low body weight, low socio-economic status, depression, diabetes mellitus and infrequent physician visits, while women with ischemic heart disease, female physicians, physicians between the ages of 40 and 60, and medium-sized clinics were associated with higher screening rates. Most factors remained significant in the multivariate analysis. Conclusions: Both individual and primary-care physician factors contribute to adherence to mammography screening guidelines. Strategies to improve adherence and address disparities in mammography utilization will need to address these factors. abstract_id: PUBMED:27311330 Can Physicians Affect Patient Adherence With Medication?
Non-compliance with medication therapy remains an unsolved and expensive problem for healthcare systems around the world, yet we know little about the factors that affect a patient's decision to follow treatment recommendations. In particular, there is little evidence on the extent to which doctors can influence patient adherence behavior. This study uses a unique panel dataset comprising all prescription drug users, physicians, and all prescription drug sales in Denmark over 7 years to analyze the contributions of doctor-specific, patient-specific, and drug-specific factors to the adherence decision. We find that physicians exert substantial influence on patient compliance. Further, the quality of the match between a doctor and a patient accounts for a substantial portion of the variation in adherence outcomes. This suggests that the sorting of patients across doctors is an important mechanism that affects patient adherence beyond the effects of individual patient-specific and physician-specific factors. abstract_id: PUBMED:33668089 Socio-Economic Position, Cancer Incidence and Stage at Diagnosis: A Nationwide Cohort Study in Belgium. Background: Socio-economic position is associated with cancer incidence, but the direction and magnitude of this relationship differ across cancer types, geographical regions, and socio-economic parameters. In this nationwide cohort study, we evaluated the association between different individual-level socio-economic and -demographic factors, cancer incidence, and stage at diagnosis in Belgium. Methods: The 2001 census was linked to the nationwide Belgian Cancer Registry for cancer diagnoses between 2004 and 2013. Socio-economic parameters included education level, household composition, and housing conditions. Incidence rate ratios were assessed through Poisson regression models. Stage-specific analyses were conducted through logistic regression models. Results: Deprived groups showed higher risks for lung cancer and head and neck cancers, whereas an inverse relation was observed for malignant melanoma and female breast cancer. Typically, associations were more pronounced in men than in women. A lower socio-economic position was associated with reduced chances of being diagnosed with known or early stage at diagnosis; the strongest disparities were found for male lung cancer and female breast cancer. Conclusions: This study identified population groups at increased risk of cancer and unknown or advanced stage at diagnosis in Belgium. Further investigation is needed to build a comprehensive picture of socio-economic inequality in cancer incidence. abstract_id: PUBMED:26311704 Factors associated with GPs' knowledge of their patients' socio-economic circumstances: a multilevel analysis. Background: To determine appropriate management for individual patients, GPs are supposed to use their knowledge of the patient's socio-economic circumstances. Objective: To analyse factors associated with GPs' knowledge of these circumstances. Methods: Observational survey of GPs who were internship supervisors in the Paris metropolitan area. Each of 52 volunteer GPs completed a self-administered questionnaire about their own characteristics and randomly selected 70 patients from their patient list. Their knowledge was analysed as the agreement between the patients' and GPs' responses to questions about the patients' socio-economic characteristics in questionnaires completed by both groups.
The association between agreement and the GPs' characteristics was analysed with a multilevel model adjusted for age, sex and the duration of the GP-patient relationship. Results: Agreement varied according to the socio-economic characteristics considered (from 51% to 90%) and between GPs. Globally, the GPs overestimated their patients' socio-economic level. GP characteristics associated with better agreement were sex (female), long consultations, the use of paper records or an automatic reminder system and participation in continuing medical education and in meetings to discuss difficult cases. Conclusion: Knowledge of some patient characteristics, such as their complementary health insurance coverage or perceived financial situation, should be improved because their overestimation may lead to care that is too expensive and thus result in the patients' abandonment of the treatment. Besides determining ways to help GPs to organize their work more effectively, it is important to study methods to help doctors identify their patients' social-economic circumstances more accurately in daily practice. Answer: Yes, the physician's adherence to prescription guidelines is associated with the patient's socio-economic position. Studies have shown that socio-economic and demographic characteristics of the patient can influence a physician's decision to prescribe recommended medications. For instance, in South Sweden, it was found that the proportion of patients receiving recommended statins was lower among privately administered healthcare practices (HCPs) than among publicly administered HCPs, and among men with low income, adherence to guidelines was higher (PUBMED:19692716). Additionally, factors such as the patient's insurance status, frequency of clinic visits, and the presence of a recent Pap smear were associated with the likelihood of receiving a physician recommendation for mammography, which is an indicator of guideline adherence (PUBMED:10090868). Moreover, patient adherence to medication is also affected by socio-economic status, with individuals living in rural areas and having lower income levels more likely to use non-prescription medicine, indicating potential barriers to accessing prescribed medications (PUBMED:31732873). Furthermore, a study in Belgium found that socio-economic position was associated with cancer incidence and stage at diagnosis, with deprived groups showing higher risks for certain cancers and reduced chances of being diagnosed at an early stage (PUBMED:33668089). Physician adherence to dyslipidemia guidelines was found to be a challenge, with low rates of adherence for lipid profile screening, pharmacotherapy up-titration, and initiation. Physician adherence was positively correlated with factors such as frequent patient visits and treating a higher number of dyslipidemic patients (PUBMED:21566005). Additionally, GPs' knowledge of their patients' socio-economic circumstances was associated with their own characteristics, such as sex, consultation length, and participation in continuing medical education (PUBMED:26311704). Overall, these findings suggest that both patient and physician characteristics, including socio-economic factors, play a role in the adherence to prescription guidelines.
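Editor's note: the Mokken scaling study cited above reports which assets form a scale and how the resulting scores compare with principal component analysis, but not the statistic that drives those decisions. For dichotomous items, Mokken scaling is conventionally summarized by Loevinger's scalability coefficient H, equal to 1 minus the ratio of observed to expected Guttman errors over all item pairs, with H of at least 0.3 as the usual minimum for a scale. The Python sketch below is illustrative only and is not code from the cited study; the function name, the simulated asset data and the threshold convention are editorial assumptions.

    import numpy as np

    def loevinger_h(scores):
        # Loevinger's H for binary item scores (n_respondents x n_items):
        # H = 1 - observed Guttman errors / errors expected under
        # marginal independence, summed over all item pairs.
        X = np.asarray(scores, dtype=int)
        n, k = X.shape
        p = X.mean(axis=0)  # proportion of respondents owning each asset
        observed = 0.0
        expected = 0.0
        for i in range(k):
            for j in range(i + 1, k):
                # Orient each pair so 'easy' is the more commonly owned asset
                easy, hard = (i, j) if p[i] >= p[j] else (j, i)
                # Guttman error: owning the rarer asset but not the commoner one
                observed += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
                expected += n * (1.0 - p[easy]) * p[hard]
        return 1.0 - observed / expected

    # Toy demonstration: ownership driven by a latent wealth score
    rng = np.random.default_rng(0)
    wealth = rng.normal(size=500)
    cutoffs = np.linspace(-1.5, 1.5, 10)  # cheap assets owned by most respondents
    assets = (wealth[:, None] + 0.5 * rng.normal(size=(500, 10))) > cutoffs
    print(f"H = {loevinger_h(assets):.2f}")

By Mokken's rule of thumb, H below 0.3 marks an unscalable item set, 0.3-0.4 a weak scale, 0.4-0.5 a medium scale and 0.5 or above a strong scale; item-level versions of the same ratio guide which candidate items to drop, a plausible route from the 15 candidate asset questions to the 10-item scale reported above.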
Instruction: Arteriolar constriction in mild-to-moderate essential hypertension: an old concept requiring reconsideration? Abstracts: abstract_id: PUBMED:9211176 Arteriolar constriction in mild-to-moderate essential hypertension: an old concept requiring reconsideration? Objective: To investigate differences between in-vivo properties of a vascular bed in hypertensive patients and normotensive controls. Design: Despite the controversy about the origin of essential hypertension and its accompanying vascular changes, it is generally assumed that the characteristic increase in peripheral resistance when hypertension progresses is caused by arteriolar constriction. Yet, there is little experimental evidence that this assumption generally holds in vivo. Methods: A non-invasive technique was used for studying properties of the complete vascular bed of an upper arm segment under an occluding cuff in 23 previously untreated hypertensive patients and their matched normotensive controls. The method used the segment's electrical impedance to assess the volumes of extravascular fluid and of arterial and venous blood under varying arterial transmural pressures. Results: Compared with that of matched normotensive controls, the compliance of the large arteries of the vascular bed was on average 50.9% lower (P < 0.001) in the hypertensive patients. The compliance of the complete arterial bed at the operating blood pressure level was also lower (40.0%, P < 0.01), but appeared to be significantly higher (45.9%, P < 0.05) at the normotensive blood pressure level. On the venous side, the patients had a higher blood volume (60.0%, P < 0.01) and an increased myogenic response (68.5%, P < 0.05). Conclusions: The increase in vascular resistance in the hypertensive patients is due primarily to changes in the large and small vessels of the arterial bed. We found no evidence for a generally increased arteriolar constriction. abstract_id: PUBMED:966388 Positive feedback hypothesis on development of essential hypertension. A hypothesis is presented on a possible mechanism of development of the essential hypertension. The present theoretical consideration with Laplace's law and Poiseuille's law indicates that arteriolar constriction increases, unless the blood flow is reduced, both arterial blood pressure and vascular circumferential wall tension which is considered a trigger of medial hypertrophy of the constricted arteriole. If the medial hypertrophy aggravates the vascular constriction, it all the more increases arterial pressure and wall tension. Therefore, the initial arteriolar constriction, however slight, may progressively produce hypertension and augment medial hypertrophy by such a positive feedback mechanism. abstract_id: PUBMED:9859740 Effects of endothelin and endothelin receptor antagonism in arteriolar and venolar microcirculation. Background: Endothelin-1 (ET-1) is a potent endogenous vasoconstrictor potentially involved in several cardiovascular diseases. The effect of ET-1 and the selective ETA receptor antagonist LU 135252 on skeletal muscle microcirculation in hypertensive rats was investigated. Methods: The cremaster muscle of anaesthetised spontaneously hypertensive rats was superfused with 10⁻⁸ M ET-1 with and without pre-treatment with LU 135252 10 and 30 mg/kg i.v. Vascular diameters were measured microscopically, recorded on videotape and quantified off-line.
Results: Superfusion with ET-1 led to a pronounced arteriolar constriction, which was stronger in the smaller arterioles (A1: 45%, A4: 90%). Venolar vasoconstriction was much less pronounced and independent of the vessel size (V1-V4: approx. 25%). LU 135252 (10 and 30 mg/kg i.v.) was able to block arteriolar vasoconstriction dose-dependently, most pronouncedly so in the smallest arterioles. Venolar vasoconstriction was only antagonised by the higher dose. During the 30-minute observation period cardiovascular parameters were not changed significantly with either dose of LU 135252. Conclusion: Selective ETA receptor blockade in hypertensive rats reduced ET-1-induced arteriolar vasoconstriction in resistance arterioles to a much higher degree than venolar constriction. As elevated ET-1 levels are seen in patients with primary hypertension, this new therapeutic principle may have promising clinical potential to treat hypertension by reducing peripheral arterial resistance. abstract_id: PUBMED:144465 Hemodynamics of hypertension and of hypertensive heart disease. Therefore, one may look on the hemodynamic changes in essential hypertension as a continuum of greater degrees of peripheral vasoconstriction. In labile hypertension, the mild arteriolar constriction only slightly increases vascular resistance (the "inappropriately" normal level), and the mild venoconstriction serves to redistribute blood to the heart and lungs, thereby augmenting or increasing cardiac output to its increased level. Thus, in this schema it is possible to find mildly hypertensive patients, or even some subjects with labile hypertension, with a normal cardiac output, especially if their circulating volume is already reduced. With advancing disease, arteriolar constriction increases, providing further increases in total peripheral resistance and stress upon the heart, and the coincident venular constriction reduces circulating plasma volume, venous return, and cardiac output. abstract_id: PUBMED:3364468 Efferent glomerular arteriolar constriction: a possible intrarenal hemodynamic defect in hypertension. A variety of mechanisms involving the kidney subserve the control of arterial pressure and the development and maintenance of hypertension. The precise and direct delineation of intrarenal hemodynamic mechanisms has been possible only by micropuncture techniques. Since these methods can be used only in the anesthetized animal, intrarenal hemodynamic assessment in conscious intact experimental animals or patients with essential hypertension must be indirect. Using indirect methods, calculated pressures in our laboratory have demonstrated differences in intrarenal hemodynamics between SHR and normotensive WKY rats, notably enhanced responsiveness of the efferent arteriole to alpha adrenergic agonist stimulation. When the calcium antagonist diltiazem was administered to the SHR or to patients with essential hypertension, it effected an increased renal blood flow and a well-maintained glomerular filtration rate without hyperfiltration. These indirect data suggest that there may be an efferent arteriolar abnormality in genetically mediated hypertension that may be reversed with certain calcium antagonists.
abstract_id: PUBMED:21395023 Arteriolar circulation and arterial pressure in patients with essential hypertension. Arteriolar circulation and its relationship with arterial pressure were studied in 127 patients with essential hypertension and 63 healthy subjects using continuous US dopplerography of the microcirculatory system at rest. Linear arteriolar blood flow rate was measured in healthy subjects and 47 patients before and after primary indapamide therapy. It was found to equal 17.7 (mean 9.1) cm/s in systole and 8.7 cm/s in diastole in the patients versus 12.2 (7.3) and 4.3 respectively in the healthy subjects (p = 0.005), giving indirect evidence of narrowed diameter of arterioles. Arteriolar circulation was shown to depend on the duration of arterial hypertension. Both AP and linear arteriolar blood flow rate decreased under the effect of hypotensive therapy but there was no significant correlation between the two parameters. abstract_id: PUBMED:6204162 Forearm vasoconstrictor response to ouabain: studies in patients with mild and moderate essential hypertension. The dependence of arteriolar vasoconstrictor tone on the activity of Na+,K+-ATPase was studied in 17 normotensive subjects (NT) and 28 patients with essential hypertension (EH) by measuring the response in forearm blood flow and vascular resistance to intraarterial infusion of the Na+,K+-ATPase inhibitor ouabain. Basal plasma aldosterone concentrations were elevated in mild (n = 14) and moderate (n = 14) EH patients as compared with NT (p less than 0.001 for both) and correlated positively with intraarterial diastolic blood pressure (r = 0.551, p less than 0.001). Ouabain in incremental doses of 0.4-16 micrograms/100 ml tissue induced a vasoconstrictor response in the forearm with a maximal effect at 8 micrograms/100 ml tissue, which was not associated with an increase in regional noradrenaline release. The vasoconstrictor response to ouabain, 8 micrograms/100 ml, expressed as the percentage change in the ratio of forearm vascular resistance (FR) on the experimental side versus the control side, was 29.6 ± 6.8% in NT (p less than 0.001); 51.9 ± 8.4% in mild EH (p less than 0.001), and 36.0 ± 12.7% in moderate EH (p less than 0.05). The vasoconstrictor response was greater (p less than 0.05) in patients with mild EH than in NT but did not differ in NT and patients with moderate EH. Vascular reactivity in the forearm, as assessed by intraarterial infusions of noradrenaline (3, 8, and 20 ng/min/100 ml tissue), did not differ in NT (n = 9) and patients with mild (n = 9) or moderate (n = 9) EH. The results suggest increased activity of arteriolar Na+,K+-ATPase in mild EH, which may represent a mechanism counteracting an accumulation of intracellular sodium and thereby an early increase in forearm vascular resistance (FR). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:23817492 miR-30a regulates endothelial tip cell formation and arteriolar branching. Microvascular rarefaction increases vascular resistance and pressure in systemic arteries and is a hallmark of fixed essential hypertension. Preventing rarefaction by activation of angiogenic processes could lower blood pressure. Endothelial tip cells in angiogenic sprouts direct branching of microvascular networks; the process is regulated by microRNAs, particularly the miR-30 family. We investigated the contribution of miR-30 family members in arteriolar branching morphogenesis via delta-like 4 (Dll4)-Notch signaling in a zebrafish model.
The miR-30 family consists of 5 members (miR-30a-e). Loss-of-function experiments showed that only miR-30a reduced growth of intersegmental arterioles involving impaired tip cell function. Overexpression of miR-30a stimulated tip cell behavior resulting in augmented branching of intersegmental arterioles. In vitro and in vivo reporter assays showed that miR-30a directly targets the Notch ligand Dll4, a key inhibitor of tip cell formation. Coadministration of a Dll4 targeting morpholino in miR-30a morphants rescued the branching defects. Conversely, conditional overexpression of Notch intracellular domain restored arteriolar branching in miR-30a gain-of-function embryos. In human endothelial cells, loss of miR-30a increased DLL4 protein levels, activated Notch signaling as indicated in Notch reporter assays, and augmented Notch downstream effector, HEY2 and EFNB2 (ephrin-B2), expression. In spheroid assays, miR-30a loss- and gain-of-function affected tip cell behavior, consistent with miR-30a targeting Dll4. Our data suggest that miR-30a stimulates arteriolar branching by downregulating endothelial Dll4 expression, thereby controlling endothelial tip cell behavior. These findings could have relevance to the rarefaction process and, therefore, to hypertension. abstract_id: PUBMED:12545841 The relationship between arteriolar pathological changes and brain hemorrhage in primary hypertension cases. This study was intended to elucidate the relationship between arteriolar pathological changes and encephalorrhagia in the cases of primary hypertension. Gross anatomy and histology of brain were observed in autopsy of 45 cases of primary hypertension. Forty cases of arteriolar hyalinization including 26 cases of brain hemorrhage were found in those 45 cases of primary hypertension. The hyalinized arterioles were divided into two types. One is characterized by the narrow lumen of terminal arterioles, the other by the dilated lumen of proximal arterioles. Hemorrhage always occurred in the latter type. Besides these changes, 3 cases of acute necrosis and 5 cases of atherosclerosis in arterioles were related to hemorrhage. These data suggest that brain hemorrhage of primary hypertension mainly occurs in dilated and hyalinized arterioles, and some cases are probably related to acute necrosis or atherosclerotic arterioles.
Answer: The concept of arteriolar constriction in mild-to-moderate essential hypertension has been a longstanding one, with the assumption that the characteristic increase in peripheral resistance as hypertension progresses is due to arteriolar constriction. However, this concept may require reconsideration based on various studies. One study using a non-invasive technique to assess the properties of the vascular bed in hypertensive patients found no evidence for a generally increased arteriolar constriction. Instead, the increase in vascular resistance in hypertensive patients was primarily due to changes in the large and small vessels of the arterial bed, with a significant reduction in the compliance of the large arteries and the complete arterial bed at the operating blood pressure level. On the venous side, hypertensive patients had a higher blood volume and an increased myogenic response (PUBMED:9211176). Another study suggested that initial arteriolar constriction could progressively produce hypertension and augment medial hypertrophy through a positive feedback mechanism, as indicated by theoretical considerations involving Laplace's law and Poiseuille's law (PUBMED:966388). Research on the effects of endothelin and endothelin receptor antagonism in arteriolar and venolar microcirculation in hypertensive rats showed that superfusion with endothelin-1 led to pronounced arteriolar constriction, which was more pronounced in smaller arterioles. Selective ETA receptor blockade reduced ET-1 induced arteriolar vasoconstriction, suggesting a potential therapeutic approach to treat hypertension by reducing peripheral arterial resistance (PUBMED:9859740). Other studies have indicated that arteriolar constriction may vary with the stage of hypertension. For example, in labile hypertension, mild arteriolar constriction only slightly increases vascular resistance, but with advancing disease, arteriolar constriction increases, providing further increases in total peripheral resistance (PUBMED:144465). Additionally, intrarenal hemodynamic assessment suggested an efferent arteriolar abnormality in genetically mediated hypertension that may be reversed with certain calcium antagonists (PUBMED:3364468). In summary, while arteriolar constriction has been traditionally associated with increased peripheral resistance in hypertension, recent evidence suggests that the concept may need to be reconsidered, as other factors such as compliance of large arteries and changes in the arterial bed, as well as the stage of hypertension, may play significant roles.
Instruction: Triple incision to treat phimosis in children: an alternative to circumcision? Abstracts: abstract_id: PUBMED:16970157 Sleeve circumcision and preputioplasty with modified incision. Objective: To evaluate the sleeve circumcision and preputioplasty with modified incision for the treatment of patients with phimosis or redundant prepuce. Methods: Five hundred and seventy-six patients with phimosis or redundant prepuce underwent operations of sleeve circumcision or preputioplasty with modified incision. The conventional incision was modified and changed into two opposite tortuous incisions. Results: The operation with modified incision had the following advantages: less blood loss, slight postoperative edema, no secondary bleeding or infection, quick recovery and good appearance of the penis. Conclusion: The sleeve circumcision and preputioplasty with modified incision is an excellent therapeutic option for phimosis and redundant prepuce. abstract_id: PUBMED:12930440 Triple incision to treat phimosis in children: an alternative to circumcision? Objective: To evaluate the functional and cosmetic results and patient satisfaction after triple incision plasty for phimosis in children. Patients And Methods: The study included 197 boys who had a triple incision for phimosis (mean age 5.8 years, range 0.25-18). The indications for preputial surgery were recurrent balanoposthitis, ballooning during micturition and severe phimotic stenosis. The results after surgery were assessed using a questionnaire about the child's/parent's satisfaction, and an outpatient follow-up examination for functional and cosmetic preputial appearance. Results: Of 128 parents/children responding, 108 (84%) were satisfied with the function and 102 (80%) reported a good cosmetic outcome. Triple incision as preputioplasty would be recommended to other parents by 119 (93%) respondents. Ninety-one (71%) of the parents feared disadvantages in their son's later life if the child had been circumcised. The outpatient examination showed an excellent functional and cosmetic outcome in 71 (77%) of the children. Conclusion: Triple incision is a simple, fast and safe technique for preputial relief, with good functional and cosmetic results, and was well accepted by the patients. abstract_id: PUBMED:34898085 Comparison of the disposable circumcision stapler, disposable prepuce ligator and traditional surgical method in circumcision. Objective: To compare the effects and complications of the disposable circumcision stapler, disposable prepuce ligator and traditional surgical method in circumcision. Methods: This retrospective study included 327 cases of phimosis or redundant prepuce treated by circumcision with the disposable circumcision stapler (the DCS group, n = 133), disposable prepuce ligator (the DPL group, n = 105) or traditional surgical method (the TS group, n = 89) in our hospital from June 2019 to June 2020. We compared the three surgical methods in terms of operation time, intraoperative blood loss, pain score, satisfaction of the patients with the penile appearance and incidence rates of incision edema, hematoma, infection and dehiscence. Results: The DCS and DPL groups, compared with the TS group, showed significantly shorter operation time ([9.72 ± 2.17] and [10.57 ± 2.31] vs [36.13 ± 6.85] min, P < 0.01), less intraoperative blood loss ([2.07 ± 0.96] and [2.53 ± 1.46] vs [14.33 ± 4.92] ml, P < 0.01) and higher appearance satisfaction score (4.07 ± 0.80 and 3.93 ± 0.96 vs 3.13 ± 1.06, P < 0.05).
The DCS and TS groups, in comparison with the DPL group, exhibited markedly lower pain scores (1.87 ± 0.99 and 2.27 ± 1.16 vs 3.87 ± 1.30, P < 0.01) and lower rates of postoperative incision hematoma (3.01% and 2.25% vs 9.52%, P < 0.05), and infection and dehiscence (2.45% and 2.04% vs 8.07%, P < 0.05). The postoperative rate of incision edema was remarkably lower in the DCS than in the DPL and TS groups (10.2% vs 20.2% and 23.5%, P < 0.05). Conclusions: Circumcision with the disposable circumcision stapler, with the advantages of simple operation, short operation time, less bleeding, less pain, satisfactory appearance, and lower incidence of complications, deserves clinical application and promotion. abstract_id: PUBMED:18466267 Complications of self-circumcision: a case report and proposal. Introduction: Male circumcision is a common surgical technique that has been performed worldwide for thousands of years for medical, social, cultural, and religious reasons. It is usually conducted in childhood in a clinical setting, but the practice of adult self-circumcision has led to a market for nonmedically approved self-circumcision devices that can be purchased via the Internet. Aims: The aims of this report are to report the case of a 30-year-old white man who suffered complications after trying to perform a self-circumcision with a nonmedically approved device purchased via the Internet, and to propose that urologists should take the lead in investigating the problem of male self-circumcision. Methods: This case report documents the presentation and treatment of an attempted self-circumcision. Results: The attempted self-circumcision was carried out without local anaesthetic and resulted in an incision in the foreskin. The patient presented with uncontrollable local bleeding 2 days after carrying out the procedure. Although questioned as to why he had attempted self-circumcision, the patient was reluctant and/or unable to explain his reasons. Daily local wound care and topical antibiotics resulted in complete wound healing after 2 months, and a clinical clamp circumcision was conducted to treat the remaining severe phimosis. Conclusion: Data on the prevalence and outcomes associated with the use of self-circumcision devices are few. The clinicians who treat the complications are best placed to collect data on self-circumcision and should publish case studies. Eventually there may be sufficient understanding of the sector of the population at risk from this practice to educate those likely to attempt self-circumcision, and enough evidence of harm for controls to be placed on the sale of these nonmedically approved devices via the Internet. abstract_id: PUBMED:35246834 A Modified Disposable Circumcision Suture Device with Application of Plastic Sheet to Avoid Severe Bleeding After Circumcision. Purpose: To evaluate the effectiveness of a modified disposable circumcision suture device (DCSD) with application of plastic sheet to avoid severe bleeding after circumcision and compare the surgical effects and other postoperative complications of two DCSDs. Materials And Methods: A total of 943 patients with excess foreskin who underwent circumcision using two different DCSDs between January 2018 and January 2020 were recruited.
Preoperative characteristics (patient age, height and weight), main surgical outcomes (surgical time, intraoperative blood loss, incision healing time) and postoperative complications (postoperative hemorrhage and hematoma rate, edema rate, incision infection rate, residual staples rate) were collected and analyzed. Patients' "satisfaction" or "dissatisfaction" was also investigated. Results: Preoperative characteristics showed no significant statistical difference. The modified DCSD group had lower intraoperative bleeding, a lower postoperative hemorrhage or hematoma rate and a lower residual staples rate compared with the conventional group. Incision healing time and incision infection rate between the two groups were similar. Nevertheless, the conventional group had a shorter surgical time, a lower edema rate and a higher satisfaction rate. Conclusion: The modified DCSD with application of plastic sheet can avoid severe bleeding after circumcision effectively and can serve as a new choice for circumcision. abstract_id: PUBMED:15008747 Triple incision to treat phimosis in children: an alternative to circumcision. N/A abstract_id: PUBMED:15008749 Triple incision to treat phimosis in children: an alternative to circumcision. N/A abstract_id: PUBMED:23042449 The ultrasonic harmonic scalpel for circumcision: experimental evaluation using dogs. Male circumcision is one of the most commonly performed operations worldwide, and many novel techniques have been developed for better postoperative outcomes. The purpose of this study was to explore the feasibility of applying the ultracision harmonic scalpel (UHS) for circumcision by using dogs. Sixteen adult male dogs were divided into two groups: the UHS group and the control group. The dogs were circumcised with either the UHS or a conventional scalpel. The UHS circumcision procedure and the effects were imaged 1 week after surgery. The two groups were compared with respect to the operative time and volume of blood loss. Postoperative complications, including oedema, infection, bleeding of the incision and wound dehiscence, were recorded for both groups. The mean operative time for the UHS group was only 5.1 min compared with the 35.5 min of the conventional group. The mean blood loss was less than 2 ml for the UHS group and 15 ml for the conventional group. There was only one case of mild oedema in the UHS group, but the postoperative complications in the conventional group included two cases of mild oedema, one infection of the incision and one case of bleeding of the incision. In conclusion, circumcision using UHS is a novel technique to treat patients with phimosis and excessive foreskin, and this method has a short operative time, less blood loss and fewer complications than the conventional scalpel method. This small animal study provides a basis for embarking on a larger-scale clinical trial of the UHS. abstract_id: PUBMED:29278467 Shang Ring scissor circumcision versus electrotome circumcision for redundant prepuce. Objective: To compare the clinical effects of Shang Ring scissor circumcision (SC) and electrotome circumcision (EC) in the treatment of redundant prepuce or phimosis. Methods: This retrospective study included 524 patients with redundant prepuce or phimosis, 422 treated by SC and 120 by EC.
We made comparisons between the two groups of patients in the operation time, intra- and post-operative pain scores, pain scores before, at and after ring removal, wound healing time, and incidence rates of postoperative edema and incision dehiscence. Results: The operation time was longer in the SC than in the EC group ([59.99±5.39] vs [39.94±4.94] sec, P<0.05), but there were no significant differences between the two groups in the intraoperative pain scores (1.02±0.74 vs 1.08±0.59, P>0.05) or the pain scores within 24 h after operation (6.74±1.01 vs 6.56±1.06, P>0.05), 24 h prior to ring removal (1.14±0.69 vs 1.10±0.64, P>0.05), and after ring removal (2.73±0.74 vs 2.85±0.75, P>0.05) except at ring removal, which was remarkably lower in the SC than in the EC group (3.56±0.47 vs 4.77±0.58, P<0.05). The wound healing time was markedly shorter in the former than in the latter ([14.11±1.26] vs [39.78±7.55] d, P<0.05), but the incidence rate of incision dehiscence showed no significant difference between the two groups (4.03% [17/422] vs 9.17% [11/120], P>0.05). The rate of postoperative satisfaction with the external penile appearance was 100% in both of the two groups. Conclusions: Shang Ring scissor circumcision is preferred to electrotome circumcision for its advantages of less pain at ring removal and shorter healing time despite its longer operation time. abstract_id: PUBMED:37477451 Clinical observation of pre-suture ligation and suture knot positioning in single-operator circumcision with the stapler for children. Objective: To observe the clinical effects of pre-suture ligation and suture knot positioning in single-operator circumcision with the stapler. Methods: A total of 120 children aged six to fourteen years with phimosis or redundant prepuce were equally and randomly assigned to receive traditional single-operator circumcision with the stapler (group 1), single-operator circumcision with double suture knots for positioning the cutting plane with the stapler (group 2), or pre-suture ligation plus single-operator suture knot positioning circumcision with the stapler (group 3). We recorded and comparatively analyzed the operation time, intraoperative blood loss and hematoma, the number of residual suture knots, the patients' satisfaction with foreskin reservation, sutured frenulum and incision aesthetics, and the rates of surgical conversion, severe postoperative dysuria, severe wound exudation and perioperative anxiety of the parents and surgeon. Results: Statistically significant differences were observed among the three groups in the operation time, intraoperative blood loss, incidence of intraoperative hematoma, number of residual suture knots, the patients' satisfaction with foreskin reservation, sutured frenulum and incision aesthetics, and the rates of surgical conversion, severe postoperative dysuria, severe wound exudation and perioperative anxiety of the parents and surgeon (P < 0.05). Compared with group 1, group 3 showed obviously less operation time and intraoperative blood loss and lower incidence of intraoperative hematoma, number of residual suture knots and rates of surgical conversion and perioperative anxiety of the parents and surgeons, and higher rates of patients' satisfaction with foreskin reservation, sutured frenulum and incision aesthetics.
The intraoperative blood loss, incidence of intraoperative hematoma, and the rates of severe postoperative dysuria, severe wound exudation and perioperative anxiety of the parents and surgeon were lower in group 3 than in group 2. Conclusion: Pre-suture ligation plus single-operator suture knot positioning circumcision with the stapler helps reduce the operation time, intraoperative blood loss, incidence of intraoperative hematoma, number of residual suture knots and rates of surgical conversion and perioperative anxiety of the parents and surgeon, and increase the patients' satisfaction with foreskin reservation, sutured frenulum and incision aesthetics. Answer: Yes, the triple incision technique is presented as an alternative to circumcision for treating phimosis in children. According to the study that included 197 boys with phimosis, the triple incision plasty showed good functional and cosmetic results, with high patient satisfaction. Of the parents and children who responded to the study's questionnaire, 84% were satisfied with the function and 80% reported a good cosmetic outcome. Moreover, 93% of respondents would recommend triple incision as preputioplasty to other parents. The outpatient examination showed an excellent functional and cosmetic outcome in 77% of the children. The study suggests that triple incision is a simple, fast, and safe technique for preputial relief, and it was well accepted by the patients (PUBMED:12930440).
Instruction: Ureterovascular hydronephrosis in children: is pyeloplasty always necessary? Abstracts: abstract_id: PUBMED:10364659 Ureterovascular hydronephrosis in children: is pyeloplasty always necessary? Objective: To evaluate the efficacy of the vessel transposition technique in ureterovascular hydronephrosis in children. Methods: Over a 25-year period, we treated 111 patients with 112 instances of ureterovascular hydronephrosis. In order to determine the obstructive effect of the vessels, we performed an intraoperative diuretic test. Using this approach, 61 patients judged to have only vascular pyeloureteral junction obstruction underwent vessel transposition. However, 50 patients in whom the intraoperative diuretic test proved doubtful needed pyeloplasty. Results: Surgical success was achieved in 98% of the patients. Only 1 child treated by vessel transposition had an unsatisfactory outcome which necessitated a subsequent pyeloplasty for persistent hydronephrosis. This was due to a previously unrecognized intrinsic pyeloureteral junction obstruction. Conclusion: Based on our clinical experience, the intraoperative diuretic test has proven to be a safe and effective diagnostic tool in children with ureterovascular hydronephrosis. Its use may contribute to treating some cases of ureterovascular hydronephrosis without resorting to pyeloplasty. abstract_id: PUBMED:18721979 Further experience with the vascular hitch (laparoscopic transposition of lower pole crossing vessels): an alternate treatment for pediatric ureterovascular ureteropelvic junction obstruction. Purpose: Standard treatment for ureterovascular ureteropelvic junction obstruction has been dismembered pyeloplasty. We previously reported the alternative technique of laparoscopic transposition of lower pole vessels (the vascular hitch) in pediatric patients. This report is an update of this select group of pediatric patients with intermediate followup. Materials And Methods: Patients underwent diagnostic renal sonography and (99m)technetium-mercaptoacetyltriglycine diuretic renography with additional magnetic resonance angiography in candidate patients. Radiographic criteria included moderate hydronephrosis with no caliceal dilatation and a well preserved cortex, poor renal drainage with preserved split function and lower pole crossing vessels. Intraoperative criteria included a normal ureter and ureteropelvic junction with peristalsis. Postoperatively patients were followed clinically, and with renal sonography and (99m)technetium-mercaptoacetyltriglycine renography at 1 and 2 months, respectively. Success was defined as symptom resolution with radiographic improvement in hydronephrosis and drainage with preserved renal function. Results: Nine boys and 11 girls 7 to 16 years old (mean age 12.5) underwent laparoscopic transposition of crossing vessels, including 3 with da Vinci robot assistance. Mean operative time was 90 minutes (range 47 to 140). Median hospital stay was 24 hours. No ureteral stents or urethral catheters were placed intraoperatively. At a mean followup of 22 months (range 12 to 42) 19 of 20 patients (95%) had been successfully treated. One patient who had recurrent pain underwent successful laparoscopic pyeloplasty. Conclusions: At intermediate followup the laparoscopic vascular hitch procedure has been successful in treating patients with ureterovascular ureteropelvic junction obstruction. In these select patients this technique offers a feasible and durable alternative to standard dismembered pyeloplasty. 
Ongoing evaluation continues to ensure that the promising results endure. abstract_id: PUBMED:9334581 Impact of 3-dimensional helical computerized tomography on selection of operative methods for ureteropelvic junction obstruction. Purpose: We studied the feasibility of imaging the direct correlation between crossing vessels and obstructed ureteropelvic junction with helical (spiral) computerized tomography (CT) for selecting surgical repair of symptomatic ureteropelvic junction obstruction. Materials And Methods: From July 1995 to December 1995, 4 select patients with symptomatic ureteropelvic junction obstruction underwent contrast enhanced helical CT. In addition to transaxial images, 3-dimensional reformatted images were used for evaluation. Results: We identified 2 cases of ureteropelvic junction obstruction due to crossing vessels regarded as ureterovascular hydronephrosis, which is characterized by the spatial relationship between malrotated renal pelvis and anterior crossing vessels. Laparoscopic or open repair was performed in these 2 patients and operative findings were in agreement with prospective helical CT interpretation. Antegrade endopyelotomy was performed successfully for the remaining 2 patients. Conclusions: The 3-dimensional helical CT is reliable in detecting ureterovascular hydronephrosis preoperatively and in presenting better operative methods for ureteropelvic junction obstruction. abstract_id: PUBMED:10360453 A variation and ureterovascular hydronephrosis. N/A abstract_id: PUBMED:7176065 Ureterovascular hydronephrosis and the "aberrant" renal vessels. The pelvis, an angulated upper segment of the ureter and the lower anterior renal segmental vessels entangle to produce hydronephrosis. However, which of the 3 structures provokes obstruction is conjectural. The structural relations in this anomaly were compared to those of normal kidneys, hydronephroses from other causes and nonrotated kidneys. This anomaly was unique in that the pelvis and ureteropelvic junction bulged over the lower hilar segmental vessels instead of under as in other forms of hydronephrosis. Transient or permanent defects of medial rotation of the renal pelvis may account for the vulnerability of the ureteropelvic junction to obstruction by the lower anterior segmental branch of the renal artery, which was not aberrant in all the examples studied. abstract_id: PUBMED:17303016 Is it always necessary to treat a ureteropelvic junction syndrome? The term ureteropelvic junction (UPJ) obstruction covers different morbid entities, and the old aphorism, "A UPJ is not a UPJ" remains true. Hydronephrosis is readily seen on antenatal ultrasonography but does not necessarily imply obstruction. Although most cases will resolve spontaneously, the probability of a significant pathology is related to the degree of pyelectasis, as seen on the third trimester study. Criteria of obstruction are difficult to define with precision, but two that are well-accepted are size of the renal pelvis (> 15 mm) and relative renal function, as determined by adequate isotopic studies. A new therapeutic standard has been established, and minimally invasive surgery has finally dethroned its open rival. Possibly facilitated by robotic assistance, laparoscopic dismembered pyeloplasty is the present gold standard, albeit endopyelotomy remains the least invasive with similar results in carefully selected patients.
abstract_id: PUBMED:11125355 Retroperitoneoscopic pyelotomy combined with the transposition of crossing vessels for ureteropelvic junction obstruction. Purpose: We developed a new approach of retroperitoneoscopic pyelotomy combined with the transposition of crossing vessels for ureteropelvic junction obstruction as an alternative to conventional antegrade or retrograde endopyelotomy. Materials And Methods: From February 1997 to August 1999 we treated 5 cases of ureteropelvic junction obstruction due to crossing vessels that were diagnosed by helical computerized tomography. Ureterovascular hydronephrosis characterized by a malrotated renal pelvis with anterior crossing vessels was observed in 4 cases and ureteropelvic junction obstruction with a posterior crossing artery was present in 1. After endoureterotomy stent insertion under cystoscopic guidance we performed retroperitoneoscopic endopyelotomy with the kidney in standard position. Crossing vessels were transposed to a higher position to remove obstruction and fixed with peripelvic tissue via retroperitoneoscopy. In all cases a longitudinal incision approximately 1.5 cm. long was made with a potassium titanyl phosphate laser. Results: Convalescence was uneventful in all patients and the endoureterotomy stent was removed 4 to 8 weeks after surgery. Postoperatively helical computerized tomography showed the successful transposition of crossing vessels and significant hydronephrosis resolution in all cases. All patients were asymptomatic during followup of 17 to 28 months. Conclusions: Despite our small number of patients our results are sufficient to conclude that retroperitoneoscopic pyelotomy combined with the transposition of crossing vessels is a simple and reliable method for treating ureterovascular hydronephrosis and associated conditions. abstract_id: PUBMED:9633571 The pathophysiology of UPJ obstruction. Current concepts. It took more than half of a century for urologists to recognize that hydronephrosis is not necessarily equivalent to obstruction. Keeping this important truism in mind, particularly when dealing with antenatal hydronephrosis, one must also remember that hydronephrosis is not a normal condition. It is conceivable that although the initial intrinsic stenosis or ureterovascular obstruction may not be clinically significant in terms of renal functional damage, as compensatory renal pelvic dilatation develops, secondary obstructive elements may be recruited to create an insertional anomaly and peripelvic fibrosis. The individual types of UPJ obstruction that are seen in diagnostic studies or on the operating table may represent isolated "snapshots" of evolving pathophysiologic processes. If this is true, patients with asymptomatic congenital hydronephrosis, although lacking obvious renal function loss, require long-term follow-up. abstract_id: PUBMED:28466405 Is it Always Necessary to Treat an Asymptomatic Hydronephrosis Due to Ureteropelvic Junction Obstruction? The postnatal treatment of asymptomatic unilateral hydronephrosis due to ureteropelvic junction obstruction remains controversial, and the timing of and indications for surgical intervention are continuously debated. There is no consensus on the best follow-up during expectant management. The various modalities and parameters have been discussed along with their pros and cons and an attempt has been made to clear up the controversies. 
abstract_id: PUBMED:18373095 The risk of associated urological abnormalities in children with pre and postnatal occasional diagnosis of solitary, small or ectopic kidney: is a complete urological screening always necessary? Objective: Voiding cystourethrogram (VCUG) and radionuclide scan is recommended for patients with solitary (secondary to aplasia or multicystic dysplasia), hypoplasic or single ectopic kidney, to detect associated anomalies (vesicoureteric reflux, obstructive uropathies). With the increase of occasional diagnosis, mainly by fetal ultrasound (US), the possibility of an unjustified extension of diagnostic work up must be prevented. The aim of this paper was to estimate the incidence of associated anomalies in asymptomatic cases without associated US signs of hydronephrosis. Materials And Methods: Among 158 patients examined there were 81 solitary kidneys (26 multicystic dysplasia), 27 small kidneys and 50 single ectopic kidneys; prenatal diagnosis was recorded in 86%. Incidence of associated anomalies was compared with figures resulting when symptomatic cases or with hydronephrosis were excluded. Results: Vesicoureteral reflux or obstruction were found in 17% of solitary kidneys, 70% of hypoplasic kidneys and 2% of single ectopic kidneys. Among those (120 cases) without infection or hydronephrosis, incidence decreased to 5%, 60% and 0%, respectively. Conclusions: Associated anomalies are reported to affect up to 48% of solitary kidneys and about 30% of single ectopic; 80% of severe reflux are usually associated with small kidneys. In our series of solitary and ectopic kidneys incidence of abnormalities was significantly less and fell to negligible values when occasionally detected, undilated cases were considered. On this basis, indiscriminate urological screening simply based on the occasional pre or postnatal detection of undilated solitary or ectopic kidney appears to be unjustified. Small kidneys deserve special attention and VCUG is always indicated. Answer: Based on the provided abstracts, pyeloplasty is not always necessary for treating ureterovascular hydronephrosis in children. Several studies have explored alternative treatments, such as vessel transposition and the laparoscopic vascular hitch procedure, which have shown promising results. In a study spanning 25 years, 61 patients with ureterovascular hydronephrosis underwent vessel transposition based on an intraoperative diuretic test, and surgical success was achieved in 98% of the patients. Only one child required subsequent pyeloplasty due to an unrecognized intrinsic pyeloureteral junction obstruction (PUBMED:10364659). Another study reported that the laparoscopic vascular hitch procedure was successful in 95% of the patients with ureterovascular ureteropelvic junction obstruction at an intermediate follow-up, suggesting it as a feasible and durable alternative to standard dismembered pyeloplasty (PUBMED:18721979). Additionally, 3-dimensional helical computerized tomography has been found reliable in detecting ureterovascular hydronephrosis preoperatively, which can help in selecting better operative methods for ureteropelvic junction obstruction (PUBMED:9334581). Retroperitoneoscopic pyelotomy combined with the transposition of crossing vessels has also been described as a simple and reliable method for treating ureterovascular hydronephrosis (PUBMED:11125355).
However, it is important to note that the decision to treat ureteropelvic junction obstruction, whether by pyeloplasty or alternative methods, should be based on specific criteria such as the size of the renal pelvis and relative renal function, as determined by adequate isotopic studies (PUBMED:17303016). Furthermore, the treatment of asymptomatic hydronephrosis due to ureteropelvic junction obstruction remains controversial, and there is no consensus on the best follow-up during expectant management (PUBMED:28466405). In conclusion, while pyeloplasty has been the standard treatment for ureterovascular hydronephrosis, alternative treatments like vessel transposition and laparoscopic vascular hitch have been successfully used in certain cases, indicating that pyeloplasty is not always necessary.
Instruction: Guidelines to practice gap in the use of glycoprotein IIb/IIIa inhibitors: from ISAR-REACT to overreact? Abstracts: abstract_id: PUBMED:19245382 Guidelines to practice gap in the use of glycoprotein IIb/IIIa inhibitors: from ISAR-REACT to overreact? Unlabelled: Adjunctive use of glycoprotein IIb/IIIa inhibitors (GPI) is associated with favorable outcomes following percutaneous coronary intervention (PCI). Guidelines for use of GPI have been published by various national societies including National Institute of Clinical Excellence (NICE), United Kingdom. The latter has not been updated since publication. The impact of contemporary trials such as ISAR-REACT (which showed no benefit of abciximab and 600 mg of clopidogrel compared with 600 mg of clopidogrel alone, in elective patients) on adherence to NICE guidelines is unknown. Methods: We audited use of GPI against NICE guidelines following publication in May 2002. Data were collected from 1,685 patients between September and November in years 2002, 2003, 2004, and 2007. Results: In 2002 and 2003, only 10.2% and 11.8%, respectively, of patients were noncompliant to NICE guidelines. Over time, there was an increase in patients not given GPI despite meeting NICE criteria. After publication of ISAR-REACT, the comparative figures for noncompliance in 2004 and 2007 were 40.0% and 44.5%. A similar pattern was seen in patients with diabetes; in 2002 and 2003 noncompliance was 16.7% and 11.1%, respectively, and in 2004 and 2007 noncompliance was 38.0% and 44.7%, respectively. Qualitatively, similar findings were recorded in patients with NSTE-ACS. The overall noncompliance to NICE guidelines increased from 11.0% to 42.1% (P < 0.0001) after the ISAR-REACT study. Conclusions: We found a decline in compliance to NICE guidelines on GPI usage during PCI. This was likely influenced by contemporary trials demonstrating little or no benefit of GPI in patients undergoing elective PCI who are adequately pretreated with clopidogrel. Our findings suggest the need for a mechanism whereby regular updates to guidelines can be disseminated following new trial evidence. abstract_id: PUBMED:12909070 From guidelines to clinical practice: the impact of hospital and geographical characteristics on temporal trends in the management of acute coronary syndromes. The Global Registry of Acute Coronary Events (GRACE). Aims: The extent to which hospital and geographic characteristics influence the time course of uptake of evidence from key clinical trials and practice guidelines is unknown. The gap between evidence and practice is well recognized but the factors influencing this disjunction, and the extent to which such factors are modifiable, remain uncertain. Methods And Results: Using chronological data from the GRACE registry (n=12666, July 1999 to December 2001), we test the hypothesis that hospital and geographic characteristics influence the time course of uptake of evidence-based guideline recommendations for acute coronary syndromes (ACS) with and without ST elevation. Certain therapies were widely adopted in both ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation myocardial infarction (NSTEMI) patients (aspirin >94% of all patients; beta-blockers 85-95%) and changed only modestly over time.
Significant increases in the use of low-molecular-weight heparins and glycoprotein IIb/IIIa inhibitors occurred in STEMI and NSTEMI patients in advance of published practice guidelines (September/November 2000) with marked geographical differences. The highest use of LMWH was in Europe in NSTEMI (86.8%) and the lowest in the USA (24.0%). Contrasting geographical variations were seen in the use of percutaneous coronary intervention (PCI) in NSTEMI: 39.5% USA, 34.6% Europe, 33.5% Argentina/Brazil, 25.0% Australia/New Zealand/Canada (July-December 2001). The use of PCI was more than five times greater in hospitals with an on-site catheterization laboratory compared to centres without these facilities, and geographic differences remained after correction for available facilities. Conclusions: Hospital and geographical factors appear to have a marked influence on the uptake of evidence-based therapies in ACS management. The presentation and publication of major international guidelines was not associated with a measurable change in the temporal pattern of practice. In contrast, antithrombotic and interventional therapies changed markedly over time and were profoundly influenced by hospital and geographic characteristics. abstract_id: PUBMED:17174631 A quality guarantee in acute coronary syndromes: the American College of Cardiology's Guidelines Applied in Practice program taken real-time. Background: Wide variation exists in the management of acute coronary syndromes (ACSs), which includes an apparent underutilization of evidence-based therapies. We have previously demonstrated that application of the American College of Cardiology Guidelines Applied in Practice (GAP) tools can improve quality indicator rates and outcomes of patients hospitalized with ACS. Objective: To determine whether a real-time system for monitoring key quality-of-care indicators using GAP would improve both process indicators and outcomes beyond those of the initial implementation of GAP. Design: Prospective patient identification, prospective chart coding, retrospective data abstraction. Patients: All patients with ACS admitted (N = 3189) to our institution between January 1, 1999, and December 2004; 2019 studied before real-time implementation from January 1, 1999, to June 30, 2002, and 1170 studied during real-time implementation from July 1, 2002, to December 31, 2004. Main Outcome Measure: The effect of real-time monitoring of key quality indicators on inhospital therapy and outcomes, and 6-month outcomes in patients admitted with ACS. Results: The real-time GAP implementation correlated with more frequent use of inhospital angiotensin-converting enzyme inhibitors (72.7% vs 63.7%, P < .0001), beta blockers (93.0% vs 89.7%, P = .0016), statins (81.2% vs 65.9%, P < .0001), antiplatelet agents (69.2% vs 22.5%, P < .0001), and glycoprotein IIb/IIIa inhibitors (35.5% vs 26.7%, P < .0001). There were fewer episodes of inhospital congestive heart failure (3.85% vs 8.77%, P < .0001) and major bleeding events (3.2% vs 7.9%, P < .0001) after the real-time system was adopted. Real-time GAP also resulted in higher discharge rates of aspirin (92.1% vs 86.5%, P < .0001), beta blockers (86.8% vs 79.1%, P < .0001), statins (81.2% vs 64.7%, P < .0001), and angiotensin-converting enzyme inhibitors (67.1% vs 55.5%, P < .0001).
Real-time GAP implementation was associated with fewer rehospitalizations for heart disease (19.8% vs 25.2%, P = .0014), myocardial infarction (3.5% vs 5.4%, P = .0243), and combined death/cerebrovascular accident/myocardial infarction (9.5% vs 13.9%, P = .0009) during the first 6 months after discharge. Conclusion: The institution of a formal system to review and "guarantee" key quality-of-care indicators real time in the hospital is associated with improved outcomes in patients admitted with ACS. The combination of American College of Cardiology's GAP program and its real-time implementation leads to higher use of evidence-based therapies and correspondingly better outcomes than those associated with the initial GAP implementation. abstract_id: PUBMED:18440324 Review of the 2005 American College of Cardiology, American Heart Association, and Society for Cardiovascular Interventions guidelines for adjunctive pharmacologic therapy during percutaneous coronary interventions: practical implications, new clinical data, and recommended guideline revisions. In 2006, the American College of Cardiology, American Heart Association, and Society for Cardiovascular Interventions published the 2005 update of the evidence-based guidelines for the treatment of patients undergoing percutaneous coronary intervention (PCI). Together with procedural recommendations, these guidelines for percutaneous coronary intervention provide clinicians with guidance in the appropriate use of adjunctive pharmacologic therapy in patients undergoing PCI. However, there remain substantial variations in practice among clinicians and within and across institutions. Furthermore, the guidelines (being a static document) cannot incorporate additional evidence that has accumulated since their publication. Several landmark trials, notably Intracoronary Stenting and Antithrombotic Regimen-Rapid Early Action for Coronary Treatment (ISAR-REACT 2) and Acute Catheterization and Urgent Intervention Triage strategY (ACUITY), have added substantially to the knowledge base about pharmacologic therapy since publication of the guidelines. This article is therefore intended to discuss implementation into clinical practice of the revised guidelines for antiplatelet and antithrombotic pharmacologic therapy during PCI and to evaluate recent clinical evidence and make recommendations for revision of the guidelines incorporating the outcomes of recently completed trials. abstract_id: PUBMED:15116860 Acute coronary syndromes without ST-segment elevation. From randomized clinical trials, to consensus guidelines, to clinical practice in Italy: need to close the circle. Recent therapeutic advances in the treatment of acute ischemic heart disease have been proven by randomized clinical trials and approved by formal practice guidelines. This rigorous approach has led to a sizable reduction in mortality and morbidity across the spectrum of acute coronary syndromes (ACS). However, contemporary registries of non-ST-elevation ACS set up by the cardiological community in Italy, as well as in the rest of Europe and in America, have shown only limited compliance to the general indication of treating high-risk patients by an early invasive approach protected by the use of glycoprotein IIb/IIIa receptor blockers.
This partial failure in the process of improving patient care may be attributed to several reasons, including the suspicion that practice guidelines may be biased by conflict of interest, concern about the applicability of the results of clinical trials to the real world, unrealistic expectations about treatment effects and, finally, logistic and economic obstacles including the availability of cath-labs and the high cost of platelet receptor blockers. Although the practice guidelines may provide cultural support for translating the results of clinical research into patient care, and national and local cardiological associations can help in increasing awareness of the real benefits of an early aggressive approach in high-risk patients, the health care managers should remove bureaucratic obstacles and reallocate resources from treatments of unproven benefit to those that have been clearly shown to reduce mortality and the risk of reinfarction in ACS patients. abstract_id: PUBMED:16470326 Diagnosis and management of ST elevation myocardial infarction: a review of the recent literature and practice guidelines. There is a large volume of literature available to guide the peri-infarct management of ST elevation myocardial infarction (STEMI). Most of this literature focuses on improving the availability and efficacy of reperfusion therapy. The purpose of this article is to review contemporary scientific evidence and guideline recommendations regarding the diagnosis and therapy of STEMI. Studies and epidemiological data were identified using Medline, the Cochrane Database, and an Internet search engine. Medline was searched for landmark and recent publications using the following key words: STEMI, guidelines, epidemiology, reperfusion, fibrinolytics, percutaneous coronary intervention (PCI), facilitated PCI, transfer, delay, clopidogrel, glycoprotein IIb/IIIa, low-molecular-weight heparin (LMWH), beta-blockers, nitrates, and angiotensin-converting enzyme (ACE) inhibitors. The data accessed indicate that urgent reperfusion with either fibrinolytics or percutaneous intervention should be considered for every patient having symptoms of myocardial infarction with ST segment elevation or a bundle branch block. The utility of combined mechanical and pharmacological reperfusion is currently under investigation. Ancillary treatments may utilize clopidogrel, glycoprotein IIb/IIIa inhibitors, or low molecular weight heparin, depending on the primary reperfusion strategy used. Comprehensive clinical practice guidelines incorporate much of the available contemporary evidence, and are important resources for the evidence-based management of STEMI. abstract_id: PUBMED:15945316 How to administer antithrombotic therapy in non-ST-elevation acute coronary syndromes: guidelines and clinical practice. The role of platelets and the coagulation cascade in the pathogenesis of acute coronary syndromes (ACS) without ST-segment elevation is well known, and antithrombotic therapy has become the pivotal treatment in these cases. The inhibitors of platelet activity (acetylsalicylic acid, thienopyridines, glycoprotein IIb/IIIa receptor blockers) and of thrombin activation (unfractionated or low-molecular-weight heparin) have gained a growing role in the treatment of patients with ACS without ST-segment elevation.
According to the European Society of Cardiology guidelines, all patients with ACS without ST-segment elevation should be treated with acetylsalicylic acid, clopidogrel and heparin, independently of their risk profile; platelet glycoprotein IIb/IIIa receptor inhibitors should be administered intravenously to high-risk patients undergoing an invasive procedure (coronary angiography and percutaneous coronary intervention or coronary surgery). This review on antithrombotic treatment will focus on the correct administration of these agents as suggested by the European guidelines, and will address some peculiar features related to the individual response to each drug and to the concomitant administration in selected cases. Acetylsalicylic acid resistance, clopidogrel administration before coronary angiography in high-risk patients, dose adjustment in patients with renal failure or low body weight, the use of low-molecular-weight heparin in the setting of coronary percutaneous intervention, the upstream use of platelet glycoprotein IIb/IIIa receptor inhibitors and the patient's wish are also discussed. abstract_id: PUBMED:15367174 Improved adherence to practice guidelines yields better outcome in high-risk patients with acute coronary syndrome without ST elevation: findings from nationwide FINACS studies. Objectives: Treatment options for acute coronary syndrome (ACS) without ST elevation have evolved rapidly during recent years, but the successful implementation of practice guidelines incorporating new treatments into practice has been challenging. In this study, we evaluate whether targeted educational intervention could improve adherence to treatment guidelines of ACS without ST elevation. Design, Setting And Subjects: A previous study, FINACS I, evaluated the treatment and outcome of 501 consecutive non-ST elevation ACS patients who were referred in early 2001 to nine hospitals, covering nearly half of the Finnish population. That study revealed poor adherence to ESC guidelines, so targeted educational intervention on optimal practice was arranged before the second study (FINACS II), which was performed in the same hospitals using the same protocol as FINACS I. FINACS II, undertaken in early 2003, evaluated 540 consecutive patients. Interventions: Targeted educational programmes on optimal practice. Main Outcome Measures: The use of evidence-based therapies in non-ST elevation ACS patients. In-hospital event-free (death, new myocardial infarction, refractory angina, readmission with unstable angina and transient cerebral ischaemia/stroke) survival, and event-free survival at 6 months. Results: Baseline characteristics and risk markers were similar in both studies, and no significant changes in resources were seen. In 2003, the in-hospital use of statins, ACE-inhibitors, clopidogrel and glycoprotein (GP) IIb/IIIa receptor antagonists increased significantly, and in-hospital angiography was performed more often, especially in high-risk patients (59% vs. 45%, P < 0.05); waiting time also shortened (4.2 ± 5.5 vs. 5.8 ± 4.7 days, P < 0.01). Overall no significant change was seen in the frequency of death either in-hospital (2% vs 4%, P = NS) or at 6 months (7% vs 10%, P = NS) in FINACS II. However, the survival of high-risk patients improved both in-hospital (95% vs. 90%, P = 0.05) and at 6 months (89% vs. 78%, P = 0.05).
Conclusion: In patients with non-ST elevation ACS, targeted educational interventions appeared to be associated with improved adherence to practice guidelines, which yielded a better outcome in high-risk ACS patients. abstract_id: PUBMED:25696308 Continuously improving the practice of cardiology. Guidelines for the management of patients with cardiovascular disease are designed to assist cardiologists and other physicians in their practice. Surveys are conducted to assess whether guidelines are followed in practice. The results of surveys on acute coronary syndromes, coronary revascularisation, secondary prevention, valvular heart disease and heart failure are presented. Comparing surveys conducted between 1995 and 2002, a gradual improvement in use of secondary preventive therapy is observed. Nevertheless, important deviations from established guidelines are noted, with a significant variation among different hospitals in the Netherlands and in other European countries. Measures for further improvement of clinical practice include more rapid treatment of patients with evolving myocardial infarction, more frequent use of clopidogrel and glycoprotein IIb/IIIa receptor blockers in patients with acute coronary syndromes, more frequent use of β-blockers in patients with heart failure and more intense measures to encourage patients to stop smoking. Targets for the proportion of patients who might receive specific therapies are presented. abstract_id: PUBMED:11434052 Acute coronary syndromes: introducing new therapies into established guidelines. Quantitative (or semiquantitative) risk stratification is becoming a more accurate and feasible way to define both overall risk and likelihood of benefit from more potent and invasive therapies. LMWH appears to be at least as good as, and likely modestly better than, unfractionated heparin, and when used for patients undergoing revascularization, it should be continued until revascularization. GP IIb/IIIa inhibitors are highly effective for coronary intervention, as well as for "up-front" use in high-risk ACS patients in an environment in which intervention is commonly used. In spite of the new guidelines, a clear delineation to the practitioner of which treatments are proven effective, in which patients, and in what combinations, remains challenging. Best estimates are that only 50% to 75% of ideal patients are currently receiving proven therapies. Better methods are needed to fully integrate proven effective treatments into practice. Answer: The use of glycoprotein IIb/IIIa inhibitors (GPI) in percutaneous coronary intervention (PCI) has been associated with favorable outcomes, and guidelines for their use have been published by various national societies, including the National Institute for Clinical Excellence (NICE) in the United Kingdom. However, adherence to these guidelines has been inconsistent, particularly following the publication of contemporary trials such as ISAR-REACT. This trial showed no benefit of abciximab and 600 mg of clopidogrel compared with 600 mg of clopidogrel alone in elective patients, which may have influenced clinical practice (PUBMED:19245382). An audit of GPI use against NICE guidelines after their publication in May 2002 revealed that initially, only a small percentage of patients were noncompliant with the guidelines in 2002 and 2003. However, after the publication of ISAR-REACT, noncompliance increased significantly in 2004 and 2007.
This pattern was also observed in patients with diabetes and those with non-ST-elevation acute coronary syndromes (NSTE-ACS). The overall noncompliance with NICE guidelines increased from 11.0% to 42.1% after the ISAR-REACT study, suggesting that new trial evidence had a substantial impact on clinical practice (PUBMED:19245382). The gap between evidence and practice is well recognized, and factors such as hospital and geographical characteristics can influence the uptake of evidence-based guideline recommendations. The GRACE registry highlighted that the presentation and publication of major international guidelines did not necessarily result in a measurable change in the temporal pattern of practice. Instead, antithrombotic and interventional therapies changed markedly over time and were profoundly influenced by hospital and geographic characteristics (PUBMED:12909070). The American College of Cardiology's Guidelines Applied in Practice (GAP) program demonstrated that real-time monitoring of key quality-of-care indicators could improve both process indicators and outcomes beyond the initial implementation of GAP. This approach led to higher use of evidence-based therapies and correspondingly better outcomes (PUBMED:17174631). In summary, there appears to be a significant gap between guidelines and clinical practice in the use of GPI, which may have widened following the ISAR-REACT trial. This gap is influenced by various factors, including the publication of new evidence, hospital and geographical characteristics, and the effectiveness of programs designed to improve guideline adherence.
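To make the reported shift concrete: the audit's jump in noncompliance from 11.0% to 42.1% can be checked with a standard two-proportion z-test. The Python sketch below is illustrative only; the abstracts do not give the per-period sample sizes, so the n values are hypothetical placeholders.

```python
# Hypothetical check of the rise in noncompliance with NICE GPI guidance
# (11.0% before ISAR-REACT vs 42.1% after); sample sizes are assumed.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    x1, x2 = p1 * n1, p2 * n2                 # implied event counts
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_two_sided

z, p = two_proportion_z(0.110, 200, 0.421, 200)  # n = 200 per period, assumed
print(f"z = {z:.2f}, p = {p:.2g}")
```

Even with only a few hundred patients per period, a shift of this size gives a z statistic of about 7, in line with the audit's description of a significant increase.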
Instruction: Serum leptin, adiponectin and ghrelin concentrations in post-menopausal women: Is there an association with bone mineral density? Abstracts: abstract_id: PUBMED:27105694 Serum leptin, adiponectin and ghrelin concentrations in post-menopausal women: Is there an association with bone mineral density? Objective: Adipokines and ghrelin exert well-documented effects on energy expenditure and glucose metabolism. Experimental data also support a role in bone metabolism, although data from clinical studies are conflicting. The purpose of this cross-sectional study was to investigate the association of serum concentrations of leptin, adiponectin and ghrelin with bone mineral density (BMD) in post-menopausal women. Methods: BMD in lumbar spine and femoral neck, and circulating leptin, adiponectin and ghrelin concentrations were measured in 110 healthy post-menopausal women. Patients with secondary causes of osteoporosis were excluded. Results: Osteoporosis was diagnosed in 30 (27%) women and osteopenia in 54 (49%). Serum leptin concentrations were positively correlated with both lumbar spine (r=0.343, p<0.01) and femoral neck BMD (r=0.370, p<0.01). Adiponectin concentrations were negatively associated with BMD at both sites (r=-0.321, p<0.01 and r=-0.448, p<0.01 respectively). No significant correlation between ghrelin concentrations and BMD was found. Osteoporotic women had lower body weight, body mass index (BMI) and leptin concentrations, but higher adiponectin concentrations compared with non-osteoporotic women. In multivariate stepwise regression analysis, the association of adiponectin concentrations with BMD remained significant only for femoral neck, after adjustment for body weight or BMI. Conclusions: An inverse association between adiponectin and femoral neck BMD was found in post-menopausal women, independently of body weight. The positive association between leptin and BMD was dependent on body weight, whereas no effect of ghrelin on BMD was evident. abstract_id: PUBMED:21778223 Influence of adipokines and ghrelin on bone mineral density and fracture risk: a systematic review and meta-analysis. Context: Adipokines (leptin, adiponectin, resistin, visfatin) and ghrelin may be implicated in bone metabolism. Objective: The aim was to perform an overview of the influence of blood levels of adipokines or ghrelin on bone mineral density (BMD), osteoporotic status, and fracture risk in healthy men and women. Data Sources: We reviewed Medline, Embase, and Cochrane databases up to March 2010 and abstracts of international meetings from 2008 to 2009. Study Selection: Fifty-nine studies meeting the inclusion criteria (healthy men or women evaluated for both BMD or fracture risk and at least one adipokine and/or ghrelin levels) were analyzed in the systematic review of the 931 references found in the electronic databases. Data Extraction: We used a predefined extraction sheet. Data Synthesis: We performed meta-analyses using the inverse-variance method to estimate pooled correlations between adipokines/ghrelin and BMD. Inverse correlations between adiponectin levels and BMD were highlighted (pooled r from -0.14 to -0.4). Leptin is positively associated with BMD, especially in postmenopausal women (pooled r from 0.18 to 0.33). High levels of leptin were reported to be predictive of low risk of fractures, whereas high levels of adiponectin may be predictive of high risk of vertebral fractures in men only.
No discriminative capacity of osteoporotic status was reported. We found no convincing data to support an association between resistin, visfatin, or ghrelin and BMD. Conclusion: Adiponectin is the most relevant adipokine negatively associated with BMD, independent of gender and menopausal status. Inconsistent associations between adipokines and BMD are probably confounded by body composition, in particular fat mass parameters. abstract_id: PUBMED:37181034 Effect of adipokine and ghrelin levels on BMD and fracture risk: an updated systematic review and meta-analysis. Context: Circulating adipokines and ghrelin affect bone remodeling by regulating the activation and differentiation of osteoblasts and osteoclasts. Although the correlations between adipokines, ghrelin, and bone mineral density (BMD) have been studied over the decades, these correlations are still controversial. Accordingly, an updated meta-analysis with new findings is needed. Objective: This study aimed to explore the impact of serum adipokine and ghrelin levels on BMD and osteoporotic fractures through a meta-analysis. Data Sources: Studies published up to October 2020 in Medline, Embase, and the Cochrane Library were reviewed. Study Selection: We included studies that measured at least one serum adipokine level and BMD or fracture risk in healthy individuals. We excluded studies with one or more of the following: patients less than 18 years old, patients with comorbidities, patients who had undergone metabolic treatment, obese patients, patients with high physical activities, and studies that did not distinguish sex or menopausal status. Data Extraction: From eligible studies we extracted the correlation coefficients between adipokines (leptin, adiponectin, and resistin) and ghrelin and BMD, and fracture risk by osteoporotic status. Data Synthesis: A meta-analysis of the pooled correlations between adipokines and BMD was performed, demonstrating that the correlation between leptin and BMD was prominent in postmenopausal women. In most cases, adiponectin levels were inversely correlated with BMD. A meta-analysis was conducted by pooling the mean differences in adipokine levels according to the osteoporotic status. In postmenopausal women, significantly lower leptin (SMD = -0.88) and higher adiponectin (SMD = 0.94) levels were seen in the osteoporosis group than in the control group. In predicting fracture risk, higher leptin levels were associated with lower fracture risk (HR = 0.68), whereas higher adiponectin levels were associated with an increased fracture risk in men (HR = 1.94) and incident vertebral fracture in postmenopausal women (HR = 1.18). Conclusions: Serum adipokine levels can be utilized to predict the osteoporotic status and fracture risk of patients. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021224855, identifier CRD42021224855. abstract_id: PUBMED:27600055 Green tea extract and catechol-O-methyltransferase genotype modify the post-prandial serum insulin response in a randomised trial of overweight and obese post-menopausal women. Background: Green tea extract (GTE) may be involved in a favourable post-prandial response to high-carbohydrate meals. The catechol-O-methyltransferase (COMT) genotype may modify these effects.
We examined the acute effects of GTE supplementation on the post-prandial response to a high-carbohydrate meal by assessing appetite-associated hormones and glucose homeostasis marker concentrations in women who consumed 843 mg of (-)-epigallocatechin-3-gallate (EGCG) or placebo capsules for 11-12 months. Methods: Sixty Caucasian post-menopausal women (body mass index ≥ 25.0 kg m⁻²) were included in a randomised, double-blind feeding study. GTE was consumed with a breakfast meal [2784.0 kJ (665.4 kcal); 67.2% carbohydrate]. Blood samples were drawn pre-meal, post-meal, and every 30 min for 4 h. Participants completed six satiety questionnaires. Results: Plasma leptin, ghrelin and adiponectin did not differ between GTE and placebo at any time point; COMT genotype did not modify these results. Participants randomised to GTE with the high-activity form of COMT (GTE-high COMT) had higher insulin concentrations at time 0, 0.5 and 1.0 h post-meal compared to all COMT groups randomised to placebo. Insulin remained higher in the GTE-high COMT group at 1.5, 2.0 and 2.5 h compared to Placebo-low COMT (P < 0.02). GTE-high COMT had higher insulin concentrations at times 0, 0.5, 1.0, 1.5 and 2.0 h compared to the GTE-low COMT (P ≤ 0.04). Area under the curve measurements of satiety did not differ between GTE and placebo. Conclusions: GTE supplementation and COMT genotype did not alter acute post-prandial responses of leptin, ghrelin, adiponectin or satiety, although they may be involved in the post-meal insulinaemic response of overweight and obese post-menopausal women. abstract_id: PUBMED:21164276 Sex hormones and adipokines in healthy pre-menopausal, post-menopausal and elderly women, and in age-matched men: data from the Brisighella Heart Study. Background: Sex hormones and adipokines seem to interact differently in both genders at different ages. Aim: To comparatively evaluate the serum level of adipokines and sex hormones in healthy non-pharmacologically treated premenopausal women, post-menopausal women, and elderly women, and in age-matched men. Subjects: From the historical cohort of the Brisighella Heart Study we selected 199 adult healthy subjects (males: 89; females: 110), aged 62.5 ± 12.4 yr. Men and women included in the age-class subgroups were matched for body mass index (BMI), waist circumference, blood pressure, heart rate, fasting plasma glucose, plasma lipids. Results: Leptin did not differ among various age classes in men, while pre-menopausal women displayed significantly lower serum leptin than post-menopausal women (-6.7 ± 2.2 pg/ml, p=0.036). Post-menopausal women had significantly greater serum leptin when compared with age-matched men (+13.1 ± 2.0 pg/ml, p<0.001); the same was observed for elderly women when compared with elderly men (+11.2 ± 2.3 pg/ml, p<0.001). At any age, women had significantly lower serum testosterone/estrone ratio than age-matched men (p<0.01). Serum DHEAS was inversely proportional to age in both genders. The main predictors of adiponectin level are age in men (p=0.027) and BMI in women (p=0.003). The main predictors of leptin level are BMI and the testosterone/estrone ratio in both sexes (p<0.05). The testosterone/estrone ratio is also the main predictor of ghrelin levels in women (p=0.006). Conclusion: Sex hormones and adipokines show specific interactions in the two genders and in different age-classes in a representative sample of adult healthy subjects. abstract_id: PUBMED:18280066 Change in adipocytokines and ghrelin with menopause.
Objectives: To determine if ghrelin and adipocytokine (leptin, adiponectin, resistin) levels vary with menopause stage or with estradiol (E2), testosterone (T), follicle-stimulating hormone (FSH) and sex hormone-binding globulin (SHBG) concentrations measured in three stages of the menopause transition. Methods: A study of adipocytokines and menopause was nested in a population-based, longitudinal study of Caucasian women [Michigan Bone Health and Metabolism Study (MBHMS)]. Annual serum and urine samples, available from the MBHMS repository, were selected to correspond to the pre-, peri-, and postmenopause stages of the menopause transition. Participants included forty women, stratified into obese versus non-obese groups based upon their baseline body mass index, who had specimens corresponding to the three menopause stages. Results: Mean resistin levels were approximately two times higher during premenopause compared to peri- or postmenopause. There were significantly lower adiponectin and higher ghrelin levels in the perimenopause stage, compared to either the pre- or postmenopause stage. Increases in FSH concentrations were significantly and positively associated with higher leptin in non-obese women (P<0.01) but not in obese women (P<0.23). Increases in FSH concentrations were also significantly (P<0.005) and positively associated with higher adiponectin concentrations but were negatively associated with ghrelin concentrations (P<0.005). Associations remained following adjustment for waist circumference, waist circumference change, chronological age, and time between measures. Conclusions: Menopause stages and underlying FSH changes are associated with notable changes in levels of the metabolically active adipocytokines and ghrelin and these changes may be related to selected health outcomes observed in women at mid-life. abstract_id: PUBMED:32049929 Association of hot flushes with ghrelin and adipokines in early versus late postmenopausal women. Objective: Vasomotor flushing (hot flushes) is a common menopausal symptom experienced by most women going through the menopausal transition; flushing continues for a variable period in postmenopause. While flushing is attributed primarily to the lack of ovarian estrogen, other biomarkers of hot flushes have not been clearly identified. We examined the relationship of hot flushes with ghrelin and adipokines. Methods: Baseline data from two clinical trials, the Women's Isoflavone Soy Health (WISH) trial and Early versus Late Intervention Trial of Estrogen (ELITE), were used in this post hoc cross-sectional study. Both WISH and ELITE had similar study designs, inclusion criteria, and data collection processes. Study participants were healthy postmenopausal women not taking estrogen-based hormone therapy and free of cardiovascular disease or any other chronic diseases. Both trials used the same hot flush diary in which participants recorded the number of daily hot flushes by severity over a month on average. Serum concentrations of ghrelin, leptin, adiponectin, and resistin were assessed in stored fasting blood samples using highly specific radioimmunoassay. In this analysis, self-reported flushing experience was tested for an association with leptin, adiponectin, resistin, and ghrelin concentrations using logistic regression and mean comparisons. Results: A total of 898 postmenopausal women from the ELITE and WISH trials contributed to this analysis.
Mean (SD) age was 60.4 (7.0) years, body mass index (BMI) 27 (5.3) kg/m², 67% were white, and 47% were within 10 years of menopause. Reported flushing was significantly associated with younger age, lower education, lower BMI, being married, and more recent menopause. Adjusted for these factors other than BMI, women in the highest quartile of ghrelin had significantly greater likelihood of experiencing hot flushes (OR [95% CI] = 1.84 [1.21-2.85]) compared to women in the lowest quartile. The association was more pronounced among overweight or obese women (OR [95% CI] = 2.36 [1.28-4.35]) compared to those with normal BMI (1.24 [0.54, 2.86]; interaction P value = 0.46). The association between ghrelin and hot flushes was similar among early (within 10 y) and late (over 10 y) postmenopausal women. Blood levels of adiponectin and resistin were not associated with hot flushes. Conclusions: Higher concentrations of ghrelin were associated with greater likelihood of hot flushes in both early- and late-postmenopausal women. Leptin, adiponectin, and resistin levels were not associated with hot flushes in postmenopausal women. abstract_id: PUBMED:25405497 Association of endogenous sex hormones with adipokines and ghrelin in postmenopausal women. Context: Sex hormones, adipokines, and ghrelin have been implicated in central control of appetite, energy homeostasis, maintenance of fat mass, and inflammation. Women tend to gain weight after menopause and adipose tissue is a major source of sex steroids postmenopause. Understanding the dynamics of these analytes is of particular importance in postmenopausal women, who are at greater risk for cardiometabolic diseases. Objectives: This study sought to evaluate the associations of adipokines and ghrelin with sex hormone concentrations in postmenopausal women. Design: We conducted a cross-sectional analysis of baseline clinical trial data. Setting: The parent trial was conducted at a university clinical research facility. Participants: Baseline data from 634 postmenopausal women participating in the Early vs Late Intervention Trial with Estradiol (ELITE). Participants had no history of chronic illness in the past 5 years and were not taking exogenous hormone therapy. Main Outcome Measures: Serum levels of estrone (E1), total estradiol (E2), free estradiol (FE2), free testosterone (FT), total testosterone (T), and sex hormone-binding globulin (SHBG). Results: Adjusted for age, race, time since menopause, and body mass index (BMI), leptin concentrations were significantly positively associated with E1, E2, FE2, and FT and inversely associated with SHBG levels. Only the associations of adiponectin with FE2 (inverse) and SHBG (positive) remained significant after controlling for BMI. The inverse associations of adiponectin with E1, E2, and FT were substantially mediated by BMI. Associations of ghrelin with E1, E2, FE2, and SHBG were not independent of BMI. Waist-to-hip circumference ratio was not a mediator in any of the associations. Conclusions: In postmenopausal women, leptin and adiponectin concentrations are substantially correlated with sex hormone and SHBG concentrations regardless of obesity status. abstract_id: PUBMED:21567464 Serum asymmetric dimethylarginine and nitric oxide levels in obese postmenopausal women.
Background: It has been reported that estrogen deficiency after menopause might cause a decrement in nitric oxide (NO) bioavailability by increasing the level of asymmetric dimethylarginine (ADMA), a major endogenous nitric oxide synthase inhibitor, thus leading to abnormalities in endothelial function. Because NO plays an important role in feeding behavior, ADMA may be involved in the pathogenesis of obesity, too. This cross-sectional study aimed to evaluate the relations of ADMA and NO with the obesity-linked peptides, such as ghrelin, leptin, and adiponectin in postmenopausal women free of hormone replacement therapy. Methods: Adiponectin, ghrelin, leptin, ADMA, and NO(x) (total nitrite/nitrate) were measured in 22 obese (BMI: 30-47 kg/m²) and 19 normal weight (BMI: 21.5-26 kg/m²) postmenopausal women. Anthropometric measurements (height, weight, BMI, waist, and hip circumferences) were recorded. Statistical analyses were performed with the Mann-Whitney U-test. Results: Ghrelin and adiponectin levels were significantly lower (P<0.001), whereas ADMA and leptin levels were higher in obese women than in normal weight controls (P<0.01 and 0.001, respectively). BMI was correlated negatively with adiponectin and ghrelin and positively with ADMA and leptin levels. No correlation existed between ADMA and NO. Conclusion: Estrogen deficiency alone may not cause an increase in ADMA levels unless the women are prone to disturbances in energy homeostasis. In spite of the high ADMA levels, the unaltered NO levels in plasma may be owing to ongoing inflammatory conditions. abstract_id: PUBMED:22205618 Assessment of ghrelin and leptin receptor levels in postmenopausal women who received oral or transdermal menopausal hormonal therapy. Objective: In postmenopausal women, an increased leptin concentration and reduced levels of ghrelin and adiponectin were observed. The aim of this study was to evaluate the concentrations of the active form of ghrelin, total ghrelin, leptin receptor, lipoprotein(a) (Lp(a)), and plasminogen activator inhibitor type 1 (PAI-1) in postmenopausal women who received oral or transdermal menopausal hormonal therapy (MHT). Methods: The study involved 76 healthy women: 46 women aged from 44 to 58 years who received oral (26) or transdermal (20) MHT; the control group consisted of 30 women aged from 44 to 54 years who did not receive MHT. The plasma concentrations of total ghrelin, the active form of ghrelin, Lp(a), and PAI-1:Ag were measured by enzyme-linked immunosorbent assay (ELISA). The concentration of the leptin receptor was measured by enzyme immunometric assay (EIA). Results: We observed a significantly higher concentration of total ghrelin and the active form of ghrelin in women who received transdermal MHT in comparison with those who took oral MHT. We also found a significantly lower concentration of total ghrelin in women who received oral MHT compared with the control group. A higher concentration of PAI-1:Ag was found in the group of women who took transdermal MHT in comparison with those who took oral MHT and with the control group. The differences were statistically significant. Additionally, we found a significant negative correlation between the concentrations of total ghrelin and PAI-1:Ag and a positive correlation between the concentrations of total ghrelin and leptin receptor in women who received transdermal MHT. Conclusions: The study showed that women who used transdermal MHT had higher levels of total ghrelin than women who took oral MHT.
This indicates a beneficial effect of the transdermal route of MHT. However, transdermal therapy was associated with adverse effects with regard to the observed higher levels of PAI-1:Ag, which, in turn, can lead to a reduction in fibrinolytic activity. Answer: Yes, for leptin and adiponectin: both are associated with bone mineral density (BMD) in postmenopausal women, whereas no clear association has been shown for ghrelin. Serum leptin concentrations have been found to be positively correlated with BMD at both the lumbar spine and femoral neck. In contrast, adiponectin concentrations have been negatively associated with BMD at these sites. No significant correlation between ghrelin concentrations and BMD was found in the study by PUBMED:27105694. Osteoporotic women had lower body weight, body mass index (BMI), and leptin concentrations, but higher adiponectin concentrations compared with non-osteoporotic women. The inverse association between adiponectin and femoral neck BMD was found to be independent of body weight in postmenopausal women. The positive association between leptin and BMD was dependent on body weight, whereas no effect of ghrelin on BMD was evident (PUBMED:27105694). Furthermore, a systematic review and meta-analysis highlighted inverse correlations between adiponectin levels and BMD, while leptin was positively associated with BMD, especially in postmenopausal women. High levels of leptin were reported to be predictive of a low risk of fractures, whereas high levels of adiponectin may be predictive of a high risk of vertebral fractures in men only (PUBMED:21778223). Another updated systematic review and meta-analysis confirmed that leptin levels were significantly lower and adiponectin levels were higher in the osteoporosis group compared to the control group in postmenopausal women. Higher leptin levels were associated with a lower fracture risk, whereas higher adiponectin levels were associated with an increased fracture risk in men and incident vertebral fracture in postmenopausal women (PUBMED:37181034). These findings suggest that serum adipokine levels, specifically leptin and adiponectin, can be utilized to predict osteoporotic status and fracture risk in postmenopausal women. However, the role of ghrelin in BMD and fracture risk remains less clear, with no convincing data to support a strong association (PUBMED:21778223).
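The pooled correlations cited above (PUBMED:21778223; PUBMED:37181034) come from inverse-variance meta-analysis: each study's r is Fisher z-transformed (variance approximately 1/(n-3)), averaged with inverse-variance weights, and back-transformed. A minimal fixed-effect sketch in Python, using made-up study values rather than the reviews' actual data:

```python
# Fixed-effect, inverse-variance pooling of correlation coefficients via
# Fisher's z transform; the (r, n) pairs below are hypothetical studies.
from math import atanh, tanh, sqrt

studies = [(-0.20, 150), (-0.35, 80), (-0.14, 220)]  # e.g. adiponectin vs BMD

num = den = 0.0
for r, n in studies:
    z, w = atanh(r), n - 3      # Fisher z; weight = inverse variance = n - 3
    num += w * z
    den += w

z_pooled = num / den
se = sqrt(1 / den)
r_pooled = tanh(z_pooled)
ci_low, ci_high = tanh(z_pooled - 1.96 * se), tanh(z_pooled + 1.96 * se)
print(f"pooled r = {r_pooled:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

A random-effects version would additionally estimate between-study heterogeneity, which both reviews note as a concern given confounding by body composition.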
Instruction: Do semi-rigid prosthetic rings affect left ventricular function after mitral valve repair? Abstracts: abstract_id: PUBMED:23676887 Do semi-rigid prosthetic rings affect left ventricular function after mitral valve repair? Background: After reports of cardiac impairment caused by mitral annuloplasty with rigid rings, several semi-rigid prosthetic rings were introduced. The influence of semi-rigid rings on postoperative cardiac function remains unknown. This study compared postoperative cardiac function between patients receiving a semi-rigid prosthetic ring and those receiving a flexible ring or band. Methods And Results: Transthoracic echocardiographic data of 305 patients who underwent mitral valve repair for degenerative mitral regurgitation (227 patients receiving a semi-rigid ring and 78 receiving a flexible ring or band) were retrospectively reviewed. The imbalance in the preoperative characteristics between groups was adjusted with propensity score matching. Left ventricular ejection fraction, end-diastolic dimension, and end-systolic dimension were compared at 1 week, 6 months, and 1 year after surgery. Propensity score matching yielded 68 matched pairs of patients for whom there were few group differences in preoperative covariates. Between patients receiving a semi-rigid ring and those receiving a flexible ring or band in the propensity-matched cohorts, there were no significant differences in ejection fraction (P=0.322), end-diastolic dimension (P=0.576), or end-systolic dimension (P=0.567). Conclusions: There was little difference in the influence on postoperative cardiac function between semi-rigid rings and flexible rings or bands. abstract_id: PUBMED:33743744 Comparison of flexible, open with semi-rigid, closed annuloplasty-rings for mitral valve repair. Background: Mitral regurgitation is a frequent valvular disease, with an increasing prevalence. We analysed the long-term outcomes of mitral valve repair procedures conducted over the last 10 years in our clinic using almost exclusively two different annuloplasty ring types. Methods: A single-centre, retrospective analysis of mitral valve surgeries conducted between January 2005 and December 2015 for patients undergoing first-line mitral valve repair with either open (Cosgrove) or closed (CE Physio / Physio II) annuloplasty (OA or CA, respectively) rings. Results: In total, 1120 patient records were available, of which 528 patients underwent OA and 592 CA. The median age of patients was 64.0 years and 41.1% were female. The majority of these patients underwent the procedure because of degenerative valve disease. Rates of successful repair were about 90%, 72 h procedural mortality was 0.6% and the rate of re-intervention was 0.6% within the first 30 days. Functional (mitral regurgitation, left ventricular ejection fraction, left ventricular end-diastolic and systolic diameter and New York Heart Association class) as well as hard outcomes were comparable. 77.7% and 74.4% of patients were alive at the 10-year follow-up in the OA and CA groups, respectively.
abstract_id: PUBMED:30546733 Left ventricular pseudoaneurysm as a complication of prosthetic mitral valve infective endocarditis. We report a case of infective endocarditis complicated with left ventricular pseudoaneurysm originating from the posterior annulus of the prosthetic mitral valve in a 56-year-old woman. Despite prolonged antibiotic treatment, transesophageal echocardiography (TEE) showed partial detachment of the prosthesis from the posterior mitral annulus. Three-dimensional rotational computed tomography clearly demonstrated a pseudoaneurysm toward the posterolateral portion of the mitral prosthetic valve, which was not evident by TEE. Valve replacement and repair of the pseudoaneurysm were performed 83 days after initiation of antibiotic therapy. Left ventricular pseudoaneurysm is a rare but serious complication of mitral prosthetic valve endocarditis. It requires prompt diagnosis and early surgical intervention. &lt;Learning objective: We present a case of infective endocarditis (IE) complicated with left ventricular pseudoaneurysm originating from the prosthetic mitral valve. Repeated transesophageal echocardiography is recommended for all IE patients when perivalvular extension is suspected. Electrocardiography-gated three-dimensional-computed tomography is useful for detection and evaluation of pseudoaneurysm, especially in planning surgical procedures.&gt;. abstract_id: PUBMED:11199318 Effects of prosthetic valve placement on mitral annular dynamics and the left ventricular base. Insertion of a rigid mitral prosthesis impairs the function of the mitral annulus and induces systolic narrowing of the left ventricular outflow tract (LVOT). To study this mechanism, we investigated dynamic changes in the left ventricular (LV) base, which consists of the mitral annulus and LVOT orifice. In seven patients with mechanical mitral valve prostheses and eight normal subjects, the image of the LV base was reconstructed three-dimensionally and its dynamic change during systole was studied. In the patients, the rigid prosthetic valve (=mitral annulus) tilted toward the left ventricle with a hinge point at the posterior mitral annulus during systole. The left ventricular base exhibited contraction, but the size of the prosthetic valve was constant. As a consequence, the prosthetic valve occupied more of the left ventricular base, which resulted in narrowing of the LVOT. In the normal subjects, the mitral annulus did not interfere with the region of the LVOT orifice during systole as the mitral annulus underwent both dorsiflexion and contraction. Thus, fixation of the mitral annulus induces an anti-physiologic motion of the annulus. Conscious preservation of annular flexibility in mitral valve surgery is important in avoiding potential dynamic LVOT obstruction. abstract_id: PUBMED:33154921 The Carpentier-Edwards Classic and Physio Annuloplasty Rings in Repair of Degenerative Mitral Valve Disease: A Retrospective Study. Background: Physio ring (SR) is considered an improved version of the Classic rigid ring (RR). Today, SR is more widely used in mitral valve (MV) repair. We sought to compare the long-term outcomes of repair with RR and SR in degenerative mitral valve disease. Methods: In a computerized registry of our institution, 306 patients had MV repair with either RR (139 patients) or SR (167 patients) ring between 2005 and 2015. Fifteen of them had concomitant tricuspid valve repair. Ninety-two (30.1%) had Barlow's disease and 214 (69.9%) had fibroelastic deficiency. 
The patients had similar demographic and echocardiographic characteristics. Results: There were 4 (1.3%) operative mortalities. Mean follow-up time was 107.4 ± 13.2 months. Left ventricular end-diastolic and end-systolic diameters significantly improved in both groups but did not differ between groups. Survival at 10 years was 84.6% (93.1% in RR and 91.5% in SR; p = 0.177) and 10-year freedom from recurrent MR ≥ 2+ was 74.5% (88.2% in RR and 86.3% in SR; p = 0.110). Reoperations for repair failure were 8 in RR and 6 in SR. By Cox regression analysis, Barlow's disease and preoperative MR = 4+ were predictors of repair failure. Old age (≥70 years), NYHA functional class IV and pulmonary artery systolic pressure (≥40 mmHg) were predictors of poor survival by univariate analysis. Conclusion: Long-term outcomes of repair for degenerative MV disease with the Classic and Physio rings are comparable. We also reiterate the importance of large-size annuloplasty rings for Barlow's disease to avoid the incidence of left ventricular outflow tract obstruction. abstract_id: PUBMED:33744010 Mitral valve repair for isolated posterior mitral valve leaflet prolapse: The effect of respect and resect techniques on left ventricular function. Objective: Posterior mitral valve leaflet prolapse repair can be performed by leaflet resection or chordal replacement techniques. The impact of these techniques on left ventricular function remains a topic of debate, considering the presumed better preservation of mitral-ventricular continuity when leaflet resection is avoided. We explored the effect of different posterior mitral valve leaflet repair techniques on postoperative left ventricular function. Methods: In total, 125 patients were included and divided into 2 groups: leaflet resection (n = 82) and isolated chordal replacement (n = 43). Standard and advanced echocardiographic assessments were performed preoperatively, directly postoperatively, and at late follow-up. In addition, left ventricular global longitudinal strain was measured and corrected for left ventricular end-diastolic volume to adjust for the significant changes in left ventricular volumes. Results: At baseline, no significant intergroup difference in left ventricular function was observed, as measured with the corrected left ventricular global longitudinal strain (resect: 1.76% ± 0.58%/10 mL vs respect: 1.70% ± 0.57%/10 mL, P = .560). Postoperatively, corrected left ventricular global longitudinal strain worsened in both groups but improved significantly during late follow-up, returning to preoperative values (resect: 1.39% ± 0.49% to 1.71% ± 0.56%/10 mL, P < .001 and respect: 1.30% ± 0.45% to 1.70% ± 0.54%/10 mL, P < .001). Mixed model analysis showed no significant effect on the corrected left ventricular global longitudinal strain when comparing the 2 different surgical repair techniques over time (P = .943). Conclusions: Our study showed that both leaflet resection and chordal replacement repair techniques are effective at preserving postoperative left ventricular function in patients with posterior mitral valve leaflet prolapse and significant regurgitation. abstract_id: PUBMED:24757605 The viable mitral annular dynamics and left ventricular function after mitral valve repair by biological rings. Objective: Considering the importance of annular dynamics in valvular and ventricular function, we sought to evaluate the effects of treated pericardial annuloplasty rings on mitral annular dynamics and left-ventricular (LV) function after mitral valve repair.
The results were compared with the mitral annular dynamics and LV function in patients with rigid and flexible rings and also in those without any heart problems. Materials And Methods: One hundred and thirty-six consecutive patients with a myxomatous mitral valve and severe regurgitation were prospectively enrolled in this observational cohort study. The patients underwent comparable surgical mitral valve reconstruction; of these, 100 received autologous pericardium rings (Group I), 20 were given flexible prosthetic rings (Group II), and 16 received rigid rings (Group III). Other repair modalities were also performed, depending on the involved segments. The patients were compared with 100 normal subjects in whom an evaluation of the coronary artery was not indicative of valvular or myocardial abnormalities (Group IV). At follow-up, LV systolic indices were assessed via two-dimensional echocardiography at rest and during dobutamine stress echocardiography. Mitral annular motion was examined through mitral annulus systolic excursion (MASE). Peak transmitral flow velocities (TMFV) and mitral valve area (MVA) were also evaluated by means of continuous-wave Doppler. Results: A postoperative echocardiographic study showed significant mitral regurgitation (≥2+) in one patient in Group I, one patient in Group II, and none in Group III. None of the patients died. There was a noteworthy increase in TMFV with stress in all the groups, the increase being more considerable in the prosthetic ring groups (Group I from 1.10 ± 0.08 to 1.36 ± 0.13 m/s, Group II from 1.30 ± 0.11 to 1.59 ± 0.19 m/s, Group III from 1.33 ± 0.09 to 1.69 ± 0.21 m/s, and Group IV from 1.08 ± 0.08 to 1.21 ± 0.12 m/s). Recruitment of LVEF reserve during stress was observed in the pericardial ring and normal groups (Group I from 54.6 ± 6.2 to 64.6 ± 7.3%, P<0.005; and Group IV from 55.3 ± 5.7 to 66 ± 6.2%, P<0.05), but no significant changes were detected in the prosthetic ring groups (Group II from 50.4 ± 5 to 55.0 ± 5.1, and Group III from 51.1 ± 6.6 to 53.8 ± 4.7). There was a significant MASE increase in both of the studied longitudinal segments at rest and during stress in Groups I and IV compared with the prosthetic ring groups. There was no calcification of the pericardial rings. Conclusions: The use of treated autologous pericardium rings for mitral valve annuloplasty yields excellent mitral annular dynamics, preserves LV function during stress conditions, and leaves no echocardiographic signs of degeneration. abstract_id: PUBMED:36348986 Left ventricular pseudoaneurysm secondary to recurrent mitral prosthetic valve endocarditis. An 86-year-old man who had undergone two mitral valve replacements developed heart failure due to prosthetic valve infection and left ventricular pseudoaneurysm (LVPA). LVPA due to infective endocarditis is rare and is caused by abscess formation in the left ventricular myocardium. Infective endocarditis caused by enterococci requires attention to the possibility of relapse. abstract_id: PUBMED:7887707 Comparison of the Carpentier and Duran prosthetic rings used in mitral reconstruction. This clinical study was undertaken to evaluate the Duran flexible ring and the Carpentier rigid ring in terms of mitral annulus motion, transmitral flow and left ventricular function. Twenty-six patients (11 receiving rigid rings and 15, flexible rings) with normal sinus rhythm and with no or only trivial mitral valve regurgitation after surgical repair were selected.
Angiograms demonstrated no significant differences in left ventricular systolic function between the two groups of patients. The area of the mitral annulus with the flexible ring significantly changed during the cardiac cycle. There were significant differences in the left ventricular fractional shortening (rigid ring, 35.8%; flexible ring, 43.4%) and in the peak velocity (rigid ring, 222 cm/s; flexible ring, 186 cm/s) at peak exercise. These data suggest that the flexible ring interferes less with the normal movements of the mitral annulus during the cardiac cycle, and that, under exercise conditions, it performs better than the rigid ring. We therefore conclude that mitral valve reconstruction using the Duran flexible ring is advantageous in patients with mitral regurgitation due to degenerative disease and sinus rhythm. abstract_id: PUBMED:8246552 Effects of preserving mitral apparatus on ventricular systolic function in mitral valve operations in dogs. The mitral apparatus can affect left ventricular function through various mechanisms, such as the direct mechanical coupling between the mitral anulus and papillary muscle and the mitral annular contraction. To evaluate the relative contribution of these mechanisms, we investigated how preservation of the mitral apparatus affects left ventricular systolic function in 35 dogs, divided into five groups, that underwent open chest operations. We performed atriotomy in the first group. We sutured a prosthetic rigid ring around the mitral anulus in the second group. We replaced the mitral valve and preserved the anterior chordae in the third group, the posterior chordae in the fourth group, and no chordae in the fifth group. The postoperative percentages of recovery of ventricular function (as assessed by the slope of the end-systolic pressure-volume relation) from preoperative control values were 92.2% ± 4.8%, 89.5% ± 12.8%, 85.7% ± 9.5%, 75.1% ± 12.9%, and 61.3% ± 8.0%, respectively. Preservation of the mitral apparatus significantly improved left ventricular function compared with that of conventional mitral valve replacement. The average relative contributions of the ventricular muscle, the mitral anulus-papillary muscle continuity, and the mitral annular contraction to left ventricular function were 66.5%, 30.6%, and 2.9%, respectively. We conclude that it is more beneficial to preserve the mitral apparatus in mitral valve replacement. The prosthetic ring does not detract from the functional benefit of the preservation of the mitral apparatus. Answer: Based on the provided abstracts, semi-rigid prosthetic rings do not appear to significantly affect left ventricular function after mitral valve repair. A study comparing postoperative cardiac function between patients receiving a semi-rigid prosthetic ring and those receiving a flexible ring or band found little difference in the influence on postoperative cardiac function between the two types of rings (PUBMED:23676887). Additionally, a retrospective analysis of mitral valve surgeries using open (flexible) or closed (semi-rigid) annuloplasty rings showed that functional outcomes and survival rates up to 10 years after mitral valve repair were comparable between the two types of rings (PUBMED:33743744).
Furthermore, a study comparing the long-term outcomes of repair with rigid and semi-rigid rings in degenerative mitral valve disease found that the outcomes were comparable, with no significant differences in left ventricular end-diastolic and end-systolic diameters between the two groups (PUBMED:33154921). Another study investigating the effect of different posterior mitral valve leaflet repair techniques on postoperative left ventricular function concluded that both leaflet resection and chordal replacement repair techniques are effective at preserving postoperative left ventricular function (PUBMED:33744010). In contrast, a study on the effects of prosthetic valve placement on mitral annular dynamics and the left ventricular base found that insertion of a rigid mitral prosthesis impairs the function of the mitral annulus and induces systolic narrowing of the left ventricular outflow tract (LVOT), suggesting that preservation of annular flexibility is important (PUBMED:11199318). However, this study focused on rigid prostheses rather than semi-rigid ones. Overall, the evidence from the provided abstracts suggests that semi-rigid prosthetic rings do not have a significant negative impact on left ventricular function after mitral valve repair, and their outcomes are comparable to those with flexible rings or bands.
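A note on method: the headline comparison in PUBMED:23676887 rests on propensity score matching. The sketch below shows the general technique (a logistic model of treatment assignment on baseline covariates, followed by greedy 1:1 nearest-neighbour matching) on simulated data; none of the study's actual variables or values are used.

```python
# Illustrative propensity score matching on simulated data (not study data):
# fit P(treated | covariates), then match each treated case to the nearest
# untreated case on the estimated score, without replacement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                 # stand-ins for baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # assignment depends on X

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
pairs, used = [], set()
for i in t_idx:                             # greedy 1:1 nearest neighbour
    for j in c_idx[np.argsort(np.abs(ps[c_idx] - ps[i]))]:
        if j not in used:
            pairs.append((i, j))
            used.add(j)
            break

matched = np.array(pairs)
print(f"{len(pairs)} matched pairs")
# Covariate means should be closer between matched groups than before matching:
print(X[matched[:, 0]].mean(axis=0).round(2))
print(X[matched[:, 1]].mean(axis=0).round(2))
```

In practice, covariate balance is then checked (for example with standardized mean differences) before outcomes such as ejection fraction are compared within the matched pairs, which is the comparison the abstract reports as non-significant.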
Instruction: Does the perception of neighborhood built environmental attributes influence active transport in adolescents? Abstracts: abstract_id: PUBMED:23531272 Does the perception of neighborhood built environmental attributes influence active transport in adolescents? Background: Among Belgian adolescents, active transport (AT) is a common physical activity (PA) behavior. Preliminary evidence suggests that AT can be an important opportunity for increasing adolescents' daily PA levels. To inform interventions, predictors of this PA behavior need to be further explored. Therefore, in the perspective of the ecological models this study aimed (a) to investigate the relationship between the perception of neighborhood built environmental attributes and adolescents' AT and (b) to explore the contribution of the perception of neighborhood built environmental attributes beyond psychosocial factors. Methods: For the purpose of this study, data from the Belgian Environmental Physical Activity Study in Youth (BEPAS-Y), performed between 2008 and 2009, was used. The final study population consisted of 637 adolescents aged 13-15 years. The participants completed a survey measuring demographic and psychosocial factors, the Flemish Physical Activity Questionnaire and the Dutch version of the Neighborhood Environmental Walkability Scale. Results: A set of stepwise linear regression analyses with backward elimination revealed that a shorter distance to school, perceiving neighborhoods to have connected streets, a lower degree of land use mix diversity, less infrastructure for walking and a lower quality of the infrastructure for walking are associated with more min/day AT to and from school (p all <0.05). Furthermore, marginally significant associations (p < 0.10) were found between residential density and safety from crime and AT to and from school. No relationship between the perception of the neighborhood built environmental attributes and walking for transport during leisure time or cycling for transport during leisure time was found. Conclusions: The substantial contribution of the perception of neighbourhood built environmental attributes to AT found in Belgian adults could not be fully confirmed by this study for Belgian adolescents. Among Belgian adolescents, the contribution of neighborhood environmental perceptions to explaining the variance in AT seems to depend on the purpose of AT. Further research is needed to explore this relationship in specific subgroups and to overcome some of the limitations this study had to contend with. abstract_id: PUBMED:30446347 Physical and spatial assessment of school neighbourhood built environments for active transport to school in adolescents from Dunedin (New Zealand). Adolescent active transport to school (ATS) is influenced by demographic, social, environmental and policy factors. Yet, the relationship between the school neighbourhood built environment (SN-BE) and adolescents' ATS remains largely unexplored. This observational study examined associations between observed, objectively-measured and perceived SN-BE features and adolescents' ATS in Dunedin (New Zealand). Adolescents' perception of the safety of walking to school was the strongest correlate of ATS among adolescents living within 2.25 km of school, whereas assessed micro- and macro-scale SN-BE features were not significantly correlated with ATS. Adolescents' perceptions of walking safety should be considered as part of comprehensive efforts to encourage ATS.
abstract_id: PUBMED:32351120 Race/ethnicity, built environment in neighborhood, and children's mental health in the US. The prevention and treatment of mental health disorders in childhood and adolescence is among the major public health challenges in the United States today. Prior research has suggested that neighborhood is very important for children's and adolescents' mental health. The present study extends the research on neighborhood and mental health by examining the association between childhood mental health and specific built environment attributes in the neighborhood, as well as its intersection with race/ethnicity in the United States. Statistical analyses of data from the 2016 National Survey of Children's Health (NSCH) indicate that children's mental health and the neighborhood built environment vary across racial/ethnic groups, with minority groups being more likely to live in disadvantaged neighborhoods and to experience more mental health disorders, particularly American Indian children. Further, the relationship between the neighborhood built environment and mental health among children varies across race/ethnicity in the United States. abstract_id: PUBMED:32213522 Built environment changes and active transport to school among adolescents: BEATS Natural Experiment Study protocol. Introduction: Natural experiments are considered a priority for examining causal associations between the built environment (BE) and physical activity (PA) because the randomised controlled trial design is rarely feasible. Few natural experiments have examined the effects of walking and cycling infrastructure on PA and active transport in adults, and none have examined the effects of such changes on PA and active transport to school among adolescents. We conducted the Built Environment and Active Transport to School (BEATS) Study in Dunedin city, New Zealand, in 2014-2017. Since 2014, on-road and off-road cycling infrastructure construction has occurred in some Dunedin neighbourhoods, including the neighbourhoods of 6 out of 12 secondary schools. Pedestrian-related infrastructure changes began in 2018. As an extension of the BEATS Study, the BEATS Natural Experiment (BEATS-NE) (2019-2022) will examine the effects of BE changes on adolescents' active transport to school in Dunedin, New Zealand. Methods And Analysis: The BEATS-NE Study will employ contemporary ecological models for active transport that account for individual, social, environmental and policy factors. The published BEATS Study methodology (surveys, accelerometers, mapping, Geographic Information Science analysis and focus groups) and novel methods (environmental scan of school neighbourhoods and participatory mapping) will be used. A core component continues to be the community-based participatory approach with the sustained involvement of key stakeholders to generate locally relevant data, and facilitate knowledge translation into evidence-based policy and planning. Ethics And Dissemination: The BEATS-NE Study has been approved by the University of Otago Ethics Committee (reference: 17/188). The results will be disseminated through scientific publications and symposia, and reports and presentations to stakeholders. Trial Registration Number: ACTRN12619001335189. abstract_id: PUBMED:25506554 Variations in active transport behavior among different neighborhoods and across adult lifestages.
Objective: Built environment characteristics are closely related to transport behavior, but observed variations could be due to residents' own choice of neighborhood, called residential self-selection. The aim of this study was to investigate differences in neighborhood walkability and residential self-selection across life stages in relation to active transport behavior. Methods: The IPEN walkability index, which consists of four built environment characteristics, was used to define 16 high and low walkable neighborhoods in Aarhus, Denmark (250,000 inhabitants). Transport behavior was assessed using the IPAQ questionnaire. Life stages were categorized into three groups according to age and parental status. A factor analysis was conducted to investigate patterns of self-selection. Multivariable logistic regression analyses were carried out to evaluate the association between walkability and transport behavior, i.e., walking, cycling and motorized transport, adjusted for residential self-selection and life stages. Results: A total of 642 adults aged 20-65 years completed the questionnaire. The highest rated self-selection preference across all groups was a safe and secure neighborhood followed by getting around easily on foot and by bicycle. Three self-selection factors were detected, and varied across the life stages. In the multivariable models high neighborhood walkability was associated with less motorized transport (OR 0.33, 95% CI 0.18-0.58), more walking (OR 1.65, 95% CI 1.03-2.65) and cycling (OR 1.50, 95% CI 1.01-2.23). Self-selection and life stage were also associated with transport behavior, and attenuated the association with walkability. Conclusion: This study supports the hypothesis that some variation in transport behavior can be explained by life stages and self-selection, but the association between living in a more walkable neighborhood and active transport is still significant after adjusting for these factors. Life stage significantly moderated the association between neighborhood walkability and cycling for transport, and household income significantly moderated the association between neighborhood walkability and walking for transport. Getting around easily by bicycle and on foot was the highest rated self-selection factor second only to perceived neighborhood safety. abstract_id: PUBMED:34996917 Workplace neighbourhood built-environment attributes and sitting at work and for transport among Japanese desk-based workers. Workplace settings, both internal and external, can influence how workers are physically active or sedentary. Although research has identified some indoor environmental attributes associated with sitting at work, few studies have examined associations of workplace neighbourhood built-environment attributes with workplace sitting time. We examined the cross-sectional associations of perceived and objective workplace neighbourhood built-environment attributes with sitting time at work and for transport among desk-based workers in Japan. Data were collected from a nationwide online survey. The Abbreviated Neighborhood Environment Walkability Scale (n = 2137) and Walk Score® (for a subsample of participants; n = 1163) were used to assess perceived and objective built-environment attributes of workplace neighbourhoods. Self-reported daily average sitting time at work, in cars and in public transport was measured using a Japanese validated questionnaire.
Linear regression models estimated the associations of workplace neighbourhood built-environment attributes with sitting time. All perceived workplace neighbourhood built-environment attributes were positively correlated with Walk Score®. However, statistically significant associations with Walk Score® were found for sitting for transport but not for sitting at work. Workers who perceived their workplace neighbourhoods to be more walkable reported a longer time sitting at work and in public transport but a shorter sitting time in cars. Our findings suggest that walkable workplace neighbourhoods may discourage longer car use but have workplaces where workers spend a long time sitting at work. The latter finding further suggests that there may be missed opportunities for desk-based workers to reduce sitting time. Future workplace interventions to reduce sitting time may be developed, taking advantage of the opportunities to take time away from work in workplace neighbourhoods. abstract_id: PUBMED:33854906 A national examination of neighborhood socio-economic disparities in built environment correlates of youth physical activity. Adolescents in the U.S. do not meet current physical activity guidelines. Ecological models of physical activity posit that factors across multiple levels may support physical activity by promoting walkability, such as the neighborhood built environment and neighborhood socioeconomic status (nSES). We examined associations between neighborhood built environment factors and adolescent moderate-to-vigorous physical activity (MVPA), and whether nSES moderated associations. Data were drawn from a national sample of adolescents (12-17 years, N = 1295) surveyed in 2014. MVPA (minutes/week) was estimated from self-report validated by accelerometer data. Adolescents' home addresses were geocoded and linked to Census data from which an nSES Index and home neighborhood factors were derived using factor analysis (high density, older homes, short auto commutes). Multiple linear regression models examined associations between neighborhood factors and MVPA, and tested interactions between quintiles of nSES and each neighborhood factor, adjusting for socio-demographics. Living in higher density neighborhoods (B(SE): 9.22 (2.78), p = 0.001) and neighborhoods with more older homes (4.42 (1.85), p = 0.02) were positively associated with adolescent MVPA. Living in neighborhoods with shorter commute times was negatively associated with MVPA (-5.11 (2.34), p = 0.03). Positive associations were found between MVPA and the high density and older homes neighborhood factors, though associations were not consistent across quintiles. In conclusion, living in neighborhoods with walkable attributes was associated with greater adolescent MVPA, though the effects were not distributed equally across nSES. Adolescents living in lower SES neighborhoods may benefit more from physical activity interventions and environmental supports that provide opportunities to be active beyond neighborhood walkability. abstract_id: PUBMED:32993682 Different neighborhood walkability indexes for active commuting to school are necessary for urban and rural children and adolescents. Background: Literature focusing on youth has reported limited evidence and non-conclusive associations between neighborhood walkability measures and active commuting to and from school (ACS). Moreover, there is a lack of studies evaluating both macro- and micro-scale environmental factors of the neighborhood when ACS is analyzed.
Likewise, most studies on built environment attributes and ACS focus on urban areas, whereas there is a lack of studies analyzing rural residential locations. Moreover, the relationship between built environment attributes and ACS may differ between children and adolescents. Hence, this study aimed to develop walkability indexes in relation to ACS for urban and rural children and adolescents, including both macro- and micro-scale school-neighborhood factors. Methods: A cross-sectional study of 4593 participants from Spain with a mean age of 12.2 (SD 3.6) years was carried out. Macro-scale environmental factors were evaluated using geographic information system data, and micro-scale factors were measured using observational procedures. Socio-demographic characteristics and ACS were assessed with a questionnaire. Several linear regression models were conducted, including all the possible combinations of six or fewer built environment factors, in order to find the best walkability index. Results: Analyses showed that intersection density, number of four-way intersections, and residential density were positively related to ACS in urban participants, but negatively in rural participants. In rural children, positive streetscape characteristics, number of regulated crossings, traffic calming features, traffic lanes, and parking street buffers were also negatively related to ACS. In urban participants, other factors were positively related to ACS: number of regulated crossings, positive streetscape characteristics, and crossing quality. Land use mix acted as a positive predictor only in urban adolescents. Distance to the school was a negative predictor in all the walkability indexes. However, aesthetic and social characteristics were not included in any of the indexes. Conclusions: Interventions focusing on improving built environments to increase ACS behavior need to have a better understanding of the walkability components that are specifically relevant to urban or rural samples. abstract_id: PUBMED:27221127 Built Environment and Active Transport to School (BEATS) Study: protocol for a cross-sectional study. Introduction: Active transport to school (ATS) is a convenient way to increase physical activity and undertake an environmentally sustainable travel practice. The Built Environment and Active Transport to School (BEATS) Study examines ATS in adolescents in Dunedin, New Zealand, using ecological models for active transport that account for individual, social, environmental and policy factors. The study objectives are to: (1) understand the reasons behind adolescents' and their parents' choice of transport mode to school; (2) examine the interaction between the transport choices, built environment, physical activity and weight status in adolescents; and (3) identify policies that promote or hinder ATS in adolescents. Methods And Analysis: The study will use a mixed-method approach incorporating both quantitative (surveys, anthropometry, accelerometers, Geographic Information System (GIS) analysis, mapping) and qualitative methods (focus groups, interviews) to gather data from students, parents, teachers and school principals. The core data will include accelerometer-measured physical activity, anthropometry, GIS measures of the built environment and the use of maps indicating route to school (students)/work (parents) and perceived safe/unsafe areas along the route.
To provide comprehensive data for understanding how to change the infrastructure to support ATS, the study will also examine complementary variables such as individual, family and social factors, including student and parental perceptions of walking and cycling to school, parental perceptions of different modes of transport to school, perceptions of the neighbourhood environment, route to school (students)/work (parents), perceptions of driving, use of information communication technology, reasons for choosing a particular school and student and parental physical activity habits, screen time and weight status. The study has achieved a 100% school recruitment rate (12 secondary schools). Ethics And Dissemination: The study has been approved by the University of Otago Ethics Committee. The results will be actively disseminated through reports and presentations to stakeholders, symposiums and scientific publications. abstract_id: PUBMED:27618495 Associations of Perceived and Objectively Measured Neighborhood Environmental Attributes With Leisure-Time Sitting for Transport. Background: This study examined associations of perceived and objectively measured neighborhood environmental attributes with leisure-time sitting for transport among middle-to-older aged Japanese adults. Method: Data were collected using a postal survey of 998 adults aged 40 to 69 years. Generalized linear modeling with a gamma distribution and a log link was used to examine associations of perceived (International Physical Activity Questionnaire-Environmental module) and Geographic Information Systems (GIS)-derived built environment attributes with self-reported leisure-time sitting for transport. Results: Mean leisure-time sitting time for transport was 20.4 min/day. After adjusting for potential confounders, perceived higher residential density, GIS-measured higher population density, better access to destinations, better access to public transport, longer sidewalk length, and higher street connectivity, were associated significantly with lower sitting time for transport. Conclusion: Residents living in neighborhoods with attributes previously found to be associated with more walking tended to spend less time sitting for transport during leisure-time. The health benefits of walkability-related attributes may accrue not only through increased physical activity, but also through less sedentary time. Answer: Yes, the perception of neighborhood built environmental attributes does influence active transport (AT) in adolescents. Studies have found various associations between adolescents' AT and their perceptions of the neighborhood environment. For instance, among Belgian adolescents, a shorter distance to school, connected streets, lower land use mix diversity, less infrastructure for walking, and lower quality of walking infrastructure were associated with more minutes per day of AT to and from school (PUBMED:23531272). Similarly, in Dunedin, New Zealand, adolescents' perception of safety when walking to school was the strongest correlate of AT among those living within 2.25 km of school (PUBMED:30446347). However, the relationship between perceived neighborhood built environmental attributes and AT can be complex and may depend on the purpose of the AT, as no significant relationship was found between these perceptions and walking or cycling for transport during leisure time among Belgian adolescents (PUBMED:23531272). 
Additionally, the influence of perceived neighborhood attributes on AT may vary by demographic factors, such as race/ethnicity, as seen in the United States where minority groups were more likely to live in disadvantaged neighborhoods and experience more mental health disorders, indicating a potential intersection between neighborhood environment, mental health, and AT (PUBMED:32351120). Furthermore, the BEATS Natural Experiment Study in New Zealand aims to examine the effects of built environment changes on adolescents' AT to school, highlighting the importance of understanding the impact of environmental attributes on AT behaviors (PUBMED:32213522). The study's approach acknowledges the multifaceted nature of AT, considering individual, social, environmental, and policy factors. In summary, the perception of neighborhood built environmental attributes does influence AT in adolescents, but the extent and nature of this influence can vary based on the specific attributes considered, the purpose of the AT, and demographic factors. Further research is needed to explore these relationships in specific subgroups and contexts (PUBMED:23531272).
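Several of the abstracts above report adjusted odds ratios from multivariable logistic regression (for example, the Aarhus walkability analysis, PUBMED:25506554). As a minimal sketch of how such an adjusted OR and its 95% CI are obtained, the Python snippet below fits a logistic model on synthetic data; the variable names, effect sizes, and data are hypothetical and are not taken from any of the studies cited.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 642                                      # sample size borrowed from the Aarhus study for flavor
high_walkable = rng.integers(0, 2, n)        # 1 = high-walkability neighborhood
age = rng.uniform(20, 66, n)
income = rng.normal(0.0, 1.0, n)             # standardized household income (hypothetical covariate)
# Synthetic outcome: cycling for transport, made more likely in walkable areas
logit = -0.5 + 0.4 * high_walkable - 0.01 * (age - 40) + 0.1 * income
cycles = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([high_walkable, age, income]))
fit = sm.Logit(cycles, X).fit(disp=False)
or_walk = np.exp(fit.params[1])              # adjusted OR for high walkability
lo, hi = np.exp(fit.conf_int()[1])           # 95% CI transformed to the OR scale
print(f"adjusted OR {or_walk:.2f} (95% CI {lo:.2f}-{hi:.2f})")

The adjusted OR is simply the exponentiated coefficient of the walkability indicator with the other covariates held fixed, which is how estimates such as "OR 1.50, 95% CI 1.01-2.23" for cycling should be read.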
Instruction: Coordination of end-of-life care for patients with lung cancer and those with advanced COPD: are there transferable lessons? Abstracts: abstract_id: PUBMED:24477771 Coordination of end-of-life care for patients with lung cancer and those with advanced COPD: are there transferable lessons? A longitudinal qualitative study. Background: Care coordination is defined as good communication between professionals to enable access to services based on need. Aims: To explore patients' experience of care coordination in order to inform current debates on how best to coordinate care and deliver services at the end of life for patients with lung cancer and those with chronic obstructive pulmonary disease (COPD). Methods: A qualitative study involving serial interviews was performed with 18 patients recruited from three outpatient clinics situated in a hospital. Interviews were transcribed verbatim and data were analysed thematically. Results: Data comprised 38 interviews. Patients experiencing services related to lung cancer reported good access enabled by the involvement of a keyworker. This contrasted with COPD patients' experiences of services. The keyworker coordinated care between and within clinical settings, referred patients to community palliative care services, helped them with financial issues, and provided support. Conclusions: For patients with lung cancer, the keyworker's role augmented access to various services and enabled care based on their needs. The experiences of patients with COPD highlight the importance of providing a keyworker for this group of patients in both secondary and primary care. abstract_id: PUBMED:14674335 End of life care for patients with COPD. Chronic obstructive pulmonary disease (COPD) is a common cause of morbidity and mortality. It is currently the fourth leading cause of death worldwide, and the importance of end-of-life care for end-stage COPD patients is increasing. Patients with COPD experience acute exacerbations once the disease has progressed. Once patients with COPD are admitted to the hospital with an acute exacerbation, their prognosis is poor, and these patients should be considered to have end-stage COPD. When compared with advanced lung cancer patients, patients with COPD have greater deterioration of quality of life; more symptoms such as dyspnea, general fatigue, appetite loss and anxiety; and greater deterioration of activities of daily living. However, although these patients with COPD are more often treated with life-sustaining interventions, palliation of these symptoms is not sufficient. In caring for patients with severe COPD, consideration should be given to implementing palliative treatments more aggressively. In order to improve end-of-life care for patients with advanced COPD, it is also important to establish a local support system for caring for these patients. abstract_id: PUBMED:30708124 End-of-Life Health Care Utilization Between Chronic Obstructive Pulmonary Disease and Lung Cancer Patients. Context: At the end of life, chronic obstructive pulmonary disease (COPD) and lung cancer (LC) patients exhibit similar symptoms; however, a large-scale study comparing end-of-life health care utilization between these two groups has not been conducted in East Asia. Objectives: To explore and compare end-of-life resource use during the last six months before death between COPD and LC patients.
Methods: Using data from the Taiwan National Health Insurance Research Database, we conducted a nationwide retrospective cohort study in COPD (n = 8640) and LC (n = 3377) patients who died between 1997 and 2013. Results: The COPD decedents were more likely to be admitted to intensive care units (57.59% vs 29.82%), to have longer intensive care unit stays (17.59 vs 9.93 days), and to undergo intensive procedures than the LC decedents during their last six months; they were less likely to receive inpatient (3.32% vs 18.24%) or home-based palliative care (0.84% vs 8.17%) and supportive procedures than the LC decedents during their last six months. The average total medical cost during the last six months was approximately 18.42% higher for the COPD decedents than for the LC decedents. Conclusion: Higher intensive health care resource use, including intensive procedure use, at the end of life suggests a focus on prolonging life in COPD patients; it also indicates an unmet demand for palliative care in these patients. Avoiding potentially inappropriate care and improving end-of-life care quality by providing palliative care to COPD patients are necessary. abstract_id: PUBMED:23503567 End-of-life care in a general respiratory ward in the United Kingdom. Introduction: Patients with advanced chronic lung disease such as chronic obstructive pulmonary disease (COPD) often have an unpredictable clinical course and a high symptom burden. Their prognosis is similar to that of patients with lung cancer. Aim And Methods: We retrospectively assessed end of life care in all patients who were admitted and subsequently died on a general respiratory ward in a central teaching hospital over a period of 11 months (1st June 2010-1st May 2011). We compared our practice with guidelines set out in Living and Dying Well, a national action plan for palliative and end of life care in Scotland. Results: There were 66 deaths; data were obtained for 57 patients (86.4%). Patients with lung cancer had higher rates of recorded discussions regarding their prognosis in comparison to those with COPD (60%, n=9 vs. 8.3%, n=1 respectively). In addition, they had greater levels of in-patient palliative care involvement (50%, n=7 vs. 0% respectively) and higher rates of recorded wishes regarding end-of-life care destination (28.6%, n=4 vs. 8.3%, n=1 respectively). This is despite patients with lung cancer having a lower mean number of end of life clinical indicators (2.64 vs. 3.17 respectively) and a lower mean number of admissions in the 12 months preceding death (1.67 vs. 4.08). Conclusions: Palliative care involvement and discussion of patients' end of life care wishes is poor in COPD. Timely and effective discussions regarding disease prognosis and patient wishes, including early consideration of initiating anticipatory care planning, need to be instituted.
This population-based cohort study included all patients aged 15 and older who died from their terminal lung cancer in hospital in France (2014-2016). Schizophrenia patients and controls without severe mental disorder were selected and indicators of palliative care and high-intensity end-of-life care were compared. Multivariable generalized log-linear models were performed, adjusted for sex, age, year of death, social deprivation, time between cancer diagnosis and death, metastases, comorbidity, smoking addiction and hospital category. The analysis included 633 schizophrenia patients and 66,469 controls. The schizophrenia patients died 6 years earlier, had smoking addiction almost twice as frequently (38.1%), more frequently had chronic pulmonary disease (32.5%), and had a shorter duration from cancer diagnosis to death. In multivariate analysis, they were found to have more and earlier palliative care (adjusted Odds Ratio 1.27 [1.03;1.56]; p = 0.04), and less high-intensity end-of-life care (e.g., chemotherapy 0.53 [0.41;0.70]; p < 0.0001; surgery 0.73 [0.59;0.90]; p < 0.01) than controls. Although the use and/or continuation of high-intensity end-of-life care is lower in schizophrenia patients with lung cancer, some findings suggest a loss of chance. Future studies should explore the expectations of patients with schizophrenia and lung cancer to define the optimal end-of-life care. abstract_id: PUBMED:32484762 Comparison of end-of-life care in people with chronic obstructive pulmonary disease or lung cancer: A systematic review. Background: Palliative care has been widely implemented in clinical practice for patients with cancer but is not routinely provided to people with chronic obstructive pulmonary disease. Aim: The study aims were to compare palliative care services, medications, life-sustaining interventions, place of death, symptom burden and health-related quality of life among chronic obstructive pulmonary disease and lung cancer populations. Design: Systematic review with meta-analysis (PROSPERO: CRD42019139425). Data Sources: MEDLINE, EMBASE, PubMed, CINAHL and PsycINFO were searched for studies comparing palliative care, symptom burden or health-related quality of life among chronic obstructive pulmonary disease, lung cancer or populations with both conditions. Quality scores were assigned using the QualSyst tool. Results: Nineteen studies were included. There was significant heterogeneity in study design and sample size. A random effects meta-analysis (n = 3-7) determined that people with lung cancer had higher odds of receiving hospital (odds ratio: 9.95, 95% confidence interval: 6.37-15.55, p < 0.001) or home-based palliative care (8.79, 6.76-11.43, p < 0.001), opioids (4.76, 1.87-12.11, p = 0.001), sedatives (2.03, 1.78-2.32, p < 0.001) and dying at home (1.47, 1.14-1.89, p = 0.003) compared to people with chronic obstructive pulmonary disease. People with lung cancer had lower odds of receiving invasive ventilation (0.26, 0.22-0.32, p < 0.001), non-invasive ventilation (0.63, 0.44-0.89, p = 0.009), cardiopulmonary resuscitation (0.29, 0.18-0.47, p < 0.001) or dying at a nursing home/long-term care facility (0.32, 0.16-0.64, p < 0.001) than people with chronic obstructive pulmonary disease. Symptom burden and health-related quality of life were relatively similar between the two populations.
Conclusion: People with chronic obstructive pulmonary disease receive fewer palliative measures at the end of life compared to people with lung cancer, despite a relatively similar symptom profile. abstract_id: PUBMED:23614168 Comparing end-of-life care for hospitalized patients with chronic obstructive pulmonary disease and lung cancer in Taiwan. When it comes to end-of-life care, chronic obstructive pulmonary disease (COPD) patients are often treated differently from lung cancer patients. However, few reports have compared end-of-life care between these two groups. We investigated the differences between patients with end-stage COPD and end-stage lung cancer based on end-of-life symptoms and clinical practice patterns using a retrospective study of COPD and lung cancer patients who died in an acute care hospital in Taiwan. End-stage COPD patients had more comorbidities and spent more days in the intensive care unit (ICU) than end-stage lung cancer patients. They were more likely to die in the ICU and less likely to receive hospice care. COPD patients also had more invasive procedures, were less likely to use narcotic and sedative drugs, and were less likely to have given do-not-resuscitate consent. Symptoms were similar between these two groups. Differences in treatment management suggest that COPD patients receive more care aimed at prolonging life than care aimed at relieving symptoms and providing end-of-life support. It may be more difficult to determine when COPD patients are at the end-of-life stage than it is to identify when lung cancer patients are at that stage. Our findings indicate that in Taiwan, more effort should be made to give end-stage COPD patients the same access to hospice care as end-stage lung cancer patients. abstract_id: PUBMED:34429078 Benefits, for patients with late stage chronic obstructive pulmonary disease, of being cared for in specialized palliative care compared to hospital. A nationwide register study. Background: In early stage chronic obstructive pulmonary disease (COPD), dyspnea has been reported as the main symptom; but at the end of life, patients dying from COPD have a heavy symptom burden. Still, specialist palliative care is seldom offered to patients with COPD; they more often receive end of life care in hospitals. Furthermore, symptoms, symptom relief and care activities in the last week of life for COPD patients are rarely studied. The aim of this study was to compare patient and care characteristics in late stage COPD patients treated in specialized palliative care (SPC) versus hospital. Methods: Two nationwide registers were merged, the Swedish National Airway Register (SNAR) and the Swedish Register of Palliative Care (SRPC). Patients with COPD and < 50% of predicted forced expiratory volume in 1 s (FEV1), who had died in inpatient or outpatient SPC (n = 159) or in hospital (n = 439), were identified. Clinical COPD characteristics were extracted from the SNAR, and end of life (EOL) care characteristics from the SRPC. Descriptive statistics were used to describe the sample and the registered care and treatments. Independent samples t-tests, Mantel-Haenszel chi-square tests and Fisher's exact tests were used to compare variables. To examine predictors of place of death, bivariate and multivariate logistic regression analyses were performed with place of death as the dependent variable and demographic and clinical variables as independent variables. Results: The patients in hospitals were older and more likely to have heart failure or hypertension.
Pain was more frequently reported and relieved in SPC than in hospitals (p = 0.001). Rattle, anxiety, delirium and nausea were reported at similar frequencies between the settings; but rattle, anxiety, delirium, and dyspnea were more frequently relieved in SPC (all p < 0.001). Compared to hospital, SPC was more often the preferred place of care (p < 0.001). In SPC, EOL discussions with patients and families were more frequently held than in hospital (p < 0.001). Heart failure increased the probability of dying in hospital while lung cancer increased the probability of dying in SPC. Conclusion: This study provides evidence for referring more COPD patients to SPC, which is more focused on symptom management and psychosocial and existential support. abstract_id: PUBMED:29902554 Resource Use During the Last Six Months of Life Among COPD Patients: A Population-Level Study. Context: Chronic obstructive pulmonary disease (COPD) patients often have several comorbidities, such as cardiovascular diseases (CVDs) or lung cancer (LC), which might influence resource use in the final months of life. However, no previous studies documented end-of-life resource use in COPD patients at a population level, thereby differentiating whether COPD patients die of their COPD, CVD, or LC. Objectives: The objectives of the study were to describe end-of-life resource use in people diagnosed with COPD and compare this resource use between those dying of COPD, CVD, and LC. Methods: We performed a full-population retrospective analysis of all Belgian decedents. Those who died of COPD were selected based on the primary cause of death. Those who died with COPD but with CVD or LC as a primary cause of death were identified based on a validated algorithm expanded with COPD as intermediate or associated. Results: Resource use among 13,086 patients dying of or with COPD was studied. Those who died of COPD received less opioids, sedatives, and morphine; used less palliative care services; and received more invasive and noninvasive ventilation as compared to the other two groups. Those who died of LC had more specialist contacts, hospital admissions, and medical imaging as compared to those who died of COPD or CVD. Those who died of CVD used less palliative care services when compared to those who died of LC and had a comparable use of hospital, intensive care unit, home care, opioids, sedatives, and morphine when compared to those who died of COPD. Conclusion: The presence of lung cancer and CVDs influences resource use in COPD patients at life's end. We recommend that future research on end-of-life care in COPD patients systematically accounts for specific comorbidities. abstract_id: PUBMED:11083884 How well do we care for patients with end stage chronic obstructive pulmonary disease (COPD)? A comparison of palliative care and quality of life in COPD and lung cancer. Background: Patients with severe chronic obstructive pulmonary disease (COPD) have a poor quality of life and limited life expectancy. This study examined whether these patients were relatively disadvantaged in terms of medical and social care compared with a group with inoperable lung cancer. Methods: An open two group comparison was made of 50 patients with severe COPD (forced expiratory volume in one second (FEV(1)) < 0.75 l and at least one admission for hypercapnic respiratory failure) and 50 patients with unresectable non-small cell lung cancer (NSCLC).
A multi-method design was used involving standardised quality of life tools, semi-structured interviews, and review of documentation. Results: The patients with COPD had significantly worse activities of daily living and physical, social, and emotional functioning than the patients with NSCLC (p < 0.05). The Hospital Anxiety and Depression Scale (HADS) scores suggested that 90% of patients with COPD suffered clinically relevant anxiety or depression compared with 52% of patients with NSCLC. Patients were generally satisfied with the medical care received, but only 4% in each group were formally assessed or treated for mental health problems. With regard to social support, the main difference between the groups was that, while 30% of patients with NSCLC received help from specialist palliative care services, none of the patients with COPD had access to a similar system of specialist care. Finally, patients in both groups reported a lack of information from professionals regarding diagnosis, prognosis and social support, although patients' information needs were disparate and often conflicting. Conclusion: This study suggests that patients with end-stage COPD have significantly impaired quality of life and emotional well-being, needs which may not be as well met as those of patients with lung cancer, nor do they receive holistic care appropriate to their needs. Answer: The coordination of end-of-life care for patients with lung cancer and those with advanced COPD presents both challenges and opportunities for transferable lessons. Patients with lung cancer often have access to better-coordinated care services, largely due to the involvement of a keyworker who facilitates access to various services, coordinates care across clinical settings, and provides support for financial issues and referrals to community palliative care services (PUBMED:24477771). In contrast, patients with COPD often experience less coordinated care and may benefit from the introduction of a keyworker role in both secondary and primary care settings to improve their access to necessary services (PUBMED:24477771). Studies have shown that patients with COPD are more likely to receive life-sustaining interventions and have a higher symptom burden, including dyspnea, fatigue, and anxiety, but they receive less palliative care compared to lung cancer patients (PUBMED:14674335). This suggests that palliative treatments should be implemented more aggressively for patients with severe COPD (PUBMED:14674335). In Taiwan, COPD patients were found to have higher intensive care unit admissions and longer stays, as well as undergoing more intensive procedures than lung cancer patients in their last six months of life. They were also less likely to receive inpatient or home-based palliative care, indicating an unmet demand for palliative care in COPD patients (PUBMED:30708124). In the UK, palliative care involvement and discussions regarding end-of-life care wishes were found to be poor for COPD patients compared to those with lung cancer, despite similar prognoses. This highlights the need for timely discussions about disease prognosis and patient wishes, including early consideration for anticipatory care planning (PUBMED:23503567). A systematic review also revealed that people with COPD receive less palliative care at the end of life compared to those with lung cancer, despite having a similar symptom profile (PUBMED:32484762).
This disparity in care is further evidenced by the fact that COPD patients in Taiwan were less likely to receive hospice care and more likely to die in the ICU, despite having similar end-of-life symptoms to lung cancer patients (PUBMED:23614168). In summary, there are clear lessons to be learned from the care coordination of lung cancer patients that could be applied to those with advanced COPD.
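The pooled odds ratios quoted from the systematic review (PUBMED:32484762) come from a random-effects meta-analysis. Below is a minimal Python sketch of DerSimonian-Laird random-effects pooling, a standard choice for such analyses (the review does not specify its exact estimator); the three study estimates fed in are invented for illustration and are not the review's data.

import numpy as np

def dersimonian_laird(odds_ratios, ci_low, ci_high):
    """Pool study odds ratios with a DerSimonian-Laird random-effects model."""
    y = np.log(odds_ratios)                                # per-study log-odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SEs recovered from 95% CIs
    w = 1.0 / se**2                                        # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                     # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                            # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_pooled),
            np.exp(pooled + 1.96 * se_pooled))

# Hypothetical per-study ORs with 95% CIs (not the review's actual studies)
print(dersimonian_laird(np.array([8.5, 11.2, 9.6]),
                        np.array([5.9, 7.4, 6.1]),
                        np.array([12.3, 16.9, 15.1])))

When between-study heterogeneity is negligible, tau2 collapses to zero and the result reduces to the ordinary inverse-variance fixed-effect pool.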
Instruction: Is the association of maternal smoking and pregnancy-induced hypertension dependent on fetal growth? Abstracts: abstract_id: PUBMED:17547883 Is the association of maternal smoking and pregnancy-induced hypertension dependent on fetal growth? Objective: The risk of pregnancy-induced hypertension (PIH) is decreased by smoking, but the mechanisms remain unclear. Our objective was to determine whether this association is dependent on decreased fetal growth. Methods: A population-based, retrospective cohort study in the United States was performed consisting of nulliparous women who delivered a singleton birth (n = 8,025,295) between 1995 and 2002. Fetal growth was defined as birthweight for gestational age and characterized as less than 1, 1-2, 3-4, 5-9, 10-19, ..., 90 or greater centiles. Risk and relative risk of PIH before and after adjusting for confounders were estimated. Results: Smoking was associated with decreased risk of PIH with up to a 46% decreased risk of PIH for growth-restricted babies (less than the 10th centile). This association, however, decreased with increasing birthweight centile and was nonsignificant at 20th or greater centile among heavy smokers, at 60th or greater centile for moderate, and 80th or greater centile for light smokers. Conclusion: Smoking was primarily associated with decreased risk of PIH among growth-restricted babies. abstract_id: PUBMED:36408054 The causal association between maternal smoking around birth on childhood asthma: A Mendelian randomization study. To explore the causal relationship between maternal smoking around birth and childhood asthma using Mendelian randomization (MR). Using the data from large-scale genome-wide association studies, we selected independent genetic loci closely related to maternal smoking around birth and maternal diseases as instrumental variables and used MR methods. In this study, we considered the inverse variance weighted method (MR-IVW), weighted median method, and MR-Egger regression. We investigated the causal relationship between maternal smoking around birth and maternal diseases in childhood asthma using the odds ratio (OR) as an evaluation index. Multivariable MR (MVMR) included maternal history of Alzheimer's disease, illnesses of the mother: high blood pressure and illnesses of the mother: heart disease as covariates to address potential confounding. Sensitivity analyses were evaluated for weak instrument bias and pleiotropic effects. It was shown with the MR-IVW results that maternal smoking around birth increased the risk of childhood asthma by 1.5% (OR = 1.0150, 95% CI: 1.0018-1.0283). After the multivariable MR method was used to correct for relevant covariates, the association effect between maternal smoking around birth and childhood asthma was still statistically significant (P < 0.05). Maternal smoking around birth increases the risk of childhood asthma. abstract_id: PUBMED:11101275 Effects of maternal captopril treatment on growth, blood glucose and plasma insulin in the fetal spontaneously hypertensive rat. In the spontaneously hypertensive rat (SHR) fetal growth and metabolism are abnormal. It has been speculated that maternal hypertension may be the cause of these abnormalities. Captopril treatment during pregnancy and lactation, which reduces maternal blood pressure, is reported to have a beneficial effect postnatally, normalizing the blood pressure of offspring in the SHR.
In the present study, the effects of maternal captopril treatment on fetal growth and plasma metabolites were investigated in the fetuses of two rat strains (SHR and Wistar-Kyoto (WKY)), in order to determine whether normalizing maternal blood pressure also normalized abnormalities in fetal growth and metabolism. On fetal Day 20, SHR fetuses were lighter and placentae were heavier than those of the corresponding WKY. Captopril had no effect on fetal weight in the SHR, but decreased it in the WKY. There was no effect of captopril on placental weight. Fetal plasma insulin levels were higher in the SHR than in the WKY and were decreased by captopril treatment in both strains. Fetal blood glucose was elevated and fetal blood lactate was decreased in captopril-treated litters from both strains. Captopril had no effect on fetal plasma IGF-1 but fetal plasma IGF-2 levels were lower in the captopril-treated SHR than in the captopril-treated WKY. These findings suggest that maternal captopril treatment decreases insulin secretion in the fetal rat. High levels of fetal plasma insulin suggest that the SHR fetus is insulin resistant. Fetal insulin levels may contribute to the adverse consequences of gestational captopril treatment observed in many species. The differences in the effect of captopril on the two strains suggest that there are underlying endocrine differences in the SHR. abstract_id: PUBMED:851157 Fetal malnutrition: an appraisal of correlated factors. Fetal malnutrition has emerged as a significant health problem over the past decade. Present evidence suggests that maternal environment plays the major etiologic role in fetal malnutrition. The association of fetal malnutrition in mothers with chronic hypertension is well known, but fetal malnutrition is associated with maternal hypertension in less than 25 per cent of cases. Among a group of 182 pregnant women studied at midpregnancy for blood levels of vitamins, trace metals, proteins, amino acids, and parameters of maternal leukocyte energy metabolism, it was found that the concentration of 10 amino acids, alpha-1-globulin, zinc, and total carotenes had a statistically significant relationship to fetal growth. Similarly significant correlations were found for maternal leukocyte adenosine diphosphate, phosphofructokinase activity, ribonucleic acid (RNA) synthesis, and cell size. Maternal cigarette smoking was correlated with reduced fetal growth. Analysis showed that there was a significant reduction in leukocyte RNA synthesis and phosphokinase activity and in the plasma levels of 14 amino acids, and carotene in smoking mothers. This information lends support to the hypothesis that factors which affect the growth of fetal cells also will affect maternal leukocytes in a definable way. abstract_id: PUBMED:32482116 Severity of fetal growth restriction stratified according to maternal obesity. Objective: The primary objective of this study was to ascertain if among women with fetal growth restriction (FGR; estimated fetal weight [EFW] < 10th percentile) the frequency of severe FGR (sFGR; EFW < 3rd percentile for gestational age) differed among various classes of obesity. Study Design: This was a retrospective cohort study of all pregnancies complicated by FGR from August 2016 to March 2019 at a single center, undergoing weekly antenatal surveillance (biophysical profiles and umbilical artery Doppler).
Exclusion criteria included multiple gestation, prenatally diagnosed fetal anomalies, and unknown maternal body mass index (BMI) at the time of the ultrasound exam. We defined fetal growth restriction as an estimated fetal weight less than the 10th percentile for gestational age using Hadlock criteria. Severe FGR was defined as an estimated fetal weight below the 3rd percentile for gestational age. Maternal BMI was categorized as non-obese (BMI ≤ 29.9), Class I obesity (30.0-34.9), and Class II or III obesity (≥35.0 kg/m2). Abnormal Dopplers were defined as absent or reversed end diastolic flow. Maternal characteristics and ultrasound findings were compared between groups. Categorical variables were compared by χ2 or Fisher's exact test and continuous variables were compared by t test or nonparametric Wilcoxon rank sum test. Logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals by adjusting for potential confounders including maternal age, hypertensive disorders, pre-gestational and gestational diabetes, auto-immune disorders, and gestational age at diagnosis. Results: Of 974 women who met the inclusion and exclusion criteria, 678 (70%) were not obese, 151 (15%) had class I obesity, and 145 (15%) had class II or III obesity. Obese women were significantly more likely to be multiparous and had a lower mean gestational age at diagnosis of FGR. Hypertensive disorders were more common with increasing BMI, as was type II diabetes mellitus (p < .01). There were no statistically significant differences between the obesity groups with regard to other comorbidities. Women with obesity classes I and II/III had a significantly higher frequency of severe FGR (37.8%) as compared to non-obese women (29%; p < .05). Abnormal Dopplers were more frequent with worsening obesity: 31.4%, 34.4%, and 46.2% for non-obese, class I obesity, and class II or III obesity, respectively (p < .01). There were no significant differences in amniotic fluid abnormalities or antenatal testing results. After adjustment for potential confounders, women with class I obesity had higher odds of having severe FGR (aOR = 1.4; 95% CI = 1.0-2.1). There was also an increased odds of abnormal Dopplers among women with class II/III obesity, as compared to non-obese women, after adjusting for confounders (aOR = 1.7; 95% CI = 1.2-2.6). Conclusion: Among women with FGR, obese women were more likely to have severe FGR and abnormal Dopplers compared to non-obese women. These findings warrant further study into predictors of adverse outcomes among obese women with FGR. Such information could be useful in counseling patients as to the possible course of disease after diagnosis of fetal growth restriction.
This is a case-control study: controls were 259 normal pregnancies, cases were 77 IUGR, 28 with and 49 without preeclampsia (PE) or pregnancy-induced hypertension (PIH). An association was found between IUGR and fetal thrombophilia (OR 2.09, 95% CI 1-4.5). The association was stronger in IUGR without PE and PIH (OR 2.9, 95% CI 1.3-6.6). This suggests a role for the fetal genotype in the development of IUGR. abstract_id: PUBMED:24893615 Effects of smoking and preeclampsia on birth weight for gestational age. Objective: A counterintuitive interaction between smoking during pregnancy and preeclampsia on birth weight for gestational age (BWGA) outcomes was recently reported. In this report, we examine the relationship between these factors in a well-documented study population with exposure data on trimester of maternal smoking. Methods: Preeclamptic (n = 238), gestational hypertensive (n = 219), and normotensive women (n = 342) were selected from live-births to nulliparous Iowa women. Disease status was verified by medical chart review, and smoking exposure was assessed by self-report. Fetal growth was assessed as z-score of BWGA. Multiple linear regression was used to test for the association of maternal smoking and preeclampsia with BWGA z-score. Results: There was no interaction of smoking with preeclampsia or gestational hypertension on fetal growth. BWGA z-scores were significantly lower among women with preeclampsia and those who smoked any time during pregnancy (β = -0.33, p < 0.0001 and β = -0.25, p = 0.05) compared to normotensive and non-smoking women, respectively. Infants of women with gestational hypertension were comparable in size to infants born to normotensive women. Conclusions: Women who developed preeclampsia and those who smoked during pregnancy delivered infants that were significantly smaller than infants of women who did not develop preeclampsia and non-smoking women, respectively. abstract_id: PUBMED:16721103 Fetal growth restriction: the etiology. Fetal growth restriction (FGR) is etiologically associated with various maternal, fetal and placental factors, although such an association may not be present in many cases. Maternal factors include hypertensive diseases, autoimmune disorders, certain medications, severe malnutrition, and maternal lifestyle including smoking, alcohol and cocaine use. Fetal etiologies include aneuploidy, malformations, syndromes related to abnormal genomic imprinting, perinatal viral or protozoan infections, preterm birth, and multiple gestation. Placental factors may involve many conditions including anatomical, vascular, chromosomal and morphological abnormalities. Better understanding of these etiologic conditions may lead to improved prediction, prevention and management of FGR. abstract_id: PUBMED:24216305 Prevalence, risk factors, maternal and fetal morbidity and mortality of intrauterine growth restriction and small-for-gestational age Objectives: To assess the prevalence of fetal growth restriction (FGR) and small for gestational age (SGA) in France and other populations, the risk factors associated with SGA and its impact on fetal well-being and obstetrical outcome. Methods: A critical review of studies identified from searches of PubMed and the Cochrane libraries using the following keywords "intra-uterine growth retardation", "intra-uterine growth restriction", "small for gestational age", "epidemiology", "risk factors", "pregnancy outcome", "maternal morbidity", "perinatal death".
Results: Studies of FGR use multiple definitions, both with respect to cutoffs for defining restricted growth as well as growth norms; however, the most common definition for epidemiological research was SGA using a birthweight less than the 10th percentile. Following this definition, SGA births accounted for 8.9% of all live births in 2010 in France. Major risk factors identified in the literature were previous SGA birth (4 fold increase in risk) (LE2), diabetes and vascular diseases (5 fold) (LE3), chronic hypertension (2 fold) (LE2), preeclampsia (5 to 12 fold according to severity) (LE2), pregnancy induced hypertension (2 fold) (LE2), smoking (2-3 fold) (LE2), drug and alcohol use (2-4 fold) (LE2), maternal age over 35 (3 fold) (LE2) and ethnic origin (2-3 fold for African-American or Asian origins) (LE2). Other risk factors with adjusted odds ratios around 1.5 were primiparity (LE2), multiple pregnancy (but only starting at 30 weeks of gestation) (LE2), socioeconomic disadvantage (LE2) and body mass index (BMI < 18.5 kg/m2) (LE2). SGA is associated with a four-fold increased risk of stillbirth (LE2) as well as higher rates of cesarean and induced labor before 37 weeks. Conclusions: FGR is a complication of pregnancy with adverse consequences for fetal wellbeing. Sociodemographic and clinical risk factors can help to identify pregnant women at risk for this complication. abstract_id: PUBMED:19001134 Maternal smoking, preeclampsia, and infant health outcomes in New York City, 1995-2003. A number of previous studies have reported an inverse association between maternal smoking and preeclampsia. Additionally, some have suggested that smokers who develop preeclampsia have worse maternal and fetal outcomes than nonsmokers who develop preeclampsia. The authors examined the relation of smoking to preeclampsia among 674,250 singleton pregnancies in New York City between 1995 and 2003. Although smoking was associated with a reduced risk of preeclampsia overall (adjusted odds ratio = 0.88, 95% confidence interval: 0.82, 0.94), no association was found for preeclampsia superimposed on chronic hypertension (adjusted odds ratio = 1.04, 95% confidence interval: 0.90, 1.21). Furthermore, the apparent protection conferred by maternal smoking was restricted to women aged ≤30 years. Contrary to previous reports, the authors found evidence of a negative interaction between smoking and preeclampsia with respect to preterm delivery and birth weight; smokers who developed preeclampsia had a lower risk of preterm delivery, and a lower adjusted mean difference in birth weight, than would have been expected based on the independent effects of smoking and preeclampsia. These data suggest that smoking is only protective against preeclampsia without pregestational hypertension, and even then principally among younger women. Additionally, smokers who develop these disorders have no increased risk of adverse birth outcomes relative to nonsmokers who develop the same conditions. Answer: Yes, the association of maternal smoking and pregnancy-induced hypertension (PIH) appears to be dependent on fetal growth. A study found that smoking was associated with a decreased risk of PIH, with up to a 46% decreased risk for growth-restricted babies (less than the 10th centile).
However, this association decreased with increasing birthweight centile and became nonsignificant at the 20th or greater centile among heavy smokers, the 60th or greater centile for moderate smokers, and the 80th or greater centile for light smokers (PUBMED:17547883). This suggests that the protective effect of smoking on the risk of PIH is primarily observed among growth-restricted babies.
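The Mendelian randomization abstract above (PUBMED:36408054) reports an MR-IVW estimate. As a rough illustration, the inverse-variance weighted estimator can be written as a weighted average of per-SNP Wald ratios; the Python sketch below uses three hypothetical instruments, and a full analysis would add the weighted-median and MR-Egger sensitivity checks the authors describe.

import numpy as np

def mr_ivw(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance weighted MR estimate from summary statistics."""
    wald = beta_out / beta_exp            # per-SNP causal effect (Wald ratio)
    w = beta_exp**2 / se_out**2           # inverse-variance weights
    est = np.sum(w * wald) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Hypothetical summary statistics for three independent SNPs
beta_exp = np.array([0.08, 0.05, 0.11])        # SNP effect on maternal smoking (exposure)
beta_out = np.array([0.0012, 0.0009, 0.0015])  # SNP effect on childhood asthma (log-odds)
se_out = np.array([0.0004, 0.0005, 0.0006])    # SEs of the outcome effects
est, se = mr_ivw(beta_exp, beta_out, se_out)
print(f"OR per unit of exposure: {np.exp(est):.4f} "
      f"(95% CI {np.exp(est - 1.96 * se):.4f}-{np.exp(est + 1.96 * se):.4f})")

Exponentiating the pooled log-odds effect is what yields an OR of the form reported in the abstract (OR = 1.0150, 95% CI: 1.0018-1.0283).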
Instruction: Do the medical history and physical examination predict low lung function? Abstracts: abstract_id: PUBMED:8250649 Do the medical history and physical examination predict low lung function? Background: We sought to determine whether an abnormal respiratory history or chest physical examination could be used to identify men with low lung function. Methods: We analyzed pulmonary function, physical examination, and questionnaire data from 4461 middle-aged male Vietnam-era army veterans. Main Results: The study sample consisted of 1161 never smokers, 1292 former smokers, and 2008 current smokers. Clinical indicators of respiratory disease (respiratory symptoms, respiratory signs, or a history of respiratory disease), were present in 26.1% of the never smokers, 31.7% of the former smokers, and 47.2% of the current smokers. We defined low forced expiratory volume in 1 second as a value less than 81.2% of the predicted value. Seven percent of the never smokers, 8% of the former smokers, and 17.3% of the current smokers demonstrated low forced expiratory volume in 1 second. Among those with a clinical indicator for spirometry only 11% of the never smokers, 13% of the former smokers, and 21% of the current smokers actually had a low forced expiratory volume in 1 second. Among those without a clinical indicator 6% of the never smokers, 6% of the former smokers, and 14% of the current smokers actually had a low forced expiratory volume in 1 second. Conclusions: The use of clinical indicators as a basis for obtaining pulmonary function tests in middle-aged men misses many with low lung function, especially current smokers. abstract_id: PUBMED:9477411 Occupational low back pain: history and physical examination. While treatment failures in low back pain often are considered to result from patient psychological factors, they may, in fact, be due to diagnostic and treatment errors. Issues related to the Workers' Compensation system can present challenges to clinicians managing workers with low back pain. An understanding of the nuances of the history and physical examination in the setting of workplace injury can ease these difficulties. abstract_id: PUBMED:1536065 Contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. We report an attempt to quantitate the relative contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. In this prospective study of 80 medical outpatients with new or previously undiagnosed conditions, internists were asked to list their differential diagnoses and to estimate their confidence in each diagnostic possibility after the history, after the physical examination, and after the laboratory investigation. In 61 patients (76%), the history led to the final diagnosis. The physical examination led to the diagnosis in 10 patients (12%), and the laboratory investigation led to the diagnosis in 9 patients (11%). The internists' confidence in the correct diagnosis increased from 7.1 on a scale of 1 to 10 after the history to 8.2 after the physical examination and 9.3 after the laboratory investigation. These data support the concept that most diagnoses are made from the medical history. The results of physical examination and the laboratory investigation led to fewer diagnoses, but they were instrumental in excluding certain diagnostic possibilities and in increasing the physicians' confidence in their diagnoses. 
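The stepwise rise in diagnostic confidence reported above (7.1 after history, 8.2 after examination, 9.3 after laboratory testing) can be read as sequential probability updating. The sketch below illustrates this with Bayes' rule on the odds scale; the prior and the likelihood ratios are hypothetical values chosen only to reproduce the qualitative pattern, not estimates from the study.

```python
# Sequential Bayesian updating of a diagnostic probability.
# Prior and likelihood ratios (LRs) are hypothetical, for illustration only.

def update(prob: float, likelihood_ratio: float) -> float:
    """Bayes on the odds scale: posterior odds = prior odds * LR."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p = 0.30  # hypothetical prior probability from the presenting complaint
for step, lr in [("history", 6.0), ("physical exam", 2.0), ("laboratory", 4.0)]:
    p = update(p, lr)
    print(f"after {step}: P(diagnosis) = {p:.2f}")
# Prints roughly 0.72, 0.84, 0.95: most of the gain comes from the history,
# mirroring the finding that history contributes the largest share of diagnoses.
```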
abstract_id: PUBMED:11273467 A study on relative contributions of the history, physical examination and investigations in making medical diagnosis. Here we report an attempt to quantitate the relative contributions of the history, physical examination and investigations in making medical diagnosis. In this prospective study of 100 patients with new or previously undiagnosed conditions, we listed their differential diagnoses with confidence scores after the history, after the physical examination and after the investigations. In two patients no definite final diagnosis could be arrived at even after extensive investigation; these two cases were excluded from the study. In seventy-seven patients (78.58%), the history led to the diagnosis. The physical examination led to the diagnosis in eight patients (8.17%), and investigations led to the diagnosis in 13 patients (13.27%). The confidence in the correct diagnosis increased from 6.36 on a scale of one to ten after the history to 7.57 after the physical examination and 9.84 after the investigations, implying that the history, physical examination and investigations have their own limitations at each stage and that an integrative approach, with more emphasis on the history, is needed in making a medical diagnosis. abstract_id: PUBMED:606127 Senior medical students as patient-preceptors to introduce basic history and physical examination skills to second year medical students. Senior medical students were used as the patient and the preceptor to introduce the fundamentals of history taking and physical examination to sophomore medical students, and this technique was compared to the established method for teaching basic skills at the University of Iowa. Senior medical students were equally as effective as staff (residents, fellows, and faculty) in teaching the techniques of history and physical examination and statistically better than staff in providing the sophomore with a) suggestions as to how to improve their technique, and b) how their approach might affect the patient's attitude and behavior. abstract_id: PUBMED:27723170 Systematic review of patient history and physical examination to diagnose chronic low back pain originating from the facet joints. Patient history and physical examination are frequently used procedures to diagnose chronic low back pain (CLBP) originating from the facet joints, although the diagnostic accuracy is controversial. The aim of this systematic review is to determine the diagnostic accuracy of patient history and/or physical examination to identify CLBP originating from the facet joints, using diagnostic blocks as the reference standard. We searched MEDLINE, EMBASE, CINAHL, Web of Science and the Cochrane Collaboration database from inception until June 2016. Two review authors independently selected studies for inclusion, extracted data and assessed the risk of bias. We calculated sensitivity and specificity values, with 95% confidence intervals (95% CI). Twelve studies were included, in which 129 combinations of index tests and reference standards were presented. Most of these index tests have only been evaluated in single studies with a high risk of bias. Four studies evaluated the diagnostic accuracy of the Revel's criteria combination. Because of the clinical heterogeneity, results were not pooled. The published sensitivities ranged from 0.11 (95% CI 0.02-0.29) to 1.00 (95% CI 0.75-1.00), and the specificities ranged from 0.66 (95% CI 0.46-0.82) to 0.91 (95% CI 0.83-0.96).
Due to clinical heterogeneity, the evidence for the diagnostic accuracy of patient history and/or physical examination to identify facet joint pain is inconclusive. Patient history and physical examination cannot be used to limit the need for a diagnostic block. The validity of the diagnostic facet joint block should be studied, and high-quality studies are required to confirm the results of single studies. abstract_id: PUBMED:25182351 AAOA allergy primer: history and physical examination. Background: Allergic disease is very common in the general population and makes a significant impact on the quality of life of patients. Immunoglobulin E (IgE)-mediated allergic disease manifests throughout the body, but many signs and symptoms of inhalant allergy are centered in the head and neck region. Methods: A thorough yet focused history of allergic symptoms and potential physical examination findings of inhalant allergy are described. Results: History should include types and timing of symptoms, environmental and occupational exposures, family history, associated diseases, and prior treatment, if any. Physical examination should include the skin and structures of the head and neck region. Nasal endoscopy can be helpful in visualization of nasal polyps. Conclusion: Many times, history alone can serve to make the diagnosis, but physical examination also demonstrates specific findings that confirm the practitioner's presumptive diagnosis of allergic disease. However, should medical treatment fail or the diagnosis be in doubt, further diagnostic investigation with allergy testing should be pursued. abstract_id: PUBMED:27254942 Diagnosis and examination for COPD: medical interview, physical findings, and blood examination. Chronic obstructive pulmonary disease (COPD) is a common disease. To diagnose COPD, pulmonary function testing is required; however, the usual physical examination and medical interview are both crucial. Both can yield very important information, often more than pulmonary function tests or imaging. In this article, we summarize recent updates on these topics, focusing on history taking, examination of physical findings, and possible biomarkers from blood examination. Each may convey important information about the individual patient, such as disease severity, the risk of future events, and prognosis, and all physicians need to evaluate them. abstract_id: PUBMED:18471676 Discordance between data acquired by history and findings of physical examination: a phenomenal paradox. Background: History and physical examination are the basis of any diagnosis. The findings of the physical examination usually corroborate the thoughts generated by the medical history. However, it is not uncommon for the physician to note discrepancies between the history and the physical examination. Methods: Two physicians provided a list of diseases they have encountered in which there is occasionally discordance between the data retrieved from the patient's medical history and the findings of the physical examination. To verify the data from knowledge and experience, we resorted to two main medical textbooks and relevant articles from PubMed.
Results: We identified 25 diseases where a discordance between history and physical examination may frequently be observed. These diseases cover a wide spectrum of systems (e.g., neurological, cardiological, respiratory, and gastrointestinal). Discussion: A variety of diseases are associated with a discordance between the findings one might expect based on the medical history and the actual findings of the physical examination. Physicians need to be aware of this paradox, as misdiagnosis can be detrimental in some of these diseases. We acknowledge that this is just a sample, and not an exhaustive list, of all diseases with such a discordance. abstract_id: PUBMED:26914780 Why the history and physical examination still matter. The history and physical examination (H/PE) have been the foundation of medical diagnosis for centuries. However, as laboratory tests and diagnostic imaging have expanded, physical examination skills have been deemphasized in medical education, and clinicians have become more reliant on tests and imaging. This article describes the historical contributions of the H/PE and its resurgence in a refined form to improve diagnosis. Answer: The medical history and physical examination have traditionally been used to predict various medical conditions, including lung function. However, the ability of these clinical tools to accurately predict low lung function is not definitive. According to a study analyzing pulmonary function, physical examination, and questionnaire data from 4461 middle-aged male Vietnam-era army veterans, clinical indicators of respiratory disease were present in a significant proportion of smokers and non-smokers. Despite this, many individuals with low forced expiratory volume in 1 second (FEV1) were not identified using clinical indicators alone. Specifically, among those with a clinical indicator for spirometry, only 11% of never smokers, 13% of former smokers, and 21% of current smokers actually had low FEV1. Among those without a clinical indicator, 6% of never smokers, 6% of former smokers, and 14% of current smokers had low FEV1. This suggests that relying solely on clinical indicators from medical history and physical examination may miss many cases of low lung function, particularly among current smokers (PUBMED:8250649). In general, the history and physical examination are crucial components of the diagnostic process. In a study of 80 medical outpatients, the history led to the final diagnosis in 76% of patients, while the physical examination led to the diagnosis in 12%, and laboratory investigation in 11% (PUBMED:1536065). Another study with 100 patients found similar results, with history leading to the diagnosis in 78.58% of patients, physical examination in 8.17%, and investigations in 13.27% (PUBMED:11273467). These studies highlight the importance of history and physical examination in the diagnostic process, although they also indicate that these tools have limitations and should be used in conjunction with other diagnostic methods. In conclusion, while medical history and physical examination are valuable tools in the diagnostic process, they may not be sufficient on their own to predict low lung function accurately. It is important to use these tools as part of a comprehensive diagnostic approach that may include pulmonary function tests and other investigations to ensure accurate diagnosis and management of respiratory conditions.
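The conclusion that clinical indicators "miss many" smokers with low lung function (PUBMED:8250649) can be made concrete by back-calculating the sensitivity of the indicators from the reported proportions. The sketch below does this for current smokers using Bayes' rule; it is an approximation from the rounded percentages in the abstract, not a reanalysis of the data.

```python
# Approximate back-calculation for current smokers (PUBMED:8250649):
p_indicator = 0.472  # had a clinical indicator of respiratory disease
p_low_fev1 = 0.173   # had low FEV1 (<81.2% predicted)
ppv = 0.21           # P(low FEV1 | indicator present)
p_low_no_ind = 0.14  # P(low FEV1 | no indicator)

# Bayes' rule: sensitivity = P(indicator | low FEV1) = PPV * P(indicator) / P(low FEV1)
sensitivity = ppv * p_indicator / p_low_fev1
fraction_missed = p_low_no_ind * (1 - p_indicator) / p_low_fev1

print(f"sensitivity of indicators ~ {sensitivity:.2f}")                  # ~0.57
print(f"low-FEV1 smokers without any indicator ~ {fraction_missed:.2f}") # ~0.43
```

In other words, roughly four in ten current smokers with low FEV1 had no clinical indicator that would have prompted spirometry, which is the quantitative content of the answer above.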
Instruction: Helical CT of the body: are settings adjusted for pediatric patients? Abstracts: abstract_id: PUBMED:26587939 Pediatric Chest CT: Wide-Volume and Helical Scan Modes in 320-MDCT. Objective: The purpose of this study was to compare wide-volume and helical pediatric 320-MDCT of the chest with respect to radiation dose and image quality. Materials And Methods: From November 2012 to September 2013, 59 wide-volume and 47 helical pediatric chest 320-MDCT images were obtained. The same tube potential and effective tube current-time product were applied in the two groups according to patient weight (group A, < 10 kg, n = 18; group B, 10-19.9 kg, n = 60; group C, 20-39.9 kg, n = 28). To compensate for overranging, adjusted CT dose index (CTDI) was calculated by dividing dose-length product (DLP) by the scan ranges imaged. Adjusted CTDI, DLP, overall image quality, motion artifact, noise, and scan ranges were compared by Mann-Whitney U test or t test. Results: The adjusted CTDI was significantly lower in the group who underwent wide-volume CT than in the group who underwent helical CT (weight group A, p < 0.001; group B, p < 0.001; group C, p = 0.003). The DLP was lower in the wide-volume group than in the helical CT group in weight groups A (p < 0.001) and B (p < 0.001) but not in group C (p = 0.162). All CT scans were of diagnostic quality, and there was no significant difference between the wide-volume and helical CT groups (p = 0.318). The motion artifact score was significantly higher in the wide-volume group than in the helical CT group in groups B (p < 0.001) and C (p = 0.010) but not in group A (p = 0.931). The noise was significantly lower in the wide-volume group than in the helical CT group (p < 0.001). Conclusion: In pediatric chest CT, use of wide-volume CT can decrease radiation exposure while preserving image quality. It is associated with less noise than helical CT but may be subject to more motion artifact. abstract_id: PUBMED:11159060 Helical CT of the body: are settings adjusted for pediatric patients? Objective: Our objective was to determine whether adjustments related to patient age are made in the scanning parameters that are determinants of radiation dose for helical CT of pediatric patients. Subjects And Methods: This prospective investigation included all body (chest and abdomen) helical CT examinations (n = 58) of neonates, infants, and children (n = 32) referred from outside institutions for whom radiologic consultation was requested. Information recorded included tube current, kilovoltage, collimation, and pitch. Examinations were arbitrarily grouped on the basis of the individual's age: group A, 0-4 years; group B, 5-8 years; group C, 9-12 years; and group D, 13-16 years old. Results: Thirty-one percent (18/58) of the CT examinations were of the chest and 69% (40/58) were of the abdomen. Sixteen percent (9/58) of the CT examinations were combined chest and abdomen. In 22% (2/9) of these combined examinations, tube current was adjusted between the chest and abdomen CT; in one (11%) of these examinations, the tube current was higher for the chest than for the abdomen portion of the CT examination. The mean tube current setting for chest was 213 mA and was 206 mA for the abdomen, with no evident adjustment in tube current based on the age of the patient.
Fifty-six percent of the examinations of neonates, infants, or children 8 years old or younger were performed at a collimation of greater than 5 mm and 53% of these examinations were performed using a pitch of 1.0. Conclusion: Pediatric helical CT parameters are not adjusted on the basis of the examination type or the age of the child. In particular, these results suggest that pediatric patients may be exposed to an unnecessarily high radiation dose during body CT. abstract_id: PUBMED:9124104 Effect of helical CT on the frequency of sedation in pediatric patients. Objective: We compared the use of sedation for helical CT examination of pediatric patients with that for conventional CT studies. Materials And Methods: We retrospectively compared two 4-month periods of CT examinations that differed only in that conventional CT was routinely used in one period and helical CT was exclusively used in the other period. For these two periods, we compared the type and number of CT examinations, the sedation used (if any), and the age of patients who required sedation. Results: We performed 1055 conventional CT examinations in 762 pediatric cancer patients. Of the 264 children who were 8 years old or younger, 107 had been sedated. In comparison, 1195 helical CT examinations were performed on 838 patients: of the 246 children 8 years old or younger, 51 received sedation. For both study groups, the mean and median age of the patients was 4 years old. The mean age of patients requiring sedation was 21 (conventional CT) or 20 months (helical CT); the median age of patients who required sedation was 2 years old for both study groups. Patients who were 8 years old or younger and who underwent helical CT required sedation 49% less frequently than such patients who underwent conventional CT. The most dramatic reduction occurred among patients who were 3 years old or younger (p ≤ .004). Conclusion: Use of helical CT reduced the need for sedation among our pediatric patients. Fewer sedations may reduce the risk of complications, decrease disruption of the patient's normal daily activities, and improve patient throughput. The associated savings in personnel time and pharmaceutical costs can be redistributed. abstract_id: PUBMED:31032440 The prevalence of Klippel-Feil syndrome in pediatric patients: analysis of 831 CT scans. Background: To evaluate the prevalence of Klippel-Feil syndrome (KFS) in pediatric patients obtaining cervical CT imaging in the emergency room (ER). Methods: We evaluated CT scans of the cervical spine of pediatric patients treated in the ER of a Level I Trauma Center from January 2013 to December 2015. Along with analysis of the CT scans for KFS, the following demographics were collected: age, sex, race and ethnicity. Mechanism of injury was also established for all patients. If KFS was present, it was classified using Samartzis classification as type I (single level fusion), type II (multiple, noncontiguous fused segments) or type III (multiple, contiguous fused segments). Results: Of the 848 cervical CTs taken for pediatric ER patients during the study period, 831 were included. Of these patients, 10 had KFS, a prevalence of 1.2%. According to Samartzis classification, 9 were type I and 1 type III. The average age of patients with KFS was 16.02 years (10-18 years), with 8 males (80%) and 2 females (20%). Three had congenital fusions at vertebral levels C2-C3, two at C3-C4, three at C5-C6, one at C6-C7, and one with multiple levels of cervical fusion.
Conclusions: The prevalence of KFS amongst 831 pediatric patients, who underwent cervical CT imaging over a 3-year period, was 1.2% (approximately 1 in 83). The most commonly fused spinal levels were C2-C3 and C5-C6. The prevalence of KFS in our study was higher than previously described, and thus warrants monitoring. abstract_id: PUBMED:12540442 Helical CT of the body: a survey of techniques used for pediatric patients. Objective: Our purpose was to assess the current practice of helical CT of the body in pediatric patients through a survey of members of the Society for Pediatric Radiology. Materials And Methods: The survey consisted of 53 questions addressing demographics; oral and IV contrast media administration; and age-based (age groups, 0-4, 5-8, 9-12, and 13-16 years) scanning parameters, including tube current, kilovoltage, slice thickness, and pitch. Respondents accessed the Web-based survey via a uniform resource locator link included in an e-mail to the members of the Society for Pediatric Radiology automatically sent every week for three weeks. Survey results were automatically tabulated. Results: Most (83%) respondents were based in children's or university hospitals at the time of the survey. Virtually all (99%) used nonionic IV contrast material. For body scanning, 21-32% used less than 2.0 mL/kg of body weight; we found the percentage of respondents who used power injection to be approximately equal to the percentage of those who used manual injection (47%). Age-based adjustments are made; however, 11-26% of CT examinations of children younger than 9 years are performed using more than 150 mA. A notable finding was that 20-25% of respondents did not know specific parameters used for their examinations. Conclusion: Although pediatric radiologists do practice age-adjusted helical CT, variable scanning techniques are used, potentially delivering high doses of radiation. Information on current practices in helical CT of the body in children can serve as a foundation for future recommendations and investigations into helical CT in pediatric patients. abstract_id: PUBMED:38360501 Trends in cardiac CT utilization for patients with pediatric and congenital heart disease: A multicenter survey study. Background: The use of cardiac CT (CCT) has increased dramatically in recent years among patients with pediatric and congenital heart disease (CHD), but little is known about trends and practice pattern variation in CCT utilization for this population among centers. Methods: A 21-item survey was created to assess CCT utilization in the pediatric/CHD population in calendar years 2011 and 2021. The survey was sent to all non-invasive cardiac imaging directors of pediatric cardiology centers in North America in September 2022. Results: Forty-one centers completed the survey. In 2021, 98% of centers performed CCT in pediatric and CHD patients (vs. 73% in 2011), and 61% of centers performed >100 CCTs annually (vs. 5% in 2011). While 62% of centers in 2021 utilized dual-source technology for high-pitch helical acquisition, 15% of centers reported primarily performing CCT on a 64-slice scanner. Anesthesia utilization, use of medications for heart rate control, and type of subspecialty training for physicians interpreting CCT varied widely among centers. 50% of centers reported barriers to CCT performance, with the most commonly cited concerns being radiation exposure, the need for anesthesia, and limited CT scan staffing or machine access.
37% (11/30) of centers with a pediatric cardiology fellowship program offer no clinical or didactic CCT training for categorical fellows. Conclusion: While CCT usage in the CHD/pediatric population has risen significantly in the past decade, there is broad center variability in CCT acquisition techniques, staffing, workflow, and utilization. Potential areas for improvement include expanding CT scanner access and staffing, formal CCT education for pediatric cardiology fellows, and increasing utilization of existing technological advances. abstract_id: PUBMED:23845254 Pitfalls and mimickers at 64-section helical CT that cause negative appendectomy: an analysis from 1057 appendectomies. Purpose: To determine the rate of negative appendectomy and clarify the causes of negative appendectomy in patients with clinically suspected acute appendicitis who had surgery after 64-section helical computed tomography (CT). Material And Methods: A retrospective analysis of 1057 patients who had appendectomy after 64-section helical CT was performed to determine the rate of negative appendectomy. The 64-section helical CT examinations obtained with submillimeter and isotropic voxels in the patients with negative appendectomy were analyzed by two readers and compared to clinical, operative and histopathological reports, discharge summaries and original radiology reports. Results: The negative appendectomy rate was 1.7% (18/1057). Appendix enlargement (>6 mm) and fat stranding were present in 17 (17/18; 94%) and 6 patients (6/18; 33%), respectively. In 13 patients (13/18; 72%) 64-section helical CT findings were consistent with acute appendicitis. Interpretive errors in original imaging reports were identified in five patients (5/18; 28%). Conclusion: The preoperative use of 64-section helical CT results in a very low rate of negative appendectomy. Patients with negative appendectomy have 64-section helical CT findings consistent with a diagnosis of acute appendicitis in the majority of cases. Interpretive errors are less frequent. abstract_id: PUBMED:34453559 International consensus on the use of [18F]-FDG PET/CT in pediatric patients affected by epilepsy. Purpose: Positron emission tomography (PET) with 18F-fluorodeoxyglucose ([18F]-FDG) has been increasingly applied in precise localization of the epileptogenic focus in epilepsy patients, including pediatric patients. The aim of this international consensus is to provide the guideline and specific considerations for [18F]-FDG PET in pediatric patients affected by epilepsy. Methods: An international, multidisciplinary task group was formed, and the guideline for brain [18F]-FDG PET/CT in pediatric epilepsy patients was discussed and approved; it includes, but is not limited to, the clinical indications, patient preparation, radiopharmaceuticals and administered activities, image acquisition, image processing, image interpretation, documentation and reporting. Conclusion: This is the first international consensus and practice guideline for brain [18F]-FDG PET/CT in pediatric epilepsy patients. It will be an international standard for this purpose in clinical practice. abstract_id: PUBMED:33850821 3D CT airway evaluation-guided intraluminal placement of endobronchial blocker in pediatric patients: a randomized controlled study.
Background: The aim of the present study was to propose a new approach for 3D computed tomography (CT) airway evaluation-guided endobronchial blocker placement in pediatric patients, and to determine its efficiency in clinical application. Methods: A total of 127 pediatric patients aged 0.5-3 years who were scheduled for elective thoracic surgery using one-lung ventilation (OLV) were randomized into the bronchoscopy (BRO) group and the CT group. The degree of lung collapse, postoperative airway mucosal injury, pulmonary infection within 72 h after surgery, and hoarseness after tracheal extubation; duration of postoperative mechanical ventilation, intensive care unit (ICU) stay and hospitalization; success rate of first blocker positioning; and required time and repositioning for successful blocker placement were compared between the 2 groups. Results: The degree of lung collapse, postoperative airway mucosal injury, pulmonary infection within 72 h after surgery, and hoarseness after tracheal extubation; duration of postoperative mechanical ventilation, ICU stay and hospitalization; success rate of first blocker positioning; and required time and repositioning for successful blocker placement were similar between the 2 groups (all P > 0.05). Conclusions: For pediatric patients undergoing surgery with OLV, preoperative 3D CT airway evaluation could be used to guide endobronchial blocker placement, with a blocking efficiency similar to that of BRO-guided blocker placement. abstract_id: PUBMED:32419977 PET/CT in pediatric oncology. The use of PET/CT in adult oncology has been consolidated by several authoritative multicentric studies, meta-analyses and systematic reviews. International guidelines help everyday nuclear medicine specialists, oncologists and radiologists in choosing the most suitable diagnostic path for each patient. Classifications based on traditional imaging and PET/CT findings define the most appropriate treatment and can predict the outcome for different types of malignancies. However, compared with adult patients, the use of PET/CT in pediatric oncology is often burdened by a lack of systematic and large multicentric studies and, consequently, of accurate and precise guidelines. This shortage of large trials may be attributed to the rarity of these neoplasms and to the fear of long-term radiation effects in this peculiar category of patients. The aim of this article is to review the applications of PET/CT for imaging the most common pediatric neoplasms. Answer: The investigation into whether settings for helical CT of the body are adjusted for pediatric patients reveals that pediatric helical CT parameters are not consistently adjusted based on the examination type or the age of the child. Specifically, a study found that pediatric patients may be exposed to unnecessarily high radiation doses during body CT because adjustments related to patient age are not made in the scanning parameters that determine radiation dose (PUBMED:11159060). Despite the recognition that pediatric radiologists do practice age-adjusted helical CT, there is variability in scanning techniques used, which could potentially deliver high doses of radiation (PUBMED:12540442). Moreover, the use of wide-volume CT in pediatric chest CT has been shown to decrease radiation exposure while preserving image quality, suggesting that there are alternative scanning modes that can be optimized for pediatric patients to reduce radiation dose (PUBMED:26587939).
However, the study also noted that wide-volume CT may be subject to more motion artifact compared to helical CT. In summary, while there is an awareness of the need for age-adjusted helical CT parameters in pediatric patients, the actual practice varies, and settings are not always adjusted appropriately, which may lead to higher radiation exposure for pediatric patients undergoing helical CT of the body.
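PUBMED:26587939 compensates for overranging by dividing the dose-length product (DLP) by the imaged scan range to obtain an adjusted CTDI. A minimal sketch of that bookkeeping follows; the input values and the effective-dose conversion coefficient k are illustrative assumptions, not figures from the study.

```python
# Dose bookkeeping sketch for pediatric chest CT (all values illustrative).

dlp_mgy_cm = 60.0     # dose-length product reported by the scanner (mGy*cm)
scan_range_cm = 15.0  # length of anatomy actually imaged (cm)

# Adjusted CTDI as defined in PUBMED:26587939: DLP divided by imaged range,
# so dose deposited by overranging beyond the target volume is not hidden.
adjusted_ctdi_mgy = dlp_mgy_cm / scan_range_cm

# Rough effective dose via a region- and age-dependent coefficient k
# (mSv per mGy*cm); 0.018 is an assumed pediatric chest value for illustration.
k = 0.018
effective_dose_msv = dlp_mgy_cm * k

print(f"adjusted CTDI = {adjusted_ctdi_mgy:.1f} mGy")
print(f"effective dose ~ {effective_dose_msv:.2f} mSv")
```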
Instruction: Is there an association between pleural plaques and lung cancer without asbestosis? Abstracts: abstract_id: PUBMED:8016601 Is there an association between pleural plaques and lung cancer without asbestosis? Objectives: A recent review or meta-analysis of epidemiologic studies concluded that persons with asbestos-related pleural plaques do not have an increased risk of lung cancer in the absence of parenchymal asbestosis. The reviewer inferred that this conclusion provided indirect supportive evidence for the proposition that asbestosis is a necessary precursor of asbestos-related lung cancer. The objective of the present communication is to contest these claims. Methods: Finnish epidemiologic data and population statistics were used to estimate the apparent risk ratio of lung cancer associated with radiographic signs of pleural plaques. Power calculations were applied to compute the needed population sizes to demonstrate that the association is statistically significant. Results: Unrealistically large population studies would be needed to observe the statistical relation between pleural plaques and lung cancer, quantitated as a risk ratio of 1.1, resulting from relatively low levels of environmental asbestos exposure. In realistic and valid epidemiologic studies on heavily exposed subpopulations, a two- or threefold risk can be identified. Conclusions: Uninformative studies should not be interpreted as providing suppressive evidence that pleural plaques are a noncausal risk indicator of lung cancer. Even for the null hypothesis, the inference that asbestosis is a necessary causal link between asbestos and lung cancer is illogical. abstract_id: PUBMED:11409599 Association between pleural plaques and coronary heart disease. Objective: The aim of this study was to verify a clinical impression that patients with coronary heart disease disproportionately frequently have calcified pleural plaques. Methods: Chest X-rays were collected from 148 patients referred consecutively to the Helsinki University Central Hospital for coronary angiography and from 100 consecutive lung cancer patients seen at the same hospital. The radiographs were analyzed for the presence of calcified pleural plaques according to the classification of the International Labour Office. A generalized linear model with binomial distribution and log link was used to estimate the relative risks and their 95% confidence intervals (95% CI). Results: The prevalence of calcified pleural plaques was 35% for the coronary patients and 19% for the lung cancer patients. Calcified pleural plaques were more common among the men than the women, and the risk increased with age. The relative risk of calcified pleural plaques, adjusted for age and gender, was 2.19 (95% CI 1.44-3.32) for the coronary patients as compared with the lung cancer patients. Conclusions: Further studies with better information on past exposure to asbestos and other potential risk factors are warranted to confirm the observations and to examine whether the association between coronary heart disease and calcified pleural plaques is related to an etiologic or an individual susceptibility factor common to both of these conditions. abstract_id: PUBMED:6488907 Parietal pleural plaques, asbestos bodies, and neoplasia. A clinical, pathologic, and roentgenographic correlation of 25 consecutive cases.
An investigation was made to correlate autopsy and roentgenographic findings of pleural plaques with occupational exposure to asbestos and occurrence of respiratory tract tumors. Of the 434 autopsies performed over a 2 1/2-year period, 25 (5.8 percent) had pleural plaques but no gross evidence of parenchymal fibrosis. Review of the posterior-anterior chest roentgenograms using the International Labor Office criteria for classification of pneumoconiosis (1980) revealed that only seven of the 25 cases had detectable pleural thickening or calcification, which demonstrates the poor sensitivity of standard x-ray films. There was no detectable difference in frequency of known or presumed exposure to asbestos between the pleural plaque cases and controls as determined by occupational information obtained from chart review. Asbestos bodies were identified in lung tissue digests from all 25 cases with pleural plaques, and exceeded the normal range for our laboratory in 14 cases (56 percent). Of the 25 cases with pleural plaques, four also had bronchogenic and three had laryngeal carcinoma. The prevalence of bronchogenic carcinoma in patients with plaques was not different from those without plaques (p > 0.50). However, the association between plaques and laryngeal carcinoma was highly significant (p = 0.004). abstract_id: PUBMED:29714657 On the diagnosis of malignant pleural mesothelioma: A necropsy-based study of 171 cases (1997-2016). Background: Malignant pleural mesothelioma (MPM) diagnosis is known to be difficult. We report on the diagnostic elements available in life in an MPM necropsy case series and describe the frequency of non-neoplastic asbestos-related diseases as biological exposure indices. Methods: We reviewed pathologic and clinical records of an unselected series of autopsies (1977-2016) in patients with MPM employed in the Monfalcone shipyards or living with shipyard workers. We assessed the consistency with autopsy results of diagnoses based on, respectively, radiologic, cytologic, and histologic findings, with and without immunophenotyping. Results: Data on 171 cases were available: for 169, autopsy confirmed the MPM diagnosis. In life, 119 cases had histologic confirmation of diagnosis, whereas 7 were negative; all cases without immunophenotyping were autoptic MPMs. Cytology alone had been positive in 18 autoptic MPM cases, negative in 14. Radiologic imaging alone had been positive in another 16, negative in 11. In the 2 cases not confirmed at autopsy, MPM had been suspected by chest computed tomography only. Bilateral pleural plaques were found in 144 and histologic evidence of asbestosis in 62 cases. Conclusions: Autopsies confirmed 169/171 cases, including cases that would not be considered as certain based on diagnosis in life. Radiologic imaging, cytologic examination of pleural effusions, or both combined had low sensitivity but high positive predictive value: when they are positive, proceeding to thoracoscopy should be justified. MPM has been correctly diagnosed even without immunohistochemistry. The prevalence of pleural plaques and asbestosis was high due to the severity of asbestos exposures in these cases. abstract_id: PUBMED:9167232 Asbestos, asbestosis, pleural plaques and lung cancer. Inhalation of asbestos fibers increases the risk of bronchial carcinoma.
It has been claimed that asbestosis is a necessary prerequisite for the malignancy, but epidemiologic studies usually do not have enough statistical strength to prove that asbestos-exposed patients without asbestosis are without risk. Several recent studies do actually indicate that there is a risk for such patients. In addition, case-referent studies of patients with lung cancer show an attributable risk for asbestos of 6% to 23%, which is much higher than the actual occurrence of asbestosis among these patients. Thus there is an increasing body of evidence that, at low exposure levels, asbestos produces a slight increase in the relative risk of lung cancer even in the absence of asbestosis. Consequently, all exposure to asbestos must be minimized. abstract_id: PUBMED:12502232 Apportionment in asbestos-related disease for purposes of compensation. Workers' compensation systems attempt to evaluate claims for occupational disease on an individual basis using the best guidelines available to them. This may be difficult when there is more than one risk factor associated with the outcome, such as asbestos and cigarette smoking, and the occupational exposure is not clearly responsible for the disease. Apportionment is an approach that involves an assessment of the relative contribution of work-related exposures to the risk of the disease or to the final impairment that arises from the disease. This article discusses the concept of apportionment and applies it to asbestos-associated disease. Lung cancer is not subject to a simple tradeoff between asbestos exposure and smoking because of the powerful biological interaction between the two exposures. Among nonsmokers, lung cancer is sufficiently rare that an association with asbestos can be assumed if exposure has occurred. Available data suggest that asbestos exposure almost invariably contributes to risk among smokers to the extent that a relationship to work can be presumed. Thus, comparisons of magnitude of risk between smokers and nonsmokers are irrelevant for this purpose. Indicators of sufficient exposure to cause lung cancer are useful for purposes of establishing eligibility and screening claims. These may include a chest film classified by the ILO system as 1/0 or greater (although 0/1 does not rule out an association) or a history of exposure roughly equal to or greater than 40 fibre-years (fibres/cm³ × years). (In Germany, 25 fibre-years is used.) The mere presence of pleural plaques is not sufficient. Mesothelioma is almost always associated with asbestos exposure and the association should be considered presumed until proven otherwise in the individual case. These are situations in which only the risk of a disease is apportioned, because the impairment would be the same given the disease whatever the cause. Asbestosis, if the diagnosis is correct, is by definition an occupational disease unless there is some source of massive environmental exposure; it is always presumed to be work-related unless proven otherwise. Chronic obstructive airways disease (COAD) accompanies asbestosis but may also occur in the context of minimal parenchymal fibrosis and may contribute to accelerated loss of pulmonary function. In some patients, particularly those with smoking-induced emphysema, this may contribute significantly to functional impairment. An exposure history of 10 fibre-years is suggested as the minimum associated with a demonstrable effect on impairment, given available data.
Equity issues associated with apportionment include the different criteria that must be applied to different disorders for apportionment to work, the management of future risk (e.g., the risk of lung cancer for those who have asbestosis), and the narrow range in which apportionment is really useful in asbestos-associated disorders. Apportionment, attractive as it may be as an approach to the adjudication of asbestos-related disease, is difficult to apply in practice. Even so, these models may serve as a general guide to the assessment of asbestos-related disease outcomes for purposes of compensation. abstract_id: PUBMED:19768667 Relevance of pathological examinations and lung dust analyses in the context of asbestos-associated lung cancer-No. 4104 of the list of occupational diseases in Germany. This report discusses the relevance of pathological-anatomical examinations and lung dust analyses in the context of asbestos-related lung cancer on the basis of three case reports. Cases one and two demonstrate the limited performance of conventional computed tomography scanning with a resolution of 3 mm for the detection of asbestos-related pleural diseases. In these cases, only the autopsy was able to confirm the diagnosis of pleural plaques, and therefore the German criteria for occupational disease No. 4104 of the list of occupational diseases were fulfilled. Case three clearly shows that routine pathological examinations, especially without consideration of an occupational disease, cannot always establish the diagnosis of grade I asbestosis. Only intensive histological examination (iron staining, 400× magnification) in combination with lung dust analysis was able to provide such a diagnosis. As shown here, pathological-anatomical examinations including lung dust analysis are highly valuable for the assessment of asbestos-related lung diseases. A merely partial consideration of all possible forms of evidence can be responsible for the rejection of a reasonable compensation claim for an occupational disease. Therefore, pathological-anatomical examinations are indispensable today and in the future. A definite rejection of occupational disease No. 4104 without an analysis of lung parenchyma is not justified. abstract_id: PUBMED:18420450 Genetic susceptibility to malignant pleural mesothelioma and other asbestos-associated diseases. Exposure to asbestos fibers is a major risk factor for malignant pleural mesothelioma (MPM), lung cancer, and non-neoplastic conditions such as asbestosis and pleural plaques. However, in the last decade many studies have shown that polymorphisms in the genes involved in xenobiotic and oxidative metabolism or in DNA repair processes may play an important role in the etiology and pathogenesis of these diseases. To evaluate the association between diseases linked to asbestos and genetic variability, we performed a review of studies on this topic included in the PubMed database. One hundred fifty-nine citations were retrieved; 24 of them met the inclusion criteria and were evaluated in the review. The most commonly studied GSTM1 polymorphism showed, for all asbestos-linked diseases, an increased risk in association with the null genotype, possibly linked to its role in the conjugation of reactive oxygen species. Studies focused on GSTT1 null and SOD2 Ala16Val polymorphisms gave conflicting results, while promising results came from studies on alpha1-antitrypsin in asbestosis and MPO in lung cancer.
Among the genetic polymorphisms associated with the risk of MPM, the GSTM1 null genotype and two variant alleles of XRCC1 and XRCC3 showed increased risks in a subset of studies. Results for the NAT2 acetylator status, SOD2 polymorphism and EPHX activity were conflicting. Major limitations in study design, including the small size of study groups, affected the reliability of these studies. Technical improvements such as the use of high-throughput techniques will help to identify molecular pathways regulated by candidate genes. abstract_id: PUBMED:24410115 Occupational asbestos exposure and lung cancer--a systematic review of the literature. The objective of this study was to evaluate the scientific literature concerning asbestos and lung cancer, emphasizing low-level exposure. A literature search in PubMed and Embase resulted in 5,864 citations. Information from included studies was extracted using SIGN. Twenty-one statements were evidence graded. The results show that histology and location are not helpful in differentiating asbestos-related lung cancer. Pleural plaques, asbestos bodies, or asbestos fibers are useful as markers of asbestos exposure. The interaction between asbestos and smoking regarding lung cancer risk is between additive and multiplicative. The findings indicate that the association between asbestos exposure and lung cancer risk is basically linear, but may level off at very high exposures. The relative risk for lung cancer increases between 1% and 4% per fiber-year (f-y)/mL, corresponding to a doubling of risk at 25-100 f-y/mL. However, one high-quality case-control study showed a doubling at 4 f-y/mL. abstract_id: PUBMED:22033203 Asbestos-related diseases observed in Palermo (Italy) among workers exposed to asbestos. The aim of this study was to evaluate cases of asbestos-related diseases in workers exposed to asbestos in the province of Palermo (Italy) from 2005 to 2009. Data were collected from medical records and from reports from the Prevention and Safety in the Workplace Unit of the provincial health authorities of Palermo. Multinomial logistic regression showed a significant association between tobacco smoke and lung cancer and between starting work at an early age and the presence of asbestosis and pleural plaques. Results confirm that, over eighteen years after the entry into force of Law 257/1992, which established the cessation of all activities related to asbestos, asbestos-related diseases continue to be observed in clinical practice and represent a serious public health problem. Answer: The association between pleural plaques and lung cancer without asbestosis is a subject of debate and investigation. Some studies suggest that there is an increased risk of lung cancer associated with pleural plaques even in the absence of asbestosis. For instance, Finnish epidemiologic data indicate that in heavily exposed subpopulations, a two- or threefold risk of lung cancer can be identified in relation to pleural plaques (PUBMED:8016601). However, the same study acknowledges that large population studies would be needed to observe a statistical relation between pleural plaques and lung cancer at lower levels of environmental asbestos exposure, and that uninformative studies should not be interpreted as evidence against an association.
Another study found no significant difference in the prevalence of bronchogenic carcinoma between patients with pleural plaques and those without, suggesting that pleural plaques may not be a strong independent risk factor for lung cancer (PUBMED:6488907). However, this study did find a highly significant association between pleural plaques and laryngeal carcinoma. Further research indicates that asbestos exposure, even at low levels, can produce a slight increase in the relative risk of lung cancer in the absence of asbestosis, suggesting that all exposure to asbestos should be minimized (PUBMED:9167232). Additionally, a systematic review of the literature on occupational asbestos exposure and lung cancer emphasizes that pleural plaques can be useful markers of asbestos exposure and that the relative risk for lung cancer increases with the level of asbestos exposure (PUBMED:24410115). In conclusion, while there is evidence to suggest an association between pleural plaques and lung cancer without asbestosis, particularly in heavily exposed individuals, the strength of this association and its implications for those with lower levels of exposure remain areas where further research is needed to draw definitive conclusions.
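PUBMED:8016601 argues from power calculations that detecting a risk ratio of 1.1 would require unrealistically large cohorts, whereas a two- or threefold risk is detectable in feasible studies. The sketch below reproduces that style of argument with a standard two-proportion sample-size approximation; the baseline risk and the design parameters are assumptions chosen for illustration, not the study's own inputs.

```python
import math

def n_per_group(p1: float, rr: float) -> int:
    """Approximate n per arm for comparing two proportions,
    alpha = 0.05 (two-sided), power = 0.80."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles
    p2 = p1 * rr                  # risk in the exposed group
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

baseline = 0.05  # assumed cumulative lung cancer risk in the unexposed
print(n_per_group(baseline, rr=1.1))  # ~31,000 per group for RR 1.1
print(n_per_group(baseline, rr=2.0))  # ~430 per group for RR 2.0
```

The roughly two orders of magnitude between these sample sizes is why null findings in modest cohorts cannot rule out a small plaque-associated excess risk.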
Instruction: Dose escalation with three-dimensional conformal radiotherapy for prostate cancer. Is more dose really better in high-risk patients treated with androgen deprivation? Abstracts: abstract_id: PUBMED:17051950 Dose escalation with three-dimensional conformal radiotherapy for prostate cancer. Is more dose really better in high-risk patients treated with androgen deprivation? Aims: To determine the effect of radiation dose on biochemical control in prostate cancer patients treated in a single institution with three-dimensional conformal radiotherapy (3DCRT) and the additional effect of androgen deprivation in prostate cancer patients. Materials And Methods: In total, 363 men with T1-T3b prostate cancer treated in a sequential radiation dose-escalation trial from 66.0 to 84.1 Gy (International Commission on Radiation Units and Measurements [ICRU] reference point) between 1995 and 2003, and with a minimum follow-up of 24 months, were included in the analysis. One hundred and forty-eight (41%) men were treated with 3DCRT alone; 74 (20%) men received neoadjuvant androgen deprivation (NAD) 4-6 months before and during 3DCRT; and 141 (39%) men received NAD and adjuvant androgen deprivation (AAD) 2 years after 3DCRT. Univariate, stratified and multivariate analyses were carried out separately for defined risk groups (low, intermediate and high) to determine the effect of radiation dose on biochemical control and its interaction with hormonal manipulation and clinical prognostic variables. Results: The median follow-up was 59 months (range 24-147 months). The actuarial biochemical disease-free survival (bDFS) at 5 years for all patients was 75% (standard error 3%). For low-risk patients, the bDFS was 82% (standard error 5%), for intermediate-risk patients it was 64% (standard error 6%) and for high-risk patients it was 77% (standard error 3%) (P = 0.031). In stratified and multivariate analyses, high-dose 3DCRT in all risk groups and, for high-risk patients, the use of long-term AAD rather than NAD contributed independently and significantly to improved outcomes. Conclusion: The present study indicates an independent benefit on biochemical outcome of high-dose 3DCRT for low-, intermediate- and high-risk patients and of long-term AAD in high-risk prostate cancer patients. abstract_id: PUBMED:27274277 Can we avoid high levels of dose escalation for high-risk prostate cancer in the setting of androgen deprivation? Aim: Both dose-escalated external beam radiotherapy (DE-EBRT) and androgen deprivation therapy (ADT) improve outcomes in patients with high-risk prostate cancer. However, there is little evidence specifically evaluating DE-EBRT for patients with high-risk prostate cancer receiving ADT, particularly for EBRT doses >74 Gy. We aimed to determine whether DE-EBRT >74 Gy improves outcomes for patients with high-risk prostate cancer receiving long-term ADT. Patients And Methods: Patients with high-risk prostate cancer were treated on an institutional protocol prescribing 3-6 months of neoadjuvant ADT and DE-EBRT, followed by 2 years of adjuvant ADT. Between 2006 and 2012, EBRT doses were escalated from 74 Gy to 76 Gy and then to 78 Gy. We interrogated our electronic medical record to identify these patients and analyzed our results by comparing dose levels. Results: In all, 479 patients were treated with a 68-month median follow-up. The 5-year biochemical disease-free survivals for the 74 Gy, 76 Gy, and 78 Gy groups were 87.8%, 86.9%, and 91.6%, respectively.
The metastasis-free survivals were 95.5%, 94.5%, and 93.9%, respectively, and the prostate cancer-specific survivals were 100%, 94.4%, and 98.1%, respectively. Dose escalation had no impact on any outcome in either univariate or multivariate analysis. Conclusion: There was no benefit of DE-EBRT >74 Gy in our cohort of high-risk prostate patients treated with long-term ADT. As dose escalation has higher risks of radiotherapy-induced toxicity, it may be feasible to omit dose escalation beyond 74 Gy in this group of patients. Randomized studies evaluating dose escalation for high-risk patients receiving ADT should be considered. abstract_id: PUBMED:27073327 Can we avoid dose escalation for intermediate-risk prostate cancer in the setting of short-course neoadjuvant androgen deprivation? Background: Both dose-escalated external beam radiotherapy (DE-EBRT) and androgen deprivation therapy (ADT) improve the outcomes in patients with intermediate-risk prostate cancer. Despite this, there are only a few reports evaluating DE-EBRT for patients with intermediate-risk prostate cancer receiving neoadjuvant ADT, and virtually no studies investigating dose escalation >74 Gy in this setting. We aimed to determine whether DE-EBRT >74 Gy improved the outcomes for patients with intermediate-risk prostate cancer who received neoadjuvant ADT. Findings: In our institution, patients with intermediate-risk prostate cancer were treated with neoadjuvant ADT and DE-EBRT, with doses sequentially increasing from 74 Gy to 76 Gy and then to 78 Gy between 2006 and 2012. We identified 435 patients treated with DE-EBRT and ADT, with a median follow-up of 70 months. For the 74 Gy, 76 Gy, and 78 Gy groups, five-year biochemical disease-free survival rates were 95.0%, 97.8%, and 95.3%, respectively; metastasis-free survival rates were 99.1%, 100.0%, and 98.6%, respectively; and the prostate cancer-specific survival rate was 100% for all three dose levels. There was no significant benefit for dose escalation on either univariate or multivariate analysis for any outcome. Conclusion: There was no benefit for DE-EBRT >74 Gy in our cohort of intermediate-risk prostate cancer patients treated with neoadjuvant ADT. Given the higher risks of toxicity associated with dose escalation, it may be feasible to omit dose escalation in this group of patients. Randomized studies evaluating dose de-escalation should be considered. abstract_id: PUBMED:18191332 Dose escalation for prostate cancer using the three-dimensional conformal dynamic arc technique: analysis of 542 consecutive patients. Purpose: To present the results of dose escalation using three-dimensional conformal dynamic arc radiotherapy (3D-ART) for prostate cancer. Methods And Materials: Five hundred and forty-two T1-T3N0M0 prostate cancer patients were treated with 3D-ART. Dose escalation (from 76 Gy/38 fractions to 80 Gy/40 fractions) was introduced in September 2003; 32% of patients received 80 Gy. In 366 patients, androgen deprivation was added to 3D-ART. Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer criteria and the Houston definition (nadir + 2) were used for toxicity and biochemical failure evaluation, respectively. Median follow-up was 25 months. Results: Acute toxicity included rectal (G1-2 28.9%; G3 0.5%) and urinary events (G1-2 57.9%; G3-4 2.4%). Late toxicity included rectal (G1-2 15.8%; G3-4 3.1%) and urinary events (G1-2 26.9%; G3-4 1.6%).
Two-year failure-free survival and overall survival rates were 94.1% and 97.9%, respectively. Poor prognostic group (GS, iPSA, T), transurethral prostate resection, and dose >76 Gy showed significant associations with a high risk of progression in multivariate analysis (p = 0.014, p = 0.045, and p = 0.04, respectively). The negative effect of dose >76 Gy was not observed (p = 0.10) when the analysis was limited to 353 patients treated after September 2003 (when dose escalation was introduced). Higher dose was not associated with higher late toxicity. Conclusions: Three-dimensional ART is a feasible modality allowing for dose escalation (no increase in toxicity has been observed with higher doses). However, the dose increase from 76 to 80 Gy was not associated with better tumor outcome. Further investigation is warranted for better understanding of the dose effect for prostate cancer. abstract_id: PUBMED:31061806 Dose escalation of external beam radiotherapy for high-risk prostate cancer - impact of multiple high-risk factors. Objective: To retrospectively investigate the treatment outcomes of external beam radiotherapy with androgen deprivation therapy (ADT) in high-risk prostate cancer in three radiotherapy dose groups. Methods: Between 1998 and 2013, patients with high-risk prostate cancer underwent three-dimensional conformal radiotherapy or intensity-modulated radiotherapy of 66 Gy, 72 Gy, or 78 Gy with ADT. Prostate-specific antigen (PSA) relapse was defined using the Phoenix definition. PSA relapse-free survival (PRFS) was evaluated in each radiotherapy dose group. Moreover, high-risk patients were divided into H-1 (patients with multiple high-risk factors) and H-2 (patients with a single high-risk factor) subgroups. Results: Two hundred and eighty-nine patients with a median follow-up period of 77.3 months were analyzed in this study. The median duration of ADT was 10.1 months. Age, Gleason score, T stage, and radiotherapy dose influenced PRFS with statistical significance in both univariate and multivariate analyses. The 4-year PRFS rates in Group-66 Gy, Group-72 Gy and Group-78 Gy were 72.7%, 81.6% and 90.3%, respectively. PRFS rates in the H-1 subgroup differed significantly by dose, with higher radiotherapy doses yielding more favorable PRFS, while PRFS rates in the H-2 subgroup did not differ with increasing radiotherapy dose. Conclusion: Dose escalation for high-risk prostate cancer in combination with ADT improved PRFS. PRFS for patients in the H-1 subgroup was poor, but dose escalation in those patients was beneficial, while dose escalation in the H-2 subgroup was not proven to be effective for improving PRFS. abstract_id: PUBMED:9166467 The feasibility of dose escalation with three-dimensional conformal radiotherapy in patients with prostatic carcinoma. Purpose: To evaluate the acute morbidity, late toxicity, and response to treatment in patients with prostate cancer treated on a phase I dose-escalation study with three-dimensional conformal radiotherapy. Methods: A group of 432 patients with stages T1c-T3 prostate cancer were treated with three-dimensional conformal radiotherapy targeting the prostate and seminal vesicles, but effectively excluding the surrounding normal tissue structures from the high-dose volume. A minimum tumor dose of 64.8 to 66.6 Gy was given to 89 patients (20%), 70.2 Gy to 199 patients (46%), 75.6 Gy to 98 patients (23%), and 81.0 Gy to 46 patients (11%).
Results: Treatment was well tolerated, and the acute toxicities and long-term complications observed were of minimal severity (grade 1 or 2) regardless of dose. Acute grade 2 rectal symptoms were observed in 15% of patients, whereas 40% developed grade 2 urinary symptoms. Among patients who received from 64.8 to 70.2 Gy, the 2-year actuarial likelihood of grade 2 late toxicity was 2% for rectal and 1% for urinary complications, compared to 11% and 5%, respectively, for those treated with doses ranging from 75.6 to 81 Gy. Only three patients (0.7%) have so far developed severe (grade 3 or 4) late urethral or rectal complications. The rate of prostate-specific antigen normalization from abnormal pretreatment levels to a value of ≤1.0 ng/mL was used as an endpoint to evaluate the initial response to treatment. When the analysis was restricted to patients with pretreatment prostate-specific antigen levels of ≤20 ng/mL, patients who received 70.2 Gy had a significantly higher rate of prostate-specific antigen normalization than patients who received 64.8 to 66.6 Gy. Evaluation of the prostate-specific antigen response at 75.6 Gy and 81.0 Gy was not possible because of the short follow-up time in many of these patients. Conclusions: The three-dimensional conformal radiotherapy technique has made it possible to safely escalate radiation doses to unprecedented levels in patients with prostatic cancer. Preliminary evidence for an improved initial prostate-specific antigen response with higher doses indicates a potential for an improved therapeutic ratio with the three-dimensional conformal radiotherapy approach. abstract_id: PUBMED:16170164 Risk-adapted androgen deprivation and escalated three-dimensional conformal radiotherapy for prostate cancer: Does radiation dose influence outcome of patients treated with adjuvant androgen deprivation? A GICOR study. Purpose: A multicenter study was conducted to determine the impact on biochemical control and survival of risk-adapted androgen deprivation (AD) combined with high-dose three-dimensional conformal radiotherapy (3DCRT) for prostate cancer. Results of biochemical control are reported. Patients And Methods: Between October 1999 and October 2001, 416 eligible patients with prostate cancer were assigned to one of three treatment groups according to their risk factors: 181 low-risk patients were treated with 3DCRT alone; 75 intermediate-risk patients were allocated to receive neoadjuvant AD (NAD) 4-6 months before and during 3DCRT; and 160 high-risk patients received NAD and adjuvant AD (AAD) for 2 years after 3DCRT. Stratification was performed for treatment/risk group and total radiation dose. Results: After a median follow-up of 36 months (range, 18 to 63 months), the actuarial biochemical disease-free survival (bDFS) at 5 years for all patients was 74%. The corresponding figures for low-risk, intermediate-risk, and high-risk disease were 80%, 73%, and 79%, respectively (P = .847). Univariate analysis showed that higher radiation dose was the only significant factor associated with bDFS for all patients (P = .0004). When stratified by treatment group, this benefit was evident for low-risk patients (P = .009) and, more interestingly, for high-risk patients treated with AAD. The 5-year bDFS for high-risk patients treated with AAD was 63% for radiation doses less than 72 Gy and 84% for those ≥72 Gy (P = .003). Conclusion: The results of combined AAD plus high-dose 3DCRT are encouraging.
To our knowledge, this is the first study showing an additional benefit of high-dose 3DCRT when combined with long-term AD for unfavorable disease. abstract_id: PUBMED:30862436 Variations in patterns of concurrent androgen deprivation therapy use based on dose escalation with external beam radiotherapy vs. brachytherapy boost for prostate cancer. Purpose: Retrospective data suggest less benefit from androgen deprivation therapy (ADT) in the setting of dose-escalated definitive radiation for prostate cancer, especially when a combination of external beam radiotherapy (EBRT) and brachytherapy approaches is used. This study aimed to test the hypothesis that patients with intermediate- or high-risk prostate cancer undergoing extreme dose escalation with a brachytherapy boost are less likely to receive ADT. Methods And Materials: Data from the National Cancer Database were extracted for men aged 40-90 years diagnosed with node-negative, non-metastatic prostate cancer from 2004 to 2015. Only patients with intermediate- or high-risk disease who were treated with definitive radiotherapy were included. The association and patterns of care between dose-escalated radiotherapy and ADT receipt were assessed using multivariable logistic regression. Results: Patients with unfavorable intermediate- and high-risk prostate cancer were significantly less likely to receive ADT if they underwent dose escalation with a combination of EBRT and brachytherapy (odds ratio 0.67, p < 0.0001). Over time, this decrease in ADT utilization has widened for patients with unfavorable intermediate-risk disease. There was no difference in ADT utilization when comparing patients treated with non-dose-escalated EBRT to those treated with dose-escalated EBRT (without brachytherapy). Conclusion: In this large national database, patients with unfavorable intermediate- and high-risk prostate cancer were significantly less likely to receive guideline-indicated ADT if they underwent extreme dose escalation with combined radiation modalities. As we await prospective data guiding the utility of ADT with dose-escalated radiation, these findings suggest potential underutilization of ADT in patients at higher risk of advanced disease. abstract_id: PUBMED:10367171 Survival advantage for prostate cancer patients treated with high-dose three-dimensional conformal radiotherapy. Purpose: The value of treating prostate cancer has been questioned, and some insist that a survival benefit be demonstrated to justify treatment. Prospective dose-escalation studies with the three-dimensional conformal radiotherapy technique have demonstrated improvement in biochemical freedom from disease and local control. We report the outcomes of high-dose treatment with three-dimensional conformal radiotherapy compared with low-dose treatment for biochemical freedom from disease, freedom from distant metastasis, cause-specific survival, and overall survival. Patients And Methods: The study design was retrospective, involving pairs matched on independent prognostic variables, in which each patient treated with low-dose radiotherapy was matched with a patient treated with high-dose radiotherapy. Outcomes were compared for two groups of patients. Group I (three-dimensional conformal radiotherapy treatment): 296 patients treated with more than 74 Gy, matched on stage, grade, and prostate-specific antigen level to 296 patients treated with less than 74 Gy.
Group II (three-dimensional conformal radiotherapy treatment): 357 patients treated with more than 74 Gy, matched on stage and grade to 357 patients treated with less than 74 Gy. Results: Univariate analysis showed that dose is a significant predictor of biochemical freedom from disease, freedom from distant metastasis, and cause-specific survival for group I, and of biochemical freedom from disease, freedom from distant metastasis, cause-specific survival, and overall survival for group II. Multivariate analysis showed that dose is a significant independent predictor of biochemical freedom from disease and freedom from distant metastasis in group I, and of biochemical freedom from disease, freedom from distant metastasis, cause-specific survival, and overall survival in group II. Discussion: These data provide strong support for the definitive treatment of prostate cancer with high-dose (>74 Gy) three-dimensional conformal radiotherapy. These doses can be safely delivered with three-dimensional conformal radiotherapy techniques. Various institutions and industry must collaborate to expand the technology, allowing the use of high-dose three-dimensional conformal radiotherapy in national practice beyond centers of technological excellence. abstract_id: PUBMED:29151227 Significance of prostate-specific antigen kinetics after three-dimensional conformal radiotherapy with androgen deprivation therapy in patients with localized prostate cancer. Background: To evaluate the relationship between biochemical recurrence and post-radiation prostate-specific antigen (PSA) kinetics in patients with localized prostate cancer treated by radiotherapy with various durations of androgen deprivation therapy (ADT). Methods: We reviewed our single-institution, retrospectively maintained data of 144 patients with T1c-T3N0M0 prostate cancer who underwent three-dimensional conformal radiotherapy (3D-CRT) between December 2005 and December 2015; 113 patients fulfilled the inclusion criteria. In this cohort, 3D-CRT was delivered at a dose of 70.0 to 72.0 Gy with ADT. All patients received ADT as a concurrent regimen. Biochemical recurrence was defined on the basis of the following: "PSA nadir + 2.0 ng/ml or the clinical judgement of attending physicians". Kaplan-Meier, log-rank, and Cox regression analyses were carried out. Results: The median follow-up period was 54.0 months. The median duration of ADT was 17 months (interquartile range, 10-24 months). There was a trend toward a statistically significant correlation between a post-radiation PSA decline rate of ≥ 90% and PSA recurrence (p = 0.056). The same correlation was observed in D'Amico high-risk patients (p = 0.036). However, it was not observed between PSA nadir and PSA recurrence (p = 0.40) in univariate analysis. Furthermore, multivariate analysis showed that a post-radiation PSA decline rate of ≥ 90% was a significant predictor of biochemical recurrence in patients who received radiotherapy with various durations of ADT (p = 0.044). Conclusions: A post-radiation PSA decline rate of ≥ 90% was a prognostic factor for biochemical recurrence in localized prostate cancer patients who received 3D-CRT with various durations of ADT. Answer: The question of whether higher doses of three-dimensional conformal radiotherapy (3DCRT) improve outcomes in high-risk prostate cancer patients treated with androgen deprivation therapy (ADT) has been explored in several studies, with mixed results.
One study found that high-dose 3DCRT provided an independent benefit on biochemical outcome for low-, intermediate-, and high-risk patients, and that long-term adjuvant ADT (AAD) contributed significantly to improved outcomes in high-risk prostate cancer patients (PUBMED:17051950). This suggests that more dose can indeed be better for high-risk patients, particularly when combined with long-term ADT. However, another study found no benefit of dose-escalated external beam radiotherapy (DE-EBRT) greater than 74 Gy in a cohort of high-risk prostate cancer patients treated with long-term ADT (PUBMED:27274277). This study suggests that dose escalation beyond 74 Gy may not be necessary in this group of patients and that randomized studies evaluating dose escalation for high-risk patients receiving ADT should be considered. Similarly, a study on intermediate-risk prostate cancer patients receiving short-course neoadjuvant ADT found no benefit for DE-EBRT greater than 74 Gy, indicating that dose escalation may be omitted in this group of patients (PUBMED:27073327). Another study reported that dose escalation from 76 Gy to 80 Gy using three-dimensional conformal dynamic arc radiotherapy (3D-ART) was not associated with better tumor outcomes, although higher doses did not increase late toxicity (PUBMED:18191332). In contrast, a retrospective study indicated that dose escalation for high-risk prostate cancer in combination with ADT improved prostate-specific antigen relapse-free survival (PRFS), especially in patients with multiple high-risk factors (PUBMED:31061806). Overall, the evidence suggests that while higher doses of 3DCRT may benefit certain high-risk prostate cancer patients, especially when combined with long-term ADT, the benefit of dose escalation beyond 74 Gy is not consistently supported across studies. Further research, including randomized controlled trials, is needed to clarify the role of dose escalation in the context of ADT for high-risk prostate cancer patients.
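Several of the studies above estimate biochemical disease-free survival with Kaplan-Meier curves and compare dose or risk groups with log-rank tests (e.g., the P = 0.031 comparison in PUBMED:17051950). The minimal sketch below shows how such a comparison can be set up in Python with the lifelines library; the cohort, follow-up times, and event indicators are invented placeholders, not data from any cited study.

```python
# Hedged sketch: Kaplan-Meier estimates and a log-rank test across
# radiotherapy dose groups. All values are synthetic illustrations.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical cohort: follow-up (months), biochemical failure indicator
# (1 = failure, 0 = censored), and assigned dose group.
df = pd.DataFrame({
    "months":  [12, 34, 60, 59, 24, 48, 70, 65, 30, 55, 66, 40],
    "failure": [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1],
    "dose":    ["74 Gy"] * 4 + ["76 Gy"] * 4 + ["78 Gy"] * 4,
})

kmf = KaplanMeierFitter()
for dose, grp in df.groupby("dose"):
    kmf.fit(grp["months"], grp["failure"], label=dose)
    # 5-year (60-month) biochemical disease-free survival estimate
    print(dose, "bDFS at 60 months:", round(float(kmf.predict(60)), 2))

# Log-rank test across the three dose groups, analogous to the reported
# univariate dose comparisons.
result = multivariate_logrank_test(df["months"], df["dose"], df["failure"])
print("log-rank p-value:", result.p_value)
```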
Instruction: Is microdialysis useful for early detection of acute rejection after kidney transplantation? Abstracts: abstract_id: PUBMED:25865085 Is microdialysis useful for early detection of acute rejection after kidney transplantation? Introduction: Acute rejection following kidney transplantation (KTx) is still one of the challenging complications leading to chronic allograft failure. The aim of this study was to investigate the role of microdialysis (MD) in the early detection of acute graft rejection following KTx in a porcine model. Methods: Sixteen pigs were randomized after KTx into case (n = 8, without immunosuppressant) and control groups (n = 8, with immunosuppressant). The rejection diagnosis in our groups was confirmed by histopathological evidence as "acute borderline rejection". Using MD, we monitored the interstitial concentrations of glucose, lactate, pyruvate, glutamate and glycerol in the transplanted grafts after reperfusion. Results: In the early post-reperfusion phase, the lactate level in our case group was significantly higher compared with the control group and remained higher until the end of monitoring. The lactate to pyruvate ratio showed a considerable increase in the case group during the post-reperfusion phase. The other metabolites (glucose, glycerol, glutamate) were at nearly the same levels in both study groups at the end of monitoring. Conclusion: The increase in lactate and the lactate to pyruvate ratio seems to be an indicator for early detection of acute rejection after KTx. Therefore, MD as a minimally invasive measurement tool may help to identify the need for immunosuppression adjustment in the early KTx phase, before the clinical manifestation of rejection. abstract_id: PUBMED:21719714 Using microdialysis for early detection of vascular thrombosis after kidney transplantation in an experimental porcine model. Background: In kidney transplantation (KTx), vascular thrombosis has a major impact on morbidity and graft survival. The ischaemia caused by thrombosis can lead to interstitial metabolite changes. The aim of this experimental study was to create conditions in which the graft would be prone to vascular thrombosis following KTx and then to evaluate the role of microdialysis (MD) for its early detection. Methods: Sixteen pigs were randomized; the control group received heparin and immunosuppressive drugs, while the case group received none. Based on histopathological evidence of vascular thrombosis, the case group was subdivided into mildly and severely congested subgroups. Using MD, we evaluated the interstitial glucose concentration, the lactate to pyruvate ratio, and the glutamate and glycerol concentrations in the transplanted grafts during different phases of KTx. Results: Following reperfusion, we noted considerable changes. The severely congested subgroup showed a low and decreasing level of glucose. Only in this group did the lactate to pyruvate ratio continue to increase until the end of monitoring. The glycerol level increased continuously in the entire case group, and this increase was most significant in the severely congested subgroup. In all of the study groups, glutamate concentration remained in a low steady state until the end of monitoring. Conclusion: MD can be an appropriate method for early detection of vascular complications after KTx. Decreasing glucose levels, an increased lactate to pyruvate ratio and increased glycerol levels are appropriate indicators for early detection of vascular thromboses following KTx.
In particular, the glycerol level could predict the necessity and urgency of intervention needed to ultimately save the transplanted kidney. abstract_id: PUBMED:32543055 COQ8B nephropathy: Early detection and optimal treatment. Background: Mutations in COQ8B (*615567), as a defect of coenzyme Q10 (CoQ10), cause steroid-resistant nephrotic syndrome (SRNS). Methods: To define the clinical course and prognosis of COQ8B nephropathy, we retrospectively assessed the genotype and phenotype of patients with COQ8B mutations from the Chinese Children Genetic Kidney Disease Database. We compared renal outcomes following CoQ10 treatment and renal transplantation between the early and delayed genetic detection groups. Results: We identified 20 (5.8%) patients with biallelic COQ8B mutations by screening patients with SRNS, non-nephrotic proteinuria, or chronic kidney disease (CKD) of unknown origin. Patients with COQ8B mutations showed a largely renal-limited phenotype, presenting with proteinuria and/or advanced CKD at the time of diagnosis. Renal biopsy uniformly showed focal segmental glomerulosclerosis. Proteinuria decreased and renal function was preserved in five patients following CoQ10 administration combined with an angiotensin-converting enzyme (ACE) inhibitor. The renal survival analysis disclosed a significantly better outcome in the early genetic detection group than in the delayed genetic detection group (Kaplan-Meier plot and log-rank test, p = .037). Seven patients underwent deceased donor renal transplantation without recurrence of proteinuria or graft failure. Blood pressure decreased significantly during 6 to 12 months post-transplantation. Conclusions: COQ8B mutations are one of the most common causes of adolescent-onset proteinuria and/or CKD of unknown etiology in Chinese children. Early detection of COQ8B nephropathy, followed by CoQ10 supplementation combined with an ACE inhibitor, could slow the progression of renal dysfunction. Renal transplantation in patients with COQ8B nephropathy showed no recurrence of proteinuria. abstract_id: PUBMED:19719730 Chronic allograft nephropathy, a clinical syndrome: early detection and the potential role of proliferation signal inhibitors. Chronic allograft nephropathy (CAN) leads to the majority of late graft loss following renal transplantation. Detection of CAN is often too late to permit early intervention and successful management. Most current strategies for managing CAN rely on minimizing or eliminating calcineurin inhibitors (CNIs) once CAN has become established. The proliferation signal inhibitors everolimus and sirolimus have potent immunosuppressive and antiproliferative actions, with the potential to alter the natural history of CAN by reducing CNI exposure whilst avoiding acute rejection. Whilst data will be forthcoming from a number of clinical trials investigating this potential, we discuss early detection of CAN and the rationale for a role for this class of agent. abstract_id: PUBMED:20844185 C4d staining in post-reperfusion renal biopsy is not useful for the early detection of antibody-mediated rejection when CDC crossmatching is negative. Background: Sensitized patients (pts) may develop acute antibody-mediated rejection (AMR) due to preformed donor-specific antibodies, undetected by pre-transplant complement-dependent cytotoxicity (CDC) crossmatch (XM).
We hypothesized that C4d staining in 1-h post-reperfusion biopsies (1-h Bx) could detect early complement activation in the renal allograft due to preformed donor-specific antibodies. Methods: To test this hypothesis, renal transplants (n = 229) performed between June 2005 and December 2007 were entered into a prospective study of 1-h Bx and stained for C4d by immunofluorescence. Transplants were performed against a negative T-cell CDC-XM, with the exception of three cases with a positive B-cell XM. Results: All 229 1-h Bx stained negative for C4d. Fourteen pts (6%) developed AMR. None of the 14 protocol 1-h Bx stained positive for C4d in peritubular capillaries (PTC). However, all indication biopsies that diagnosed AMR, performed at a median of 8 days after transplantation, stained for C4d in PTC. Conclusions: These data show that C4d staining in 1-h Bx is, in general, not useful for the early detection of AMR when the CDC-XM is negative. abstract_id: PUBMED:6361745 Early detection of obstructed ureter by ultrasound following renal transplantation. Serial ultrasound examinations were carried out following 144 renal transplants. Eleven patients (8%) required surgery for ureteric obstruction, and in all cases the ultrasound correctly identified the obstruction at an early stage. One false positive result was obtained with the ultrasound, and this compared favourably with both the intravenous urogram (IVU) and the isotope renogram. There were false positive and false negative results with both the IVU and the renogram; moreover, neither of these techniques, particularly the IVU, is as simple or atraumatic for the patient as ultrasound. Serial ultrasound examinations have a useful role in the detection of ureteric obstruction as well as being of value in the detection of perinephric fluid collections and acute rejection. abstract_id: PUBMED:22902496 Chronic renal allograft injury: early detection, accurate diagnosis and management. Chronic renal allograft injury (CRAI) is a multifactorial clinical/pathological entity characterised by a progressive decrease in glomerular filtration rate, generally associated with proteinuria and arterial hypertension. Classical views tried to distinguish between immunological (sensitization, low HLA compatibility, acute rejection episodes) and non-immunological factors (donor age, delayed graft function, calcineurin inhibitor [CNI] toxicity, arterial hypertension, infections) contributing to its development. Defining it as a generic idiopathic entity has precluded more comprehensive attempts at therapeutic options. Consequently, the diagnostic work-up must be reinforced to provide an etiopathogenetic diagnosis in any case of graft dysfunction, especially transplant vasculopathy and transplant glomerulopathy, reserving the term interstitial fibrosis and tubular atrophy (IFTA) for nonspecific cases of CRAI in which no clear contributing factors or specific etiology can be identified. Earlier detection and intervention of CRAI remain key challenges for transplant physicians. Changes in SCr levels and proteinuria often occur late in disease progression and may not accurately represent the underlying renal damage. Deterioration of renal function over time, determined through slope analysis, is a more accurate indicator of CRAI, and earlier identification of renal deterioration may prompt earlier changes in immunosuppressive therapies.
The crucial point is probably to distinguish between nonimmunological or toxic CRAI and immunologically derived CRAI cases. Conversion to nonnephrotoxic immunosuppressants, such as mTOR inhibitors, holds promise in reducing the impact of toxic CRAI, both by avoiding or reducing CNI exposure and by reducing smooth muscle cell proliferation in the kidney. CRAI due to chronic antibody-mediated rejection is an important and increasingly well-defined entity that carries a poor prognosis and is associated with graft loss. The best prevention is adequate immunosuppression and tight patient monitoring from the clinical, analytical and histological standpoints. While clinical trial evidence is needed for early detection and intervention in patients with CRAI, this review represents the current knowledge upon which clinicians can base their strategies. New prospective, ideally well-controlled trials are needed to establish the usefulness of different potentially therapeutic regimens. Such evidence should demonstrate benefit before the extended, uncontrolled use of drugs such as rituximab, bortezomib or eculizumab, which are expensive and frequently iatrogenic. abstract_id: PUBMED:21865292 Genomic-derived markers for early detection of calcineurin inhibitor immunosuppressant-mediated nephrotoxicity. Calcineurin inhibitor (CI) therapy has been associated with chronic nephrotoxicity, which limits its long-term utility for suppression of allograft rejection. In order to understand the mechanisms of the toxicity, we analyzed gene expression changes that underlie the development of CI immunosuppressant-mediated nephrotoxicity in male Sprague-Dawley rats dosed daily with cyclosporine (CsA; 2.5 or 25 mg/kg/day), FK506 (0.6 or 6 mg/kg/day), or rapamycin (1 or 10 mg/kg/day) for 1, 7, 14, or 28 days. A significant increase in blood urea nitrogen was observed in animals treated with CsA (high) or FK506 (high) for 14 and 28 days. Histopathological examination revealed tubular basophilia and mineralization in animals given CsA (high) or FK506 (low and high). We identified a group of genes whose expression in rat kidney is correlated with CI-induced kidney injury. Among these are two genes, Slc12a3 and kidney-specific Wnk1 (KS-Wnk1), that are known to be involved in sodium transport in the distal nephron and could potentially be involved in the mechanism of CI-induced nephrotoxicity. The downregulation of NCC (the Na-Cl cotransporter encoded by Slc12a3) in rat kidney following CI treatment was confirmed by immunohistochemical staining, and the downregulation of KS-Wnk1 was confirmed by quantitative real-time polymerase chain reaction (qRT-PCR). We hypothesize that decreased expression of Slc12a3 and KS-Wnk1 could alter sodium chloride reabsorption in the distal tubules and contribute to the prolonged activation of the renin-angiotensin system, a demonstrated contributor to the development of CI-induced nephrotoxicity in both animal models and clinical settings. Therefore, if validated as biomarkers in humans, SLC12A3 and KS-WNK1 could potentially be useful in the early detection and reduction of CI-related nephrotoxicity in immunosuppressed transplant patients when monitoring the health of kidney grafts in clinical practice. abstract_id: PUBMED:9067691 Increased beta 2-microglobulin (B2M) is useful in the detection of post-transplant lymphoproliferative disease (PTLD).
This study examines whether changes in beta 2-microglobulin (B2M) serum levels are useful in the early detection of post-transplant lymphoproliferative disease (PTLD). Serum B2M is monitored daily post-transplant at our center as a marker of change in lymphocyte activation. We identified 16 cases (16/1359; 1.2%) of PTLD from among 1359 kidney and kidney-pancreas transplants. Those with CNS lymphoma (two patients) and titer change only (one) were not included in this review. Thirteen patients had serum titer and clinical evidence of EBV activity; 12 of these patients had histological evidence of PTLD (lymph node 6, kidney 3, and generalized disease 3). Three patients died with disseminated PTLD infection. Nine are alive, but only two have the original transplant kidney. All patients received quadruple immunosuppression for induction, and 11 were subsequently treated with OKT3 or ALG for rejection. The mean number of days of induction ALG therapy was 14.8 d (20 mg/kg/d). The mean number of days of OKT3 therapy for rejection was 14.4 d (5 mg/d). During rejection, the highest mean creatinine level was 6.8 mg/dL, and the highest mean B2M level was 16.4 mg/L. With PTLD, the highest mean creatinine level was 7.0 mg/dL and the highest mean B2M level was 32.3 mg/L. The difference in creatinine levels was not significant, but the difference in B2M levels was significant (p < 0.01). We conclude that B2M levels are useful markers in differentiating rejection from PTLD. abstract_id: PUBMED:37676965 Implantable bioelectronic systems for early detection of kidney transplant rejection. Early-stage organ transplant rejection can be difficult to detect. Percutaneous biopsies occur infrequently and are risky, and measuring biomarker levels in blood can lead to false-negative and -positive outcomes. We developed an implantable bioelectronic system capable of continuous, real-time, long-term monitoring of the local temperature and thermal conductivity of a kidney for detecting inflammatory processes associated with graft rejection, as demonstrated in rat models. The system detects ultradian rhythms, disruption of the circadian cycle, and/or a rise in kidney temperature. These provide warning signs of acute kidney transplant rejection that precede changes in blood serum creatinine/urea nitrogen by 2 to 3 weeks in cases of discontinued immunosuppressive therapy and by approximately 3 days in cases in which immunosuppressive therapy was never administered. Answer: Yes, microdialysis (MD) appears to be useful for the early detection of acute rejection after kidney transplantation. A study conducted in a porcine model showed that the lactate level in the case group (without immunosuppressant) was significantly higher than in the control group (with immunosuppressant) in the early post-reperfusion phase. The lactate to pyruvate ratio also showed a considerable increase in the case group during the post-reperfusion phase. These findings suggest that an increase in lactate and the lactate to pyruvate ratio could be an indicator for early detection of acute rejection after kidney transplantation. Therefore, MD, as a minimally invasive measurement tool, may help to identify the need for immunosuppression adjustment in the early phase of kidney transplantation before the clinical manifestation of rejection (PUBMED:25865085).
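The microdialysis findings above reduce to simple arithmetic: the lactate-to-pyruvate (L/P) ratio is recomputed at each sampling interval, and a sustained rise is read as an early warning sign of rejection or ischemia. Below is a minimal sketch of that monitoring logic in Python; the threshold, alarm rule, and sample values are hypothetical illustrations, since the cited porcine studies do not report a validated clinical cutoff.

```python
# Hedged sketch: flag a sustained rise in the microdialysis L/P ratio.
from dataclasses import dataclass

@dataclass
class MicrodialysisSample:
    minutes_post_reperfusion: int
    lactate_mmol_l: float
    pyruvate_umol_l: float

    @property
    def lp_ratio(self) -> float:
        # The L/P ratio is dimensionless: lactate and pyruvate must be in
        # the same units; pyruvate is usually reported in umol/L, hence /1000.
        return self.lactate_mmol_l / (self.pyruvate_umol_l / 1000.0)

def sustained_rise(samples, threshold=25.0, consecutive=3):
    """Return True if the L/P ratio exceeds `threshold` in `consecutive`
    successive samples (a hypothetical alarm rule, not a clinical one)."""
    run = 0
    for s in samples:
        run = run + 1 if s.lp_ratio > threshold else 0
        if run >= consecutive:
            return True
    return False

samples = [
    MicrodialysisSample(30, 2.0, 120.0),   # L/P ~ 16.7
    MicrodialysisSample(60, 3.1, 110.0),   # L/P ~ 28.2
    MicrodialysisSample(90, 3.6, 105.0),   # L/P ~ 34.3
    MicrodialysisSample(120, 4.0, 100.0),  # L/P = 40.0
]
print(sustained_rise(samples))  # True: three consecutive samples above 25
```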
Instruction: Is Bentall Procedure Still the Gold Standard for Acute Aortic Dissection with Aortic Root Involvement? Abstracts: abstract_id: PUBMED:26090885 Is Bentall Procedure Still the Gold Standard for Acute Aortic Dissection with Aortic Root Involvement? Introduction: The "ideal" treatment of acute aortic dissection type A (AADA) with a dissected and dilated root is controversial. We compared the outcome of the classical Bentall procedure (biological and mechanical) with the valve-sparing David procedure. Methods: Between January 2002 and July 2011, 119 patients with AADA and aortic root involvement underwent surgery at our center. Thirty-one patients (group 1) received biological conduits, 41 (group 2) received mechanical conduits, and 47 (group 3) underwent David procedures. Results: Cross-clamp, cardiopulmonary bypass, and circulatory arrest times were 151 ± 52, 232 ± 84, and 36 ± 30 minutes (group 1); 148 ± 44, 237 ± 91, and 45 ± 29 minutes (group 2); and 160 ± 46, 231 ± 63, and 35 ± 17 minutes (group 3), respectively. The 30-day mortality rates were 32.3% (group 1), 22% (group 2), and 12.8% (group 3). The 1-year rates for freedom from valve-related reoperation were 100% (group 1), 92.5% (group 2), and 95.2% (group 3) (p = 0.172). The 1-year survival rates were 61% (group 1), 61% (group 2), and 84.1% (group 3) (p = 0.008). Conclusion: Even in AADA patients with root involvement, the David procedure has acceptable results. The David procedure (when feasible) or a bio-Bentall (for pathological valves) seems to be the optimal technique. abstract_id: PUBMED:33691047 Bentall-de Bono procedure for acute aortic dissection. We present a patient with an acute type A aortic dissection involving the aortic root. The high mortality of patients with this condition is often associated with operations performed by surgeons with minimal experience dealing with aortic diseases. Therefore, less-experienced surgeons often opt for less complicated techniques like supracoronary ascending aortic replacement. However, according to the latest guidelines for the management of aortic diseases, the aortic root should be replaced when it is compromised by the dissection. The Bentall-de Bono technique treats the aortic root and demands less experience than valve-sparing aortic surgery. abstract_id: PUBMED:31903260 Root reconstruction for proximal repair in acute type A aortic dissection. Background: We retrospectively compared the results of root reconstruction and root replacement in acute type A aortic dissection (ATAAD) patients and observed the rates of aortic insufficiency (AI) and aortic root dilation over the midterm follow-up period. Methods: From 2008 to 2016, 427 ATAAD patients received surgical therapy in our center. There were 328 male and 99 female patients, aged from 22 to 83 years, with a mean age of 51.1 ± 12.5 years. These patients were divided into two major groups: 298 cases with root reinforcement reconstruction (Root Reconstruction) and 129 cases with the Bentall procedure (Root Replacement). Results: The 30-day mortality was 7.7% (33/427), with no difference between the two procedures (8.1% and 7.0%, P=0.844). Cross-clamp, cardiopulmonary bypass, and circulatory arrest times of all the patients were 252.5 ± 78.1, 173.6 ± 68.9, and 30.7 ± 9.5 minutes, respectively. Over an average follow-up of 34.5 ± 26.1 months, midterm survival rates were similar between the two procedures (86.2% and 86.0%, P=0.957).
Only one patient, in the Root Reconstruction group, required a redo Bentall procedure because of severe aortic regurgitation and a dilated aortic root (50 mm). Conclusions: The choice of root management in ATAAD is based on the diameter of the aortic root, the structure of the aortic root, and the extent of dissection involvement. For most ATAAD patients, aortic root reinforcement reconstruction is a feasible and safe method. abstract_id: PUBMED:35251818 The David Operation Offers Shorter Hemostasis Time Than the Bentall in Case of Acute Aortic Dissection Type A. Background The aim of the present study was to compare the clinical outcomes of the David operation and the Bentall operation in patients with Stanford type A acute aortic dissection (AADA) from the viewpoint of hemostasis. Methods Between April 2016 and April 2020, 235 patients underwent emergent surgery for AADA. Of them, 38 patients required aortic root replacement (ARR; David operation 17, Bentall operation 21). The mean age was 59.3 ± 12.6 years. In the present series, the David operation was the first choice for relatively young patients, and the Bentall operation was performed for relatively elderly patients and cases in which valve-sparing seemed impossible. Results Between the David and Bentall groups, the 30-day mortality rate did not differ significantly. However, hemostasis time (144.6 ± 50.3 vs. 212.5 ± 138.1 min, p=0.047), defined as the interval from the cessation of cardiopulmonary bypass (CPB) to the end of the operation, and total operation time (477.8 ± 85.7 vs. 578.3 ± 173.6 min, p=0.027) were significantly shorter in the David group than in the Bentall group, and the amount of blood transfused was smaller in the David group than in the Bentall group (red blood cells: 3.5 ± 3.6 vs. 9.2 ± 5.9 units, p=0.013; fresh frozen plasma: 4.1 ± 4.7 vs. 9.4 ± 5.1 units, p=0.002; platelet concentrate: 33.2 ± 11.3 vs. 42.2 ± 12.0 units, p=0.025). Conclusion The David operation offers a shorter hemostasis time, and consequently a shorter operation time, than the Bentall operation in the setting of AADA, probably owing to its double suture lines, despite its surgical complexity. abstract_id: PUBMED:37951742 The mid-term outcomes of aortic-root repair is not inferior to Bentall procedure in acute type-A aortic dissection. Objective: The Bentall procedure has been the standard operation for aortic root involvement in acute type A aortic dissection (ATAAD), but valve-preserving aortic root repair remains controversial. This study aimed to evaluate the midterm outcomes of aortic root repair compared with the Bentall approach. Methods: A retrospective analysis of 1075 ATAAD patients with aortic root involvement was conducted. The patients were divided into an aortic root repair group (n = 447) and a Bentall group (n = 628). Propensity score matching analysis (PSMA) was used to adjust for baseline differences. Results: The median follow-up was 44 months (interquartile range, 17-65 months; range, 1-130 months). The 30-day mortality in the repair and replacement groups was 15.0% and 12.9%, respectively (P = 0.327); the late overall mortality was 15.9% and 14.0%, respectively (P = 0.394). The Kaplan-Meier 10-year survival and freedom-from-reoperation rates in the repair group were 86.0% and 92.5%, respectively. After PSMA, the cumulative survival rate [hazard ratio (HR) 0.685; 95% confidence interval (CI) 0.457-1.027; P = 0.747] and reoperation rate (HR 0.308; 95% CI 0.070-1.355; P = 0.157) were not significantly different between the repair group and the Bentall group.
Conclusion: The mid-term outcome of aortic root repair is probably not inferior to that of the Bentall procedure. Therefore, root repair is an alternative approach in ATAAD, with the advantage of preserving the native valve. abstract_id: PUBMED:31365078 Is valve-sparing root replacement a safe option in acute type A aortic dissection? A systematic review and meta-analysis. Objectives: There are conflicting views regarding the status of valve-sparing root replacement (VSRR) as a proper treatment for acute type A aortic dissection (AAAD). Our goal was to compare the early and late outcomes of VSRR versus those of the Bentall procedure in patients with AAAD. Methods: We performed a systematic review and meta-analysis of 9 studies to compare the outcomes of VSRR with those of the Bentall procedure in patients with AAAD. We focused on the following issues: early and late mortality rates, re-exploration, thromboembolization/bleeding events, infective endocarditis and reintervention rates. Results: A total of 706 patients with AAAD who underwent aortic root surgery were analysed; 254 patients were treated with VSRR and 452 with the Bentall procedure. VSRR was associated with a reduced risk of early death [odds ratio (OR) 0.34; 95% confidence interval (CI) 0.21-0.57] and late death (OR 0.34; 95% CI 0.21-0.57) compared with the Bentall procedure. No statistically significant difference was observed between the VSRR and Bentall groups, with pooled ORs of 0.77 (95% CI 0.47-1.27), 0.61 (95% CI 0.32-1.18) and 0.71 (95% CI 0.23-2.15) for re-exploration, thromboembolization/bleeding and postoperative infective endocarditis, respectively. An increased risk of reintervention was observed for the VSRR group compared with the Bentall group (OR 3.79; 95% CI 1.27-11.30). The pooled reintervention rates were 1.6% (95% CI 0.0-3.7%) and 0.4% (95% CI 0.0-1.3%) for the VSRR and Bentall groups, respectively. Conclusions: VSRR in patients with AAAD can be performed in experienced centres with excellent short- and long-term outcomes compared with the Bentall procedure and thus should be recommended, especially for active young patients. abstract_id: PUBMED:26798728 Giant Aortic Root Aneurysm Presenting as Acute Type A Aortic Dissection. A 49-year-old woman with four months of increasing episodic palpitations, chest pain, and shortness of breath presented to an outside clinic, where a new 4/6 systolic ejection murmur was identified. A transthoracic echocardiogram revealed a large aortic root aneurysm. The patient underwent emergent repair of the dissected root aneurysm with a modified Bentall procedure utilizing a #19 St Jude Valsalva mechanical valve conduit. Postoperatively, she required permanent pacemaker placement. Her echocardiogram showed ejection fraction improvement from a preoperative 25% to a postoperative 35%. She was discharged home on postoperative day 7. abstract_id: PUBMED:35224901 Aortic root reinforcement using the Florida sleeve technique in a patient with acute aortic dissection type A. Stanford type A acute aortic dissection is an inherently lethal condition that is regarded as a surgical emergency. The Bentall procedure is considered the gold standard for patients requiring aortic root replacement. However, this method can be technically difficult for less-experienced surgeons.
Complications encountered after composite graft replacement include distortion of the proximal part of the coronary artery and bleeding from the conduit implant site and the reattached coronary artery origins, generally caused by consumption coagulopathy. In cases for which aortic valve preservation is not applicable and the root is not dissected or dilated, surgeons often opt for less complicated techniques like aortic valve and supracoronary ascending aortic replacement. Nevertheless, these patients carry a high risk of late aortic root dilatation and subsequent reoperation. The goal of aortic root reinforcement by the Florida sleeve technique is to encase the aortic root to prevent any further dilatation and perioperative bleeding. abstract_id: PUBMED:35360960 Modified Bentall procedure: A 15-year single-center clinical experience. Objective: The Bentall procedure is a standard technique for complete aortic root replacement, but postoperative bleeding remains a major challenge. Many modifications have been proposed, and the "button technique" is commonly used. Drawing on a 15-year experience, this study evaluated the outcomes of this modified Bentall procedure. Methods: A retrospective study was performed with all patients who underwent the Bentall procedure with a button technique between 2005 and 2020. Commercially available composite grafts were used in 38 patients, and a self-assembled composite graft was used in 74 patients. The safety outcome was postoperative complications, and long-term outcomes included overall and reoperation-free survival. Results: Among 112 patients, the mean age was 46.3 years and 69.6% were men. Indications for the Bentall procedure were ascending aortic aneurysm and valve regurgitation (59.8%), aortic dissection (19.6%), Marfan syndrome (17%), and endocarditis (3.6%). Three patients (2.7%) died in hospital. During a median follow-up period of 42 months, 11 patients died and 7 patients required reoperation. Overall survival and reoperation-free survival were 86.1% and 83.4% at 5 years and 86.1% and 72.6% at 10 years, respectively. Conclusions: The Bentall procedure with a button technique is safe and effective for aortic root replacement. Further investigations are required to improve the feasibility and effectiveness of the procedure. abstract_id: PUBMED:34114255 Traumatic aortic root rupture leading to acute aortic regurgitation and acute type A aortic dissection. A 17-year-old male sustained blunt thoracic trauma in a dirt bike accident. He was admitted for the management of multiple fractures, was hemodynamically stable, and presented without any cardiac symptoms. The patient underwent transthoracic echocardiography and CT angiography of the thorax as a workup for possible cardiac injury, as he had a new aortic regurgitation murmur, a troponin rise, and a new RBBB. Imaging showed aortic root rupture, type A aortic dissection involving the aortic root and proximal ascending aorta, and acute severe aortic regurgitation, findings not typically seen with blunt thoracic trauma. The patient was immediately taken to the operating room, underwent surgical aortic valve and root replacement with the Bentall procedure, and had a good outcome. Answer: The Bentall procedure has traditionally been considered the gold standard for treating acute aortic dissection with aortic root involvement. However, recent studies and clinical experience suggest that alternative approaches, such as valve-sparing procedures, may offer comparable or even superior outcomes in certain cases.
A study comparing the classical Bentall procedure with the valve-sparing David procedure in patients with acute aortic dissection type A (AADA) and aortic root involvement found that the David procedure had acceptable results, with a lower 30-day mortality rate, a higher 1-year survival rate, and comparable freedom from valve-related reoperation. The study concluded that the David procedure, when feasible, or a bio-Bentall for pathological valves, seems to be the optimal technique (PUBMED:26090885). Another study highlighted that the David operation offers shorter hemostasis and operation times than the Bentall operation in the setting of AADA, which translated into reduced blood transfusion requirements despite the procedure's greater surgical complexity (PUBMED:35251818). A systematic review and meta-analysis comparing valve-sparing root replacement (VSRR) with the Bentall procedure in patients with acute type A aortic dissection (AAAD) found that VSRR was associated with a reduced risk of early and late death compared to the Bentall procedure. However, there was an increased risk of reintervention with VSRR (PUBMED:31365078). Furthermore, a retrospective analysis of patients with acute type-A aortic dissection (ATAAD) suggested that the mid-term outcomes of aortic root repair are not inferior to the Bentall procedure, indicating that root repair could be an alternative approach with the advantage of preserving the native valve (PUBMED:37951742). While the Bentall-de Bono technique is less demanding than valve-sparing aortic surgery and is often chosen by less-experienced surgeons when the aortic root is compromised by dissection, it is important to note that the latest guidelines for the management of aortic diseases recommend replacing the aortic root when it is involved in the dissection (PUBMED:33691047). In conclusion, while the Bentall procedure has been the traditional gold standard for acute aortic dissection with aortic root involvement, emerging evidence suggests that valve-sparing techniques such as the David procedure and aortic root repair may offer similar or better outcomes in certain patient populations.
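The pooled odds ratios in the meta-analysis above (PUBMED:31365078) come from a standard meta-analytic calculation: per-study log-odds ratios are combined with inverse-variance weights, and the variance of each log-OR is recovered from its 95% confidence interval. The sketch below illustrates the fixed-effect version of that arithmetic; the three study estimates are hypothetical placeholders, and the published analysis may have used a different model (e.g., random effects).

```python
# Hedged sketch: fixed-effect, inverse-variance pooling of odds ratios.
import math

def pooled_or(study_estimates):
    """study_estimates: list of (odds_ratio, ci_low, ci_high) tuples.
    The SE of each log-OR is recovered from the 95% CI width, since
    CI = exp(logOR +/- 1.96 * SE)."""
    num = den = 0.0
    for or_value, lo, hi in study_estimates:
        log_or = math.log(or_value)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1.0 / se ** 2
        num += weight * log_or
        den += weight
    pooled_log, pooled_se = num / den, math.sqrt(1.0 / den)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Three hypothetical studies of early mortality, VSRR vs. Bentall.
studies = [(0.40, 0.18, 0.90), (0.30, 0.12, 0.75), (0.35, 0.15, 0.82)]
or_hat, lo, hi = pooled_or(studies)
print(f"pooled OR {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```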
Instruction: Thin-section CT of the lung: does electrocardiographic triggering influence diagnosis? Abstracts: abstract_id: PUBMED:14512510 Thin-section CT of the lung: does electrocardiographic triggering influence diagnosis? Purpose: To determine the impact of prospective electrocardiographic (ECG) triggering on image quality and diagnostic outcome of thin-section computed tomography (CT) of the lung. Materials And Methods: Forty-five consecutive patients referred for thin-section CT of the lung were examined with both prospectively ECG-triggered and nontriggered thin-section CT on a multi-detector row helical CT scanner. Subjective image quality criteria (image noise, motion artifacts, and diagnostic assessability) were rated by three radiologists in consensus for the upper lobe, middle lobe and/or lingula, and lower lobe. Pathologic changes were assessed for the various lobes, and a diagnosis was assigned. The diagnoses were compared by two radiologists in consensus to determine the effects of CT technique on diagnostic outcome. Quantitative measurements were performed, including determination of image noise and signal-to-noise ratios in different anatomic regions. The Wilcoxon signed rank test and paired sign test (both with Bonferroni correction) were used for statistical analysis. Results: Subjective assessment showed significant differences in motion artifact reduction in the middle lobe, lingula, and left lower lobe. The diagnostic assessability of triggered CT was rated significantly higher than that of nontriggered data acquisition only for the left lower lobe. No differences in diagnostic outcome were determined between triggered and nontriggered techniques. Mean image noise in tracheal air was 68.2 ± 17 (SD) for triggered CT versus 37.4 ± 9 for nontriggered CT (P < .05). Mean signal-to-noise ratio in the upper versus lower lobes was 22.5 ± 8 versus 25.4 ± 10 for the triggered and 35.6 ± 9 versus 39.2 ± 10 for the nontriggered technique (P < .05). Conclusion: Given the lack of improvement in diagnostic accuracy and the need for additional resources, ECG-triggered thin-section CT of the lung is not recommended for routine clinical practice. abstract_id: PUBMED:34092723 Necessity of Thin Section CT in the Detection of Pulmonary Metastases: Comparison between 5 mm and 1 mm Sections of CT. Background: The aim of this study was to evaluate the difference in the ability of 1-mm and 5-mm section computed tomography (CT) to detect pulmonary metastases. Methods: We retrospectively analyzed the CT findings of 106 patients with pulmonary metastases due to malignancies treated at Toho University Omori Medical Center between 2013 and 2020. Results: Cases with only one nodule evaluated by 5-mm section CT had significantly lower discordance with 1-mm section CT than cases with two or more nodules detected by a 5-mm section (p = 0.0161). After reference to a 1-mm section, cases with only one nodule reevaluated by 5-mm section CT had significantly lower discordance than cases with two or more nodules reevaluated using 5-mm section CT. In cases with only one nodule, reevaluation using a 5-mm section was consistent with evaluation using a 1-mm section. However, this was not observed in cases with two or more nodules, with a significant difference between one nodule and two or more nodules.
Conclusions: If two or more nodules are observed on 5-mm section CT, it may be necessary to reevaluate using 1-mm section CT to determine the exact number of pulmonary metastases. abstract_id: PUBMED:32539692 Imaging features of the initial chest thin-section CT scans from 110 patients after admission with suspected or confirmed diagnosis of COVID-19. Background: In December 2019, an outbreak of a novel coronavirus pneumonia, now called COVID-19, occurred in Wuhan, Hubei Province, China. COVID-19, which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has spread quickly across China and the rest of the world. This study aims to evaluate the initial chest thin-section CT findings of COVID-19 patients after admission to our hospital. Methods: This was a retrospective study in a tertiary referral hospital in Anhui, China. From January 22, 2020 to February 16, 2020, 110 suspected or confirmed COVID-19 patients were examined using chest thin-section CT. Patients in group 1 (n = 51) presented with symptoms of COVID-19 according to the diagnostic criteria. Patients in group 2 (n = 29) were identified as having a high degree of clinical suspicion. Patients in group 3 (n = 30) presented with mild symptoms and normal chest radiographs. The characteristics, positions, and distribution of intrapulmonary lesions were analyzed. Moreover, interstitial lesions, pleural thickening and effusion, lymph node enlargement, and other CT abnormalities were reviewed. Results: CT abnormalities were found only in groups 1 and 2. The segments involved were mainly distributed in the lower lobes (58.3%) and the peripheral zone (73.8%). Peripheral lesions adjacent to the subpleural region accounted for 51.8%. Commonly observed CT patterns were ground-glass opacification (GGO) (with or without consolidation), interlobular septal thickening, and intralobular interstitial thickening. Compared with group 1, patients in group 2 presented with smaller lesions, and all lesions were distributed in fewer lung segments. Localized pleural thickening was observed in 51.0% of group 1 patients and 48.2% of group 2 patients. The prevalence of lymph node enlargement in groups 1 and 2 combined was extremely low (1 of 80 patients), and no significant pleural effusion or pneumothorax was observed (0 of 80 patients). Conclusion: The common features of chest thin-section CT in COVID-19 are multiple areas of GGO, sometimes accompanied by consolidation. The lesions are mainly distributed in the lower lobes and peripheral zone, and a large proportion of peripheral lesions are accompanied by localized pleural thickening adjacent to the subpleural region. abstract_id: PUBMED:11511889 Thin-section CT of the lung: influence of 0.5-s gantry rotation and ECG triggering on image quality. The aim of this study was to determine if ECG triggering and a shorter acquisition time of 0.5-s rotation decrease cardiac motion artifacts on thin-section CT of the lung. In 25 patients referred for thin-section thoracic CT, 1-mm thin-section slices were acquired with a scanning time of 0.5 s with ECG gating during the diastolic phase of the heart, 0.5 s without gating, and 1 s, at five identical anatomical levels from the aortic arch to the lung base. At each anatomical level and for each lung, cardiac motion artifacts were graded independently on a four-point scale by three readers. Patients were divided into two groups according to their heart rate. A four-way analysis of variance was used to assess differences between the three modalities.
Mean cardiac motion artifact scores were 1.23 ± 0.02, 1.47 ± 0.02, and 1.79 ± 0.02 at 0.5 s with ECG gating, 0.5 s without ECG gating, and 1 s, respectively (F = 139, p < 0.0001). At the four anatomical levels below the aortic arch, the left lung scores were greater than the right lung scores for all three modalities. For the 0.5-s ECG-gated modality, no difference in scores was found between patients grouped according to heart rate. A 0.5-s gantry rotation, with or without ECG gating, reduces cardiac motion artifacts on pulmonary thin-section CT images and is mainly beneficial for the lower part of the left lung. abstract_id: PUBMED:19430756 Reticular pattern in thin-section CT: from morphology to differential diagnosis. Major constituents of a reticular pattern in thin-section CT are thickened interlobular and intralobular septa as well as honeycombing. When thickening of the interlobular septa or honeycombing is visible as the predominant feature, these patterns have a limited differential diagnosis. In this context, a detailed analysis of morphologic characteristics (e.g., smooth or nodular interlobular septal thickening) and of the pattern localisation (e.g., peripheral, basal and subpleural honeycombing) is required. Thickened intralobular septa, parenchymal bands, subpleural lines and the "interface sign" are all rather non-specific findings. However, when interpreted in the context of other CT findings, they may frequently support a differential diagnosis. abstract_id: PUBMED:38358328 Thin-Section CT in the Categorization and Management of Pulmonary Fibrosis including Recently Defined Progressive Pulmonary Fibrosis. While idiopathic pulmonary fibrosis (IPF) is the most common type of fibrotic lung disease, there are numerous other causes of pulmonary fibrosis that are often characterized by lung injury and inflammation. Although often gradually progressive and responsive to immune modulation, some cases may progress rapidly with reduced survival rates (similar to IPF) and with imaging features that overlap with IPF, including usual interstitial pneumonia (UIP)-pattern disease characterized by peripheral and basilar predominant reticulation, honeycombing, and traction bronchiectasis or bronchiolectasis. Recently, the term progressive pulmonary fibrosis has been used to describe non-IPF lung disease that over the course of a year demonstrates clinical, physiologic, and/or radiologic progression and may be treated with antifibrotic therapy. As such, appropriate categorization of the patient with fibrosis has implications for therapy and prognosis and may be facilitated by considering the following categories: (a) radiologic UIP pattern and IPF diagnosis, (b) radiologic UIP pattern and non-IPF diagnosis, and (c) radiologic non-UIP pattern and non-IPF diagnosis. By noting increasing fibrosis, the radiologist contributes to the selection of patients in whom therapy with antifibrotics can improve survival. As the radiologist may be the first to identify developing fibrosis and overall progression, this article reviews imaging features of pulmonary fibrosis and their significance in non-IPF-pattern fibrosis and progressive pulmonary fibrosis, along with implications for therapy. Keywords: Idiopathic Pulmonary Fibrosis, Progressive Pulmonary Fibrosis, Thin-Section CT, Usual Interstitial Pneumonia.
Despite the tremendous contributions of coronary CT angiography to coronary artery disease, the radiation dose associated with coronary CT angiography has raised serious concerns in the literature, as the risk of developing radiation-induced malignancy is not negligible. Various dose-saving strategies have been implemented, with some of the strategies resulting in significant dose reduction. Of these strategies, prospective ECG-triggering is one of the most effective techniques, with a resultant effective radiation dose similar to or even lower than that of invasive coronary angiography. Prospective ECG-triggered coronary CT angiography has been reported to have high diagnostic accuracy in the diagnosis of coronary artery disease with image quality comparable to that of retrospective ECG-gating, but with significantly reduced radiation dose. Successful performance of prospective ECG-triggering is determined by strict exclusion criteria and careful patient preparation. The aim of this article is to provide an overview of the diagnostic applications of coronary CT angiography with prospective ECG-triggering, with a focus on radiation dose reduction. Radiation dose measurements are discussed with the aim of allowing accurate dose estimation. The diagnostic value of prospective ECG-triggered coronary CT angiography in patients with different heart rates is discussed. Finally, current status and future directions are highlighted. abstract_id: PUBMED:33880620 Differentiating focal interstitial fibrosis from adenocarcinoma in persistent pulmonary subsolid nodules (> 5 mm and < 20 mm): the role of coronal thin-section CT images. Objectives: To investigate thin-section computed tomography (CT) features of pulmonary subsolid nodules (SSNs) with sizes between 5 and 20 mm to determine predictive factors for differentiating focal interstitial fibrosis (FIF) from adenocarcinoma. Methods: From January 2017 to December 2018, 169 patients who had persistent SSNs 5-20 mm in size and underwent preoperative nodule localization were enrolled. Patient characteristics and thin-section CT features of the SSNs were reviewed and compared between the FIF and adenocarcinoma groups. Univariable and multivariable analyses were used to identify predictive factors of malignancy. Receiver operating characteristic (ROC) curve analysis was used to quantify the performance of these factors. Results: Among the 169 enrolled SSNs, 103 nodules (60.9%) presented as pure ground-glass opacities (GGOs), and 40 (23.7%) were FIFs. Between the FIF and adenocarcinoma groups, there were significant differences (p < 0.05) in nodule border, shape, thickness, and coronal/axial (C/A) ratio. Multivariable analysis demonstrated that a well-defined border, a nodule thickness >4.2 mm, and a C/A ratio >0.62 were significant independent predictors of malignancy. The performance of a model that incorporated these three predictors in discriminating FIF from adenocarcinoma achieved a high area under the ROC curve (AUC, 0.979) and specificity (97.5%). Conclusions: For evaluating persistent SSNs 5-20 mm in size, the combination of a well-defined border, a nodule thickness > 4.2 mm, and a C/A ratio > 0.62 is strongly correlated with malignancy. High accuracy and specificity can be achieved by using this predictive model. Key Points: • Thin-section coronal images play an important role in differentiating FIF from adenocarcinoma. • The combination of a well-defined border, nodule thickness >4.2 mm, and C/A ratio >0.62 is associated with malignancy.
• This predictive model may be helpful for managing persistent SSNs between 5 and 20 mm in size. abstract_id: PUBMED:15666992 Intrapulmonary lymph nodes: thin-section CT findings, pathological findings, and CT differential diagnosis from pulmonary metastatic nodules. We compared the thin-section CT findings of 11 intrapulmonary lymph nodes with pathological findings and evaluated the possibility of CT scan differential diagnosis from pulmonary metastatic nodules. First, we retrospectively reviewed CT scan and pathological findings of intrapulmonary lymph nodes. The median size of these nodules was 6.2 mm. The nodules appeared round (n=3) or angular (n=8) in shape with a sharp border, and they were found below the level of the carina. The median distance from the nearest pleural surface was 4.6 mm, and 3 of the 11 nodules were attached to the pleura. On thin-section CT scan, linear densities extending from the intrapulmonary lymph nodes were frequently visualized, and were pathologically proven to be ectatic lymphoid channels. We then compared the thin-section CT findings of 8 metastatic nodules less than 1 cm in diameter with those of the 11 intrapulmonary lymph nodes. The median size of these nodules was 6.8 mm, and the median distance from the nearest pleural surface was 16 mm. All nodules appeared round in shape. None of the nodules had linear densities extending from the nodules. The linear densities on thin-section CT scan may be the most useful characteristic of intrapulmonary lymph nodes when differential diagnosis from metastatic nodules is necessary. abstract_id: PUBMED:17369020 Recent progress in computer-aided diagnosis of lung nodules on thin-section CT. Computer-aided diagnosis (CAD) provides a computer output as a "second opinion" in order to assist radiologists in the diagnosis of various diseases on medical images. Currently, a significant research effort is being devoted to the detection and characterization of lung nodules in thin-section computed tomography (CT) images, which represents one of the newest directions of CAD development in thoracic imaging. We describe in this article the current status of the development and evaluation of CAD schemes for the detection and characterization of lung nodules in thin-section CT. We also review a number of observer performance studies in which it was attempted to assess the potential clinical usefulness of CAD schemes for nodule detection and characterization in thin-section CT. Whereas current CAD schemes for nodule characterization have achieved high performance levels and would be able to improve radiologists' performance in the characterization of nodules in thin-section CT, current schemes for nodule detection appear to report many false positives; therefore, significant efforts are needed in order to further improve the performance levels of current CAD schemes for nodule detection in thin-section CT. Answer: The impact of prospective electrocardiographic (ECG) triggering on the diagnosis of thin-section computed tomography (CT) of the lung was investigated in a study involving 45 consecutive patients. The study aimed to determine whether ECG-triggered thin-section CT of the lung would improve image quality and diagnostic outcomes compared to nontriggered thin-section CT. The results showed that while ECG triggering did reduce motion artifacts in certain lobes of the lung, it did not lead to differences in diagnostic outcomes.
Additionally, ECG-triggered CT was associated with higher image noise and lower signal-to-noise ratios compared to nontriggered CT. Based on these findings, the study concluded that ECG-triggered thin-section CT of the lung does not improve diagnostic accuracy and requires additional resources, and thus is not recommended for routine clinical practice (PUBMED:14512510).
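As a concrete illustration of the kind of comparison reported in PUBMED:11511889, the short sketch below runs a one-way analysis of variance across the three acquisition modes. It is a minimal sketch only: the per-patient artifact scores are invented placeholders (the study reports only group means), and the choice of Python with SciPy is an assumption, not something the papers specify.

```python
# Minimal sketch: one-way ANOVA over cardiac motion artifact scores
# (1 = no artifact, 4 = severe) for three CT acquisition modes.
# The scores below are hypothetical placeholders, not study data.
from scipy.stats import f_oneway

ecg_gated_05s = [1.2, 1.3, 1.1, 1.4, 1.2]  # 0.5 s rotation with ECG gating
ungated_05s = [1.5, 1.4, 1.6, 1.5, 1.4]    # 0.5 s rotation, no gating
ungated_1s = [1.8, 1.7, 1.9, 1.8, 1.8]     # 1 s rotation, no gating

f_stat, p_value = f_oneway(ecg_gated_05s, ungated_05s, ungated_1s)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")  # lower scores favour gating
```

The study itself used a four-way ANOVA that also accounted for reader, anatomical level, and lung side; the one-way version here captures only the modality effect.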
Instruction: Recurrent smear abnormalities where repeat loop treatment is not possible: is hysterectomy the answer? Abstracts: abstract_id: PUBMED:15943984 Recurrent smear abnormalities where repeat loop treatment is not possible: is hysterectomy the answer? Objective: The objective of this study was to determine the outcome of women who underwent hysterectomy for recurrent cytological abnormalities where repeat loop treatment was considered not to be technically possible because of insufficient remaining cervical tissue. Methods: Women undergoing a hysterectomy for the above indication at the Northern Gynaecological Cancer Centre over a period of 10 years (1992-2001) were identified from a prospectively collected database. Case notes were then reviewed and women undergoing hysterectomy for other indications were excluded. Relevant demographic and clinical data were then extracted. Results: 33 patients meeting the above criteria were identified. The overall hysterectomy rate for this indication was 0.73%. 20 out of the 33 women had significant pathology on the hysterectomy specimen. 95% of these had high-grade disease, with one having a Stage 1A1 squamous carcinoma. None of the patients required more radical treatment than a simple hysterectomy. There were no major complications following the hysterectomy. Positive endocervical margins on the previous loop specimen (P = 0.05) were an important correlating factor predicting the presence of CIN on the hysterectomy specimen. One out of the thirty hysterectomies (3.3%) performed using the vaginal route had incomplete excision, compared to one of three (33%) using the abdominal route. Hysterectomy was successful in treating 85.2% of the women; only 4 women subsequently developed vaginal intraepithelial neoplasia. Conclusion: Simple hysterectomy appears to be a suitable diagnostic and treatment option for women with recurrent high-grade cytological abnormalities where further loop treatment is technically not possible. Incomplete excision at the endocervical margin on the previous loop specimen was the main factor associated with the presence of cervical intraepithelial neoplasia at hysterectomy. abstract_id: PUBMED:20543689 The outcomes of repeat surgery for recurrent symptomatic endometriosis. Purpose Of Review: To evaluate the efficacy of second-line surgery in the management of recurrent endometriosis. Recent Findings: The long-term probability of pain recurrence after repeat conservative surgery for recurrent endometriosis varies between 20 and 40%. The addition of presacral neurectomy to the treatment of endometriosis might be effective in reducing midline pain; however, no studies have evaluated this procedure among patients with recurrent disease. The medium-term outcome of hysterectomy for endometriosis-associated pain is quite satisfactory; nevertheless, the probability of pain persistence after hysterectomy is 15% and the risk of pain worsening 3-5%, with a six times higher risk of further surgery in patients with ovarian preservation as compared to ovarian removal. The conception rate among women undergoing repetitive surgery for recurrent endometriosis associated with infertility is 26%, whereas the overall crude pregnancy rate after a primary procedure is 41%. Summary: Repeat conservative surgery for pelvic pain associated with recurrent endometriosis has the same efficacy and limitations as primary surgery. Conversely, after repeat conservative surgery for infertility, the pregnancy rate is almost half the rate obtained after primary surgery.
More data are needed to define the best therapeutic option in women with recurrent endometriosis, in terms of pain relief, pregnancy rate and patient compliance. abstract_id: PUBMED:16205196 Recurrent colorectal carcinoma detected by routine cervicovaginal Papanicolaou smear testing. Background: We present a case of recurrent colon cancer detected by routine, annual Papanicolaou screening. Case: A 59-year-old African American woman who had been treated for T2N0M0 (stage II, Dukes A) colon cancer 2 years before presentation had a Pap smear showing a high-grade squamous intraepithelial lesion with a normal cervical biopsy result. Because of this discrepancy, a loop electrosurgical excision procedure and endocervical curettage were performed and showed atypical glandular cells suspicious for adenocarcinoma. Subsequent colonoscopy showed recurrent adenocarcinoma of the colon. The patient underwent an en bloc total abdominal hysterectomy and anterior-perineal resection showing invasion of recurrent colon cancer into the uterus and cervix. Conclusion: In patients with a history of extrauterine adenocarcinoma, abnormal Pap screening may indicate recurrent or metastatic carcinoma. abstract_id: PUBMED:8164554 The management of women with initial minor Pap smear abnormalities. Objective: To describe the management and follow-up of women with initial minor Papanicolaou (Pap) smear abnormalities by general practitioners (GPs) in metropolitan Sydney in 1990. Subjects And Setting: One hundred women with cervical intraepithelial neoplasia grade 1 (CIN 1) and 121 women with mild squamous atypia (MSA) on Pap smears taken by GPs in 1990, who had not had Pap smear abnormalities in the previous two years, were sampled from the records of four Sydney pathology laboratories. Design: A descriptive study. Information about the management of women after their Pap smear was obtained from GPs by telephone questionnaire. Results: Of women with MSA, 19% were initially investigated by colposcopy, and 8% went on to have treatment. Of 82 women with MSA who were not initially investigated, 80% had follow-up Pap smears within 12 months. Of women with CIN 1, 84 underwent colposcopy with or without biopsy; 27% of these women had CIN 2/3 and 31% had CIN 1 confirmed by investigation. Overall, 51% of women with CIN 1 on their initial Pap smears were treated by excisional or ablative means, including 78% of women with confirmed CIN 2/3, and 69% of women with confirmed CIN 1. Two and a half years after the original Pap smear, only 46% of women with initial MSA and 51% of women with initial CIN 1 were known by their GP to be having follow-up. Conclusions: Most women with MSA were managed by initial observation, and those with CIN 1 were managed by initial investigation. However, the range of management practices described suggests a lack of consensus among practitioners about the most appropriate management for women with minor cervical abnormalities. A large randomised controlled trial would help elucidate preferred management guidelines. The difficulty individual GPs experience in following up women after abnormal Pap smears supports the establishment of centralised State cytology registers. abstract_id: PUBMED:19550214 Cytological follow-up of women older than 50 years with high-grade cervical smear treated by large loop excision. Objective: To evaluate cytological surveillance for women older than 50 years, to detect recurrent or residual disease after treatment of cervical intraepithelial neoplasia by loop excision.
Materials And Methods: Women undergoing a large loop excision for high-grade squamous intraepithelial lesion or glandular cytological abnormalities during a period of 4 years (2000-2003) were identified from the colposcopy database. Women younger than 50 years or with a history of previous loop excision were excluded. Clinical data, histology, and follow-up cytology results for up to 2 years after treatment were collected. Results: Eighty-nine patients were identified. Age of the women ranged from 51 to 66 years, with a median of 51.5 years. Thirty-two (36%) had severe dyskaryosis, 53 (60%) had moderate dyskaryosis, and 4 (4%) had glandular abnormalities on cervical cytology before the loop biopsy. Cervical intraepithelial neoplasia (CIN) 2,3 and glandular abnormalities, CIN 1, and no abnormalities were found in 50 (56%), 18 (20%), and 19 (22%) loop specimens, respectively. Invasive disease was found in 2 (2%) cases. They were excluded from further analysis. The lesion was completely excised in 58 (65%) and incompletely excised in 23 (26%) patients. It was not possible to comment on the margin status in 8 (9%) cases. These were excluded from further analysis. Of the 23 women who had margins involved, 8 (35%) had ectocervical, 12 (52%) had endocervical, and 3 (13%) had both margins involved. All women had follow-up cervical smears at the cytology clinic. At 6-month follow-up, 3 patients had persistent CIN and 4 had borderline changes on cervical smears. At 2-year follow-up, 3 patients had high-grade squamous intraepithelial lesion abnormalities, 2 of whom had clear margins at their loop biopsy earlier. Twenty percent of the women with positive endocervical margins on loop excision needed further treatment for residual or persistent disease on follow-up. Overall, 4 (5%) of the 79 patients who had a loop biopsy went on to have cytological abnormalities suggestive of persistent/residual disease needing further treatment. Conclusion: Cytological surveillance for post-loop biopsy follow-up seems to be a good option for detecting residual disease in this high-risk group of patients. abstract_id: PUBMED:2197326 The management and treatment of an abnormal Pap smear. Table 2 summarizes the management of the abnormal Pap smear. Management of dysplasia in this institution is aggressive: destructive therapy of mild dysplasia is advised, as opposed to watching the patient and treating only if the disease persists. The rationale for this is the 33% to 45% failure rate for follow-up appointments in the primarily inner-city population served. The key to follow-up is to repeat cervical cytology in all patients treated, even those treated with hysterectomy, every three months until two consecutive normal smears are obtained. At that time, surveillance and intervals may be modified, but screening should continue at least annually. The mortality rate of carcinoma of the cervix has dropped precipitously during the last 40 years, in part, from simple screening of the cervix with the Papanicolaou smear. The effort to treat premalignant changes has been rewarded. The use of the colposcope and destructive forms of therapy has allowed successful treatment of patients with less morbidity and mortality than the immediate reliance on cervical conization. Remember, conization is still indicated and prudent in selected patients. Following these guidelines may contribute to the downward trend.
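The association between margin status and later disease in these abstracts (for example, the P = 0.05 link between positive endocervical margins and CIN at hysterectomy in PUBMED:15943984) is the kind of 2x2 comparison usually tested with Fisher's exact test. The sketch below shows that computation on an invented contingency table; the counts are hypothetical placeholders, and the use of Python's SciPy is an assumption, not something the abstracts specify.

```python
# Minimal sketch: Fisher's exact test on a hypothetical 2x2 table of
# prior loop-margin status versus disease found at hysterectomy.
from scipy.stats import fisher_exact

#                  CIN present, CIN absent
table = [[14, 4],  # positive endocervical margins (hypothetical counts)
         [6, 9]]   # clear margins (hypothetical counts)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```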
abstract_id: PUBMED:32711428 Health Literacy, Knowledge on Cervical Cancer and Pap Smear and Its Influence on Pre-Marital Malay Muslim Women Attitude towards Pap Smear. Background: Cervical cancer is preventable. In Malaysia, women are found to have good awareness of the disease, and yet the Pap smear uptake is still poor. Measuring health literacy level could explain this discrepancy. This study aims to determine the relationship of health literacy and level of knowledge of cervical cancer and Pap smear with attitude towards Pap smear among women attending a pre-marital course. Methods: A cross-sectional study was performed in three randomly selected centres that organised pre-marital courses. All Malay Muslim women participants aged 18 to 40 years were recruited, while those who were non-Malaysian, illiterate, or had undergone hysterectomy were excluded. The validated self-administered questionnaires used were the European Health Literacy Questionnaire (HLS-EU-Q16 Malay) and the Knowledge and Attitude towards Cervical Cancer and Pap Smear Questionnaire. The mean percentage score (mean ± SD) was calculated, with higher scores indicating better outcomes. Multiple linear regression was used to measure the relationship of independent variables with attitude towards Pap smear. Results: A total of 417 participants were recruited, with a mean age of 24.9 ± 3.56 years. Prevalence of awareness of cervical cancer was 91.6% (n=382, 95% CI: 89.0%, 94.2%) and the mean percentage score was 74.7%±7.6. Prevalence of awareness of Pap smear was 59.0% (n=246, 95% CI: 54.2%, 63.8%) and the mean percentage score was 80.2% ± 6.5. The health literacy mean score was 13.3±3.6, with a minimum score of 0 and a maximum score of 16. The mean percentage score of attitudes towards Pap smear was 64.8%±9.3. Multiple linear regression analysis demonstrated significant relationships of health literacy (p=0.047) and knowledge of Pap smear (p<0.001) with attitude towards Pap smear. Conclusion: Higher health literacy with high knowledge of Pap smear improves the attitude towards Pap smear. The pre-marital course is an opportunistic platform to disseminate information to improve health literacy and knowledge of cervical cancer and Pap smear screening. abstract_id: PUBMED:29690868 Oncological and reproductive outcomes of adenocarcinoma in situ of the cervix managed with the loop electrosurgical excision procedure. Background: The standard treatment for cervical adenocarcinoma in situ (AIS) is hysterectomy, which is a more aggressive treatment than that used for squamous intraepithelial lesions. Several previous studies have primarily demonstrated that the loop electrosurgical excision procedure (LEEP) is as safe and effective as cold knife cone (CKC) biopsy when AIS is unexpectedly found in a loop excision. This study evaluated the safety of LEEP as the initial treatment for patients with AIS who were strictly selected and evaluated before and after loop resection. Methods: The oncological and reproductive outcomes of a series of AIS patients who underwent LEEP as the initial treatment between February 2006 and December 2016 were retrospectively evaluated. Results: A total of 44 women were eligible for analysis. The mean age at diagnosis was 36.1 years, and 14 patients were nulliparous. Multiple lesions were identified in 4 (9.1%) patients. Either hysterectomy (6 patients) or repeat cone biopsies (3 patients) were performed in 8 of the 10 patients who presented positive or not evaluable surgical resection margins (SMs) on the initial LEEP specimens.
Residual disease was detected in two patients. All patients were closely followed for a mean of 36.9 months via human papillomavirus testing, Pap smears, colposcopy, and endocervical curettage when necessary. No recurrences were detected. Of the 16 patients who desired to become pregnant, 8 (50%) successfully conceived, and the full-term live birth rate was 83.3% among this subgroup. Conclusions: LEEP with negative SMs was a safe and feasible fertility-sparing surgical procedure for patients with AIS, and the obstetric outcome was satisfactory. However, long-term follow-up is mandatory. abstract_id: PUBMED:35671143 Recurrent postpartum hemorrhage at subsequent pregnancy in patients with prior uterine artery embolization: angiographic findings and outcomes of repeat embolization. Objective: To evaluate angiographic findings and outcomes of uterine artery embolization (UAE) for recurrent postpartum hemorrhage (PPH) in a subsequent pregnancy in patients with a history of prior UAE. Methods: Between March 2004 and February 2021, UAE was performed for PPH with gelatin sponge slurry in 753 patients. Among these, 13 underwent repeat UAE for recurrent PPH after subsequent delivery. The causes of PPH, angiographic findings, hemostasis, and adverse events were evaluated. Results: The causes of recurrent PPH included retained placental tissue (n = 9) and uterine atony (n = 4). On angiography, unilateral or bilateral uterine arteries were obliterated due to prior UAE in 10 patients (76.9%). The uterine collateral vessels were embolized (anterior division of the internal iliac artery [n = 10], round ligament [n = 5], and ovarian [n = 4] artery). In the remaining three patients with recanalized or patent (not embolized at prior UAE) uterine arteries, both uterine arteries were embolized. Immediate hemostasis was achieved in nine patients (69.2%). The remaining four patients (30.8%) with obliterated uterine arteries required hysterectomy. There were three mild adverse events (pelvic pain [n = 2] and fever [n = 1]). Conclusion: UAE with gelatin sponge slurry frequently causes permanent uterine artery obliteration. In cases of recurrent PPH occurring in subsequent pregnancy, repeat UAE may be less likely to achieve hemostasis (69.2%). Advances In Knowledge: 1. UAE with gelatin sponge slurry frequently caused permanent uterine artery obliteration. 2. In recurrent PPH occurring in subsequent pregnancy, repeat UAE may be less likely to achieve hemostasis than initial UAE.
After their second procedure, 60.7% of women had a normal first cervical cytologic sample. The mean duration of follow-up was 68 months (0-163 months). Additional cervical excisional procedures were performed in 33.7% of women. Overall, 1.2% of women developed cervical cancer during follow-up. Moreover, 19.0% of women eventually underwent hysterectomy. Conclusion: One third of the women who undergo two large loop excision of the transformation zone procedures require an additional excisional procedure or hysterectomy. Almost one fifth of these women eventually undergo hysterectomy. Answer: Based on the provided abstracts, hysterectomy appears to be a suitable option for women with recurrent high-grade cytological abnormalities where further loop treatment is technically not possible due to insufficient remaining cervical tissue (PUBMED:15943984). In the study mentioned in this abstract, simple hysterectomy was successful in treating 85.2% of the women, and none of the patients required more radical treatment than a simple hysterectomy. The main factor associated with the presence of cervical intraepithelial neoplasia at hysterectomy was incomplete excision at the endocervical margin on the previous loop specimen. Additionally, the abstract from PUBMED:35115479 indicates that one-third of women who undergo two large loop excision of the transformation zone (LLETZ) procedures require an additional excisional procedure or hysterectomy, and almost one-fifth of these women eventually undergo hysterectomy. This suggests that for some women with recurrent abnormalities, hysterectomy may become the eventual treatment choice. However, it is important to consider the individual circumstances of each patient, including their desire for fertility preservation and the severity of the recurrent abnormalities. For instance, the abstract from PUBMED:29690868 discusses the safety and feasibility of LEEP as a fertility-sparing surgical procedure for patients with adenocarcinoma in situ (AIS) of the cervix, with satisfactory obstetric outcomes. This indicates that less radical options than hysterectomy may be appropriate for certain patients. In summary, while hysterectomy can be an effective treatment option for women with recurrent smear abnormalities where repeat loop treatment is not possible, the decision should be made on a case-by-case basis, taking into account the patient's clinical situation, preferences, and potential for future fertility.
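The answer above leans on single proportions such as the 85.2% hysterectomy success rate (PUBMED:15943984), which are more informative with a confidence interval. The sketch below computes a Wilson interval; the count of 23 successes out of 27 evaluable women is a hypothetical reconstruction of the reported percentage (the abstract gives only 85.2%), and the use of Python's statsmodels is an assumption.

```python
# Minimal sketch: Wilson 95% confidence interval for a success proportion.
from statsmodels.stats.proportion import proportion_confint

successes, total = 23, 27  # hypothetical counts matching the quoted 85.2%
low, high = proportion_confint(successes, total, alpha=0.05, method="wilson")
print(f"success = {successes / total:.1%}, 95% CI {low:.1%}-{high:.1%}")
```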
Instruction: Angiogenesis factor in endometrial carcinoma: a new prognostic indicator? Abstracts: abstract_id: PUBMED:8678154 Angiogenesis factor in endometrial carcinoma: a new prognostic indicator? Objective: Tumor angiogenesis is believed to be a prognostic indicator associated with tumor growth and metastasis. Studies of angiogenesis in breast, prostate, and lung cancer, as well as melanoma, have shown that neovascularization correlates with the likelihood of metastasis and recurrences. The purpose of this study was to evaluate microvessel density as a prognostic factor in endometrial cancer. Methods: Between 1980 and 1991, the tumor registry identified 25 patients with a diagnosis of recurrent endometrial cancer. These patients were matched with 25 patients with nonrecurrent disease for age, stage, grade, and treatment. The histologic slides of the 50 patients were reviewed. The paraffin blocks were obtained, and the area of the deepest myometrial invasion was selected for staining. The microvessels within the invasive cancer were highlighted by means of immunocytochemical staining to detect factor VIII-related antigen. Microvessels were counted by two investigators who were blinded to the patients' clinical status. Survival data were analyzed with Kaplan-Meier survival curves. Results: Microvessel count was related to likelihood of recurrence, although this trend did not reach statistical significance. Patients with tumors of low capillary density had a mean survival time of 123 months. Patients with tumors of high capillary density had a mean survival time of 75 months (p = 0.02). Among patients with recurrent disease, those with a low capillary count survived a mean of 64 months. Patients with recurrent disease with tumors of high capillary density survived a mean of 45 months (p = 0.002). Conclusion: Angiogenesis factor correlates with survival in endometrial carcinoma. abstract_id: PUBMED:16417101 Angiogenesis-prognostic factor in patients with endometrial cancer Endometrial carcinoma is one of the most commonly found cancers. In numerous kinds of cancers, tumor microvessel density correlates with clinical stage of disease and is considered an independent prognostic factor. Evaluation of angiogenesis intensity in endometrial cancer also seems to be an independent prognostic factor and statistically correlates with FIGO stage of disease, histological type and grade of tumor, depth of myometrial invasion and metastasis. Activity of angiogenic factors in human tissues and serum provides additional information concerning the growth and progression of endometrial cancer. abstract_id: PUBMED:10021305 Angiogenesis in malignancies of the female genital tract. Objective: The purpose of this work was to review current knowledge pertaining to angiogenesis in malignancies of the female genital tract. Methods: We identified studies published in the English language regarding angiogenesis in gynecologic malignancies. The studies were obtained from a MEDLINE search from 1966 through June 1998; additional sources were identified through cross-referencing. Results: A growing body of evidence confirms the ability of vulvar and cervical squamous cell carcinomas and endometrial and ovarian adenocarcinoma to induce angiogenesis. In vulvar intraepithelial neoplasia, a correlation between vascular endothelial growth factor (VEGF) expression, microvessel density (MVD), and progression of dysplasia has been demonstrated. In invasive vulvar carcinoma, high VEGF expression and MVD portend poor prognosis.
Currently a debate exists regarding the ability of cervical squamous intraepithelial neoplasia to induce angiogenesis. Most studies, however, indicate angiogenesis to be of prognostic value in patients with invasive squamous cell carcinoma. The ability of complex endometrial hyperplasia to induce angiogenesis has been demonstrated. A direct correlation between angiogenesis, higher grade and depth of invasion in Stage I adenocarcinoma, and prognostic value in Stage I and II and recurrent disease has been noted. In ovarian epithelial adenocarcinoma, higher microvessel counts in the primary ovarian tumor or omental metastases may serve as a prognostic indicator for survival. Conclusions: Similar to other malignant diseases, angiogenesis appears to play an important role in disease progression and survival in patients with gynecologic malignancies. Preliminary data indicate angiogenesis may serve as a prognostic indicator in vulvar and cervical squamous cell carcinomas and endometrial and ovarian adenocarcinomas. These findings may lead to future application of therapeutic trials with antiangiogenic factors. abstract_id: PUBMED:10566619 Intratumoral angiogenesis: a new prognostic indicator for stage I endometrial adenocarcinomas? The prognostic significance of three recently emerged parameters, namely intratumoral angiogenesis and the antiapoptotic proteins bcl-2 and mutant p53, was investigated in a series of 124 patients with endometrial adenocarcinomas of the endometrioid cell type. All patients were treated with total abdominal hysterectomy and bilateral oophorectomy, without node dissection. When deep myometrial invasion or advanced stage of disease was confirmed, adjuvant radiotherapy was given. Intratumoral angiogenesis was assessed in tissue samples, after immunohistochemical staining, with the anti-CD31 monoclonal antibody. The mean microvessel density (MVD) was 23.2 +/- 14.1 (range 4-60; 95% CI 20-25.8). Microvessel density was high (> 30) in 30% of endometrial adenocarcinomas, medium (15-30) in 33% of the tumors, and low (< 15) in the remaining cases (37%). A strong cytoplasmic and/or perinuclear expression of bcl-2 in more than 10% of the neoplastic cells was considered as being positive, and noted in 35.5% of the endometrial neoplasms; it was more frequent in the less vascularized carcinomas (P = 0.03). Nuclear p53 accumulation in an equal percentage of neoplastic cells (> 10%) was less common (7.2%). In univariate analysis, early stage of disease, absence of lymphatic-vascular space invasion (LVI), and low intratumoral MVD were the parameters associated with an improved survival (P = 0.0001, P = 0.001, and P = 0.009, respectively). In multivariate analysis, however, the only independent variable noted was stage of disease (P < 0.0001). Within stage I endometrial adenocarcinomas, only intratumoral angiogenesis was associated with prognosis (univariate analysis): high MVD cases had a significantly worse prognosis compared to medium MVD (P = 0.02). Low MVD adenocarcinomas, on the other hand, were associated with an intermediate prognosis, indicating that other factors, such as hypoxia and related mechanisms, may also be important. It is suggested that intratumoral angiogenesis may prove useful in selecting a subgroup of cancer patients, among others with stage I endometrial disease, that would benefit from additional treatment.
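To make the univariate survival comparison in PUBMED:10566619 concrete, the sketch below bins microvessel density with the abstract's cut-offs (< 15 low, 15-30 medium, > 30 high) and runs a log-rank test between two of the groups. It is a minimal illustration only: the follow-up times and event flags are invented placeholders, and the use of Python's lifelines package is an assumption, not something the study reports.

```python
# Minimal sketch: MVD binning per PUBMED:10566619 plus a log-rank test.
from lifelines.statistics import logrank_test

def mvd_category(count: float) -> str:
    """Bin a microvessel count using the study's cut-offs."""
    if count > 30:
        return "high"
    if count >= 15:
        return "medium"
    return "low"

# Hypothetical follow-up (months) and event flags (1 = progression/death).
high_t, high_e = [20, 35, 48, 60, 72], [1, 1, 1, 0, 1]
medium_t, medium_e = [55, 80, 96, 110, 120], [0, 1, 0, 0, 0]

result = logrank_test(high_t, medium_t,
                      event_observed_A=high_e, event_observed_B=medium_e)
print(f"log-rank p = {result.p_value:.3f}")  # cf. the reported P = 0.02
```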
abstract_id: PUBMED:14674140 Neo-angiogenesis in immunohistochemical techniques as a prognostic factor in endometrial carcinoma Objective: Angiogenesis in malignant tumors is a prognostic factor associated with tumor growth and metastasis. The aims of the research were: determination of the angiodensity rate in two immunohistochemical techniques, estimation of the value of the examined parameter at different stages of clinical progression and histological differentiation of endometrial carcinoma, and analysis of the obtained values as prognostic factors in the disease process. Materials And Methods: The examination covered 86 women treated surgically for endometrial carcinoma. The preliminary histological evaluation was followed by immunohistochemical methods. The microvessels within the invasive cancer were highlighted by means of immunocytochemical staining to detect CD-31 and CD-105 antigen. The average value of angiodensity was estimated by means of a computer image analyser. Results: The group of patients at the preinvasive stage of the disease manifested statistically significantly lower values of angiodensity. It was detected that the histological differentiation of carcinoma does not influence the intensification of angiogenesis. Higher values of this parameter have an adverse influence on the survival rate. Conclusion: The evaluation of the angiodensity coefficient can be a helpful prognostic parameter in endometrial carcinoma. abstract_id: PUBMED:24627673 The investigation of tumoral angiogenesis with HIF-1 alpha and microvessel density in women with endometrium cancer. Objective: Hypoxia inducible factor 1 alpha (HIF-1α) is a nuclear protein upregulated in response to reduced cellular oxygen concentration which therefore acts as a marker for hypoxia. The aim of this study was to determine tumoral angiogenesis with immunohistochemical markers in endometrium cancer and its relation with stage, grade, survival rates and other prognostic factors. Material And Methods: Using the database in our Gynecologic Oncology clinic, we selected 94 patients who were diagnosed with endometrial cancer and underwent primary surgery at our institution between 2001 and 2010. Tissue microarrays believed to demonstrate the optimum part of the tumor were reprepared from the paraffin blocks. Angiogenesis and microvessel density (MVD) were investigated with the aid of HIF-1α and CD34 antibodies. Results: High expression of HIF-1α was significantly more frequent in advanced grade endometrial cancers (p=0.044). HIF-1α expression was highly correlated with CD34 expression in the tumor cells (p<0.001). However, no relation was found between HIF-1α and stage, overall survival rates, or histological type. When we compared HIF-1α positive and negative cases with respect to cervical, adnexal, lymphovascular and myometrial invasion, there was no difference between these groups. MVD was evaluated with CD34 and was significantly different in advanced-grade tumors (r=0.268; p=0.009). A similar significant difference was observed between the high expression of CD34 and type II endometrial cancer histology (p<0.001). However, there was no relationship between the MVD and stage or survival rates. Conclusion: High expression of HIF-1α is associated with tumoral angiogenesis in endometrial adenocarcinomas. Further studies targeting HIF-1α for disrupting mechanisms essential for tumor growth in endometrium cancer will be significant investigations in the future.
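The HIF-1α/CD34 relationship in PUBMED:24627673 is reported as a correlation coefficient (r = 0.268; p = 0.009). A rank correlation such as Spearman's is a common choice for semi-quantitative immunostaining scores, and the sketch below shows that computation; the paired scores are invented placeholders, and both the Spearman choice and the use of SciPy are assumptions rather than details given by the study.

```python
# Minimal sketch: rank correlation between hypothetical HIF-1a staining
# scores (0-3) and CD34-based microvessel density counts per tumour.
from scipy.stats import spearmanr

hif1a_scores = [0, 1, 1, 2, 2, 3, 3, 3, 2, 1]
cd34_mvd = [12, 18, 15, 26, 30, 38, 41, 35, 24, 20]

rho, p_value = spearmanr(hif1a_scores, cd34_mvd)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```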
abstract_id: PUBMED:23426782 Downregulation of vasohibin-2, a novel angiogenesis regulator, suppresses tumor growth by inhibiting angiogenesis in endometrial cancer cells. The vasohibin-2 (VASH2) gene was originally found to be expressed in infiltrating mononuclear cells of a mouse model of hypoxia-induced subcutaneous angiogenesis. These cells are mobilized from bone marrow to promote angiogenesis. Recently, VASH2 has been demonstrated to be expressed in several types of cancer in which it promotes tumor development through angiogenesis. However, its role in endometrial cancer remains unknown. Using quantitative reverse transcription-polymerase chain reaction (RT-PCR), we found that VASH2 was overexpressed in several human endometrial cancer cell lines, including the HEC50B cell line, which we used to further examine the role of VASH2. Although knockdown of VASH2 with stable transfection of shRNA had little effect on the proliferation of HEC50B cells in vitro, knockdown in an in vivo murine xenograft model inhibited tumor growth by decreasing tumor angiogenesis. In addition, the supernatant from HEC50B cells that expressed VASH2 significantly promoted the proliferation of human umbilical vein endothelial cells. By contrast, knockdown of VASH2 significantly attenuated the proliferative effect. These results indicate that VASH2 contributes to the development of endometrial cancer by promoting angiogenesis through a paracrine mode of action. Consequently, VASH2 may be considered to be a novel molecular target for endometrial cancer therapy. abstract_id: PUBMED:15032273 HSP27 in patients with ovarian carcinoma: still an independent prognostic indicator at 60 months follow-up. Objective: Heat shock protein 27 (HSP27) is produced in response to pathophysiologic stress in animal cells. The authors have previously shown that HSP27 is an independent prognostic indicator in patients with ovarian carcinoma. The present study was performed to see whether HSP27 remained an independent prognostic indicator with longer follow-up. Methods: One hundred and three consecutive patients with epithelial ovarian carcinoma were studied. Slides were prepared from fresh tissue. HSP27 staining was performed as previously described. Patient records were examined for FIGO stage, grade, histology, level of cytoreduction and survival. Results: One hundred and three patients were followed for a mean of 60 months. Twenty patients had FIGO Stage I disease, four Stage II, 59 Stage III, and 20 Stage IV. Immunohistochemical (IHC) staining for HSP27 was not related to histologic grade, level of cytoreduction or histologic subtype. A statistically significant decrease in HSP27 staining was found to correlate with increased FIGO stage (p = 0.008). Using Cox regression analysis, HSP27 staining (p = 0.025), stage (p = 0.0012), and level of cytoreduction (p < 0.0001) were independent predictors of survival in these patients. Conclusion: Cox regression analysis found HSP27 to be an independent indicator of prognosis and survival in patients with ovarian carcinoma who had longer follow-up. Decreased HSP27 staining was related to decreased survival. This study confirms the authors' earlier report on the importance of HSP27 as a prognostic indicator in ovarian carcinoma. abstract_id: PUBMED:16211359 Recent advances in endometrial angiogenesis research. This review summarises recent research into the mechanisms and regulation of endometrial angiogenesis.
Understanding of when and by what mechanisms angiogenesis occurs during the menstrual cycle is limited, as is knowledge of how it is regulated. Significant endometrial endothelial cell proliferation occurs at all stages of the menstrual cycle in humans, unlike most animal models where a more precise spatial relationship exists between endothelial cell proliferation and circulating levels of oestrogen and progesterone. Recent stereological data has identified vessel elongation as a major endometrial angiogenic mechanism in the mid-late proliferative phase of the cycle. In contrast, the mechanisms that contribute to post-menstrual repair and secretory phase remodelling have not yet been determined. Both oestrogen and progesterone/progestins appear to have paradoxical actions, with recent studies showing that under different circumstances both can promote as well as inhibit endometrial angiogenesis. The relative contribution of direct versus indirect effects of these hormones on the vasculature may help to explain their pro- or anti-angiogenic activities. Recent work has also identified the hormone relaxin as a player in the regulation of endometrial angiogenesis. While vascular endothelial growth factor (VEGF) is fundamental to endometrial angiogenesis, details of how and when different endometrial cell types produce VEGF, and how production and activity is controlled by oestrogen and progesterone, remains to be elucidated. Evidence is emerging that the different splice variants of VEGF play a major role in regulating endometrial angiogenesis at a local level. Intravascular neutrophils containing VEGF have been identified as having a role in stimulating endometrial angiogenesis, although other currently unidentified mechanisms must also exist. Future studies to clarify how endometrial angiogenesis is regulated in the human, as well as in relevant animal models, will be important for a better understanding of diseases such as breakthrough bleeding, menorrhagia, endometriosis and endometrial cancer. abstract_id: PUBMED:10479512 p53 expression as a prognostic indicator of 5-year survival in endometrial cancer. Background: One of the most common genetic alterations to occur in human cancers is an alteration of the p53 tumor suppressor gene. The purpose of this article was to build upon the authors' previous work with p53 and determine whether p53 was a prognostic indicator of 5-year survival. Methods: One hundred thirty-seven consecutively surgically treated patients with endometrial cancer had their p53 expression studied by immunoperoxidase staining and quantified by image analysis. All patients were evaluable for 5-year survival. Results: One hundred three patients had endometrioid adenocarcinoma; 6, adenosquamous carcinoma; 14, papillary serous carcinoma; 10, clear cell carcinoma; and 4, undifferentiated carcinoma. p53 expression ranged from 0.0 to 58.2% positive nuclear area with a mean of 11.5% (median 2.6%) for the cohort. For the patients with endometrioid carcinoma, the mean p53 expression was 7.1% while for the nonendometrioid tumors it was 24.6% (P<0.001). Fifty-nine of the 103 endometrioid tumors (57.3%) stained positive for p53 while 32 of the 34 nonendometrioid (94.1%) tumors stained positive (P<0.001). Increasing histologic grade correlated with an increasing p53 expression (P = 0.003). The percentage of tumors expressing p53 was found to be higher in FIGO stage II, III, and IV than in FIGO stage I cancer (P = 0.003).
However, mean p53 expression did not differ between early (stage I) and advanced (stage II, III, and IV) cancers (P = 0.088). Utilizing 5-year survival as the endpoint for multivariate analysis, FIGO stage (P = 0.0028) and p53 expression (P<0.001) were the only independent prognostic indicators found. Conclusion: p53 expression is more commonly found in nonendometrioid than in endometrioid adenocarcinoma of the endometrium. It, along with FIGO stage, is an independent prognostic indicator of 5-year survival. Answer: The study of angiogenesis, the formation of new blood vessels, has become increasingly relevant as a prognostic indicator in endometrial carcinoma. Several studies have investigated the relationship between angiogenesis and the prognosis of endometrial cancer, with many finding a correlation between increased microvessel density (MVD) or angiogenic factor expression and poorer outcomes. One study found that patients with endometrial cancer who had tumors with high capillary density had a significantly shorter mean survival time compared to those with tumors of low capillary density. This suggests that angiogenesis correlates with survival in endometrial carcinoma (PUBMED:8678154). Another study supported this finding, stating that angiogenesis intensity in endometrial cancer is an independent prognostic factor and statistically correlates with FIGO stage of disease, histological type and grade of tumor, depth of myometrial invasion, and metastasis (PUBMED:16417101). Further research has shown that angiogenesis plays an important role in disease progression and survival in patients with gynecologic malignancies, including endometrial and ovarian adenocarcinomas. High microvessel counts in the primary ovarian tumor or omental metastases may serve as a prognostic indicator for survival (PUBMED:10021305). Intratumoral angiogenesis has also been investigated as a prognostic indicator. One study found that in stage I endometrial adenocarcinomas, high MVD cases had a significantly worse prognosis compared to medium MVD, suggesting that intratumoral angiogenesis may be useful in identifying patients who could benefit from additional treatment (PUBMED:10566619). The evaluation of angiodensity, a measure of the density of blood vessels within a tumor, has been proposed as a helpful prognostic parameter in endometrial carcinoma, with higher values of this parameter having an adverse influence on the survival rate (PUBMED:14674140). Additionally, the expression of hypoxia inducible factor 1 alpha (HIF-1α), a marker for hypoxia and tumoral angiogenesis, has been found to be associated with advanced grade endometrial cancers and correlated with CD34 expression in tumor cells. However, no relation was found between HIF-1α and stage, overall survival rates, or histological type (PUBMED:24627673).
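The survival contrast in the answer above (mean survival 123 versus 75 months for low- versus high-capillary-density tumors, PUBMED:8678154) comes from Kaplan-Meier analysis. The sketch below plots two such curves; the durations and event indicators are invented placeholders, and the use of Python's lifelines package is an assumption rather than the study's actual tooling.

```python
# Minimal sketch: Kaplan-Meier curves for two capillary-density groups.
from lifelines import KaplanMeierFitter

# Hypothetical survival times (months) and event flags (1 = death).
low_t, low_e = [60, 90, 110, 130, 150], [0, 1, 0, 0, 1]
high_t, high_e = [30, 45, 60, 75, 100], [1, 1, 1, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(low_t, event_observed=low_e, label="low capillary density")
ax = kmf.plot_survival_function()
kmf.fit(high_t, event_observed=high_e, label="high capillary density")
kmf.plot_survival_function(ax=ax)  # overlay the second curve
```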
Instruction: Can digital breast tomosynthesis replace conventional diagnostic mammography views for screening recalls without calcifications? Abstracts: abstract_id: PUBMED:23345348 Can digital breast tomosynthesis replace conventional diagnostic mammography views for screening recalls without calcifications? A comparison study in a simulated clinical setting. Objective: This study evaluated digital breast tomosynthesis (DBT) as an alternative to conventional diagnostic mammography in the workup of noncalcified findings recalled from screening mammography in a simulated clinical setting that incorporated comparison mammograms and breast ultrasound results. Subjects And Methods: One hundred forty-six women, with 158 abnormalities, underwent diagnostic mammography and two-view DBT. Three radiologists viewed the abnormal screening mammograms, comparison mammograms, and DBT images and recorded a DBT BI-RADS category and confidence score for each finding. Readers did not view the diagnostic mammograms. A final DBT BI-RADS category, incorporating ultrasound results in some cases, was determined and compared with the diagnostic mammography BI-RADS category using kappa statistics. Sensitivity and specificity were calculated for DBT and diagnostic mammography. Results: Agreement between DBT and diagnostic mammography BI-RADS categories was excellent for readers 1 and 2 (κ = 0.91 and κ = 0.84) and good for reader 3 (κ = 0.68). For readers 1, 2, and 3, sensitivity and specificity of DBT for breast abnormalities were 100%, 100%, and 88% and 94%, 93%, and 89%, respectively. The clinical workup averaged three diagnostic views per abnormality and ultrasound was requested in 49% of the cases. DBT was adequate mammographic evaluation for 93-99% of the findings and ultrasound was requested in 33-55% of the cases. Conclusion: The results of this study suggest that DBT can replace conventional diagnostic mammography views for the evaluation of noncalcified findings recalled from screening mammography and achieve similar sensitivity and specificity. Two-view DBT was considered adequate mammographic evaluation for more than 90% of the findings. There was minimal change in the use of ultrasound with DBT compared with diagnostic mammography. abstract_id: PUBMED:35768941 Assessment of screen-recalled abnormalities for digital breast tomosynthesis versus digital mammography screening in the BreastScreen Maroondah trial. Introduction: Australia's first population-based pilot trial comparing digital breast tomosynthesis (DBT) and digital mammography (DM) screening reported detection measures in 2019. This study describes the trial's secondary outcomes pertaining to the assessment process in women screened with DBT or DM, including the type of recalled abnormalities and the procedures performed. Methods: Women with suspected abnormalities at screening were recalled for further investigation. Outcome measures were number of lesions assessed, types of imaging findings recalled to assessment, and data on testing and assessment outcomes; these were reported using descriptive analyses of lesion-specific data. Results: A total of 274 lesions and 203 lesions were reported in the DBT-screened and DM-screened groups, respectively. There was a higher proportion of lesions depicted as calcifications (32.4% vs 21.3%) and a lower proportion of lesions depicted as asymmetrical densities (3.2% vs 15.7%) for DBT recalls than for DM recalls.
A lower proportion of DBT-recalled lesions was assessed with additional mammography than DM-recalled lesions (49.3% vs 93.1%). Higher proportions of DBT-recalled lesions than DM-recalled lesions were investigated with clinical breast examination (50.4% vs 39.9%), core needle biopsy (45.6% vs 28.6%) and open biopsy (4.0% vs 1.0%). Similar proportions of DBT- and DM-recalled lesions were assessed using ultrasound (76.3% vs 71.4%). Conclusion: Assessment of screen-recalled lesions showed that, compared with DM, DBT found more benign and more malignant lesions, and generally required more procedures except for less additional mammography workup. These findings show that a transition to DBT screening changes the assessment workload. abstract_id: PUBMED:31415739 Survey Results Regarding Uptake and Impact of Synthetic Digital Mammography With Tomosynthesis in the Screening Setting. Synthesized digital mammography (SM) was developed to replace digital mammography (DM) in digital breast tomosynthesis (DBT) imaging to reduce radiation dose. This survey assessed utilization and attitudes regarding SM in DBT screening. The study was institutional review board exempt. An online survey was sent to members of the Society of Breast Imaging in June 2018. Questions included practice information, utilization of DBT and SM, perception of change in recall rates (RRs) and cancer detection rates (CDRs) with SM-DBT versus DM-DBT, and attitudes regarding SM versus DM in DBT screening. χ2 tests were used to compare response frequencies across groups. In all, 312 of 2,600 Society of Breast Imaging members responded to the survey (12%). Of respondents, 96% reported DBT capability, and 83% reported SM capability. Of those without SM, the most cited reasons were cost or administration and image quality concerns (both 32%). In addition, 40% reported combined SM and DM use in DBT screens, and 52% reported SM use without DM in the majority of DBT screens. The overall satisfaction with SM was 3.4 of 5 (1-5 scale). Most cited SM advantages were decreased dose (85%) and increased lesion conspicuity (27%). The most cited SM disadvantages were calcification characterization (61%) and decreased image quality (31%). Most respondents were unsure if CDRs changed (44%) and RRs changed (30%), with few reporting adverse outcomes (6% RR increase, 1% CDR decrease). Most radiologists screening with DBT have SM, but only one-half have replaced DM with SM. Despite few reported adverse screening outcomes with SM-DBT, radiologists have concerns about image quality, specifically calcification characterization. abstract_id: PUBMED:34053786 Mammographic features and screening outcome in a randomized controlled trial comparing digital breast tomosynthesis and digital mammography. Purpose: To compare the distribution of mammographic features among women recalled for further assessment after screening with digital breast tomosynthesis (DBT) versus digital mammography (DM), and to assess associations between features and final outcome of the screening, including immunohistochemical subtypes of the tumour. Methods: This randomized controlled trial was performed in Bergen, Norway, and included 28,749 women, of whom 1015 were recalled due to mammographic findings. Mammographic features were classified according to a modified BI-RADS scale. The distributions were compared using 95 % confidence intervals (CI). Results: Asymmetry was the most common feature of all recalls, 24.3 % (108/444) for DBT and 38.9 % (222/571) for DM.
Spiculated mass was the most common feature for breast cancer after screening with DBT (36.8 %, 35/95, 95 %CI: 27.2-47.4), while calcifications (23.0 %, 20/87, 95 %CI: 14.6-33.2) were the most frequent after DM. Among women screened with DBT, 0.13 % (95 %CI: 0.08-0.21) had a benign outcome after recall due to indistinct mass, while the percentage was 0.28 % (95 %CI: 0.20-0.38) for DM. The distributions were 0.70 % (95 %CI: 0.57-0.85) versus 1.46 % (95 %CI: 1.27-1.67) for asymmetry and 0.24 % (95 %CI: 0.16-0.33) versus 0.54 % (95 %CI: 0.43-0.68) for obscured mass, among women screened with DBT versus DM, respectively. Spiculated mass was the most common feature among women diagnosed with non-luminal A-like cancer after DBT and after DM. Conclusions: Spiculated mass was the dominant feature for breast cancer among women screened with DBT, while calcifications were the most frequent feature for DM. Further studies exploring the clinical relevance of mammographic features visible particularly on DBT are warranted. abstract_id: PUBMED:30933648 Can Digital Breast Tomosynthesis Replace Full-Field Digital Mammography? A Multireader, Multicase Study of Wide-Angle Tomosynthesis. OBJECTIVE. The purpose of this study was to test the hypothesis that two-view wide-angle digital breast tomosynthesis (DBT) can replace full-field digital mammography (FFDM) for breast cancer detection. SUBJECTS AND METHODS. In a multireader multicase study, bilateral two-view FFDM and bilateral two-view wide-angle DBT images were independently viewed for breast cancer detection in two reading sessions separated by more than 1 month. From a pool of 764 patients undergoing screening and diagnostic mammography, 330 patient-cases were selected. The endpoints were the mean ROC AUC for the reader per breast (breast level), ROC AUC per patient (subject level), noncancer recall rates, sensitivity, and specificity. RESULTS. Twenty-nine of 31 readers performed better with DBT than FFDM regardless of breast density. There was a statistically significant improvement in readers' mean diagnostic accuracy with DBT. The subject-level AUC increased from 0.765 (standard error [SE], 0.027) for FFDM to 0.835 (SE, 0.027) for DBT (p = 0.002). Breast-level AUC increased from 0.818 (SE, 0.019) for FFDM to 0.861 (SE, 0.019) for DBT (p = 0.011). The noncancer recall rate per patient was reduced by 19% with DBT (p < 0.001). Masses and architectural distortions were detected more with DBT (p < 0.001); calcifications trended lower (p = 0.136). Accuracy for detection of invasive cancers was significantly greater with DBT (p < 0.001). CONCLUSION. Reader performance in breast cancer detection is significantly higher with wide-angle two-view DBT independent of FFDM, verifying the robustness of DBT as a sole view. However, results of perception studies in the vision sciences support the inclusion of an overview image. abstract_id: PUBMED:33142193 Is There a Difference in the Diagnostic Outcomes of Calcifications Initially Identified on Synthetic Tomosynthesis Versus Full-Field Digital Mammography Screening? Purpose: To compare the outcomes of microcalcifications recalled on full-field digital (FFDM) and FFDM and combined tomosynthesis (Combo) to synthetic (SM) screening mammograms.
Method: We reviewed medical records, radiology, and pathology reports of all patients found to have abnormal calcifications requiring further evaluation on mammography screening at our institution between 11/1/2016 and 11/1/2018, and collected patient demographics, calcification morphology and distribution, and mammography technique (SM, FFDM, or Combo). We used biopsy pathology or at least 1-year imaging follow-up to establish the overall diagnostic outcome (benign or malignant). Fisher's exact test was used to compare validation rates at diagnostic work-up, BI-RADS category, and final outcome of calcifications identified on each screening technique. A t-test was used for continuous variables. Results: Of 699 calcifications recalled in 596 women, 176 (30%) of 596 were from SM and 420 (70%) from FFDM/Combo. There was a significantly higher rate of calcifications unvalidated at diagnostic work-up for SM compared to FFDM/Combo (10% vs. 0.8%, p &lt; 0.0001). SM calcifications were more likely to receive BI-RADS 2/3 at diagnostic work-up compared to FFDM/Combo ones (55% vs. 42%, p = 0.003). Of 346 (49%) calcifications that underwent biopsy, 88 (25%) were malignant (36% of SM vs. 22% of FFDM/Combo, OR: 0.5 [95% CI: 0.3, 0.8], p = 0.01). Of 622 lesions with an established diagnostic outcome, there was no difference in overall benign or malignant outcome between SM and FFDM/Combo (17% vs. 13%, OR: 0.8 [95% CI: 0.5, 1.2], p = 0.27). Conclusions: Synthetic tomosynthesis screening results in a higher rate of false positive and unvalidated calcification recalls compared to FFDM/Combo. abstract_id: PUBMED:30622686 Diagnostic Performance of Digital Breast Tomosynthesis for Breast Suspicious Calcifications From Various Populations: A Comparison With Full-field Digital Mammography. The diagnostic performance difference between digital breast tomosynthesis (DBT) and conventional full-field digital mammography (FFDM) for breast suspicious calcifications from various populations is unclear. The objective of this study is to determine whether DBT exhibits a diagnostic advantage for breast suspicious calcifications from various populations compared with FFDM. Three hundred and five patients were enrolled (seven of whom had bilateral lesions), and 312 breast images were retrospectively analyzed by three radiologists independently. The postoperative pathology of breast calcifications was the gold standard. Breast cancer was diagnosed utilizing DBT and FFDM with sensitivities of 92.9% and 88.8%, specificities of 87.9% and 75.2%, positive predictive values of 77.8% and 62.1%, and negative predictive values of 96.4% and 93.6%, respectively. DBT exhibited significantly higher diagnostic accuracy for benign calcifications compared with FFDM (87.9% vs 75.2%), and no advantage in the diagnosis of malignant calcifications. DBT diagnostic accuracy was notably higher than FFDM in premenopausal (88.4% vs 78.8%), postmenopausal (90.2% vs 77.2%), and dense breast cases (89.4% vs 81.9%). There was no significant difference in non-dense breast cases. In our study, DBT exhibited a superior advantage in dense breast and benign calcification cases compared to FFDM, while no advantage was observed in non-dense breast or malignant calcification cases. Thus, in breast cancer screening for young women with dense breasts, DBT may be recommended for accurate diagnosis.
Our findings may assist clinicians in applying the optimal techniques for different patients and provide a theoretical basis for updating breast cancer screening guidelines. abstract_id: PUBMED:29552533 Impact of Addition of Digital Breast Tomosynthesis to Digital Mammography in Lesion Characterization in Breast Cancer Patients. Context: Digital breast tomosynthesis (DBT) is a new development in mammography technology which reduces the effect of overlapping tissue. Aims: The aim is to interrogate whether the addition of DBT to digital mammography (DM) helps in better characterization of mammographic abnormalities in breast cancer patients in general and in different breast compositions. Settings And Design: Retrospective, analytical cross-sectional study. Subjects And Methods: Mammographic findings in 164 patients with 170 pathologically proven lesions were evaluated using first DM alone and thereafter DM with the addition of DBT. The perceived utility of adjunct DBT was scored using a rating of 0-2: a score of 0 indicated that DM plus DBT was comparable to DM alone, 1 that it was slightly better, and 2 that it was definitely better. Statistical Analysis: McNemar's chi-square test, Fisher's exact test. Results: On DM, 149 lesions were characterized as masses with or without calcifications, 18 as asymmetries with or without calcifications, 2 as architectural distortions, and 1 as microcalcifications alone. Adjunct DBT helped in better morphological characterization of 17 lesions, with revelation of underlying masses in 16 asymmetries and one architectural distortion. Adjunct DBT was perceived to be slightly better than DM alone in 44.7% of lesions, and definitely better in 22.9%. The proportion of lesions showing score 1 or 2 improvement was significantly higher in heterogeneously and extremely dense breasts (P &lt; 0.001). Conclusions: Adjunct DBT improves morphological characterization of lesions in patients with breast cancer. It highlights more suspicious features of lesions that indicate the presence of cancer, particularly in dense breasts. abstract_id: PUBMED:36226804 Diagnostic performance of tomosynthesis, digital mammography and a dedicated digital specimen radiography system versus pathological assessment of excised breast lesions. Background: The aim of the study was to compare the performance of full-field digital mammography (FFDM), digital breast tomosynthesis and a dedicated digital specimen radiography system (SRS) in consecutive patients, and to compare the margin status of resected lesions versus pathological assessment. Patients And Methods: Resected tissue specimens from consecutive patients who underwent intraoperative breast specimen assessment following wide local excision or oncoplastic breast conservative surgery were examined by FFDM, tomosynthesis and SRS. Two independent observers retrospectively evaluated the visibility of lesions, size, margins, spiculations, calcifications and diagnostic certainty, and chose the best performing method in a blinded manner. Results: We evaluated 216 specimens from 204 patients. All target malignant lesions were removed with no tumour on ink. One papilloma had positive microscopic margins and one patient underwent reoperation owing to extensive in situ components. There were no significant differences in measured lesion size among the three methods. However, tomosynthesis was the most accurate modality when compared with the final pathological report.
Both observers reported that tomosynthesis had significantly better lesion visibility than SRS and FFDM, which translated into a significantly greater diagnostic certainty. Tomosynthesis was superior to the other two methods in identifying spiculations and calcifications. Both observers reported that tomosynthesis was the best performing method in 76.9% of cases. The interobserver reproducibilities of lesion visibility and diagnostic certainty were high for all three methods. Conclusions: Tomosynthesis was superior to SRS and FFDM for detecting and evaluating the target lesions, spiculations and calcifications, and was therefore more reliable for assessing complete excision of breast lesions. abstract_id: PUBMED:28780702 Clinical implementation of synthesized mammography with digital breast tomosynthesis in a routine clinical practice. Background: Most published studies evaluating digital breast tomosynthesis (DBT) included a separate 2-dimensional full-field digital mammogram (FFDM) for DBT screening protocols, increasing radiation from screening mammography. Synthesized mammography (SM) creates a 2-dimensional image from the DBT source data, and if used in place of FFDM, it reduces radiation of DBT screening. This study evaluated the implementation of SM + DBT in routine screening practice in terms of recall rates, cancer detection rates (CDR), % of minimal cancers, % of node-positive cancers, and positive predictive values (PPV). Materials And Methods: A multivariate retrospective institutional analysis was performed on 31,979 women who obtained screening mammography (10/2013-12/2015) with cohorts divided by modality (SM + DBT, FFDM + DBT, and FFDM). We adjusted for comparison mammograms, age, breast density, and the interpreting radiologist. Recall type was analyzed for differences (focal asymmetry, asymmetry, masses, calcifications, architectural distortion). Results: SM + DBT significantly decreased the recall rate compared to FFDM (5.52 vs. 7.83%, p &lt; 0.001) with no differences in overall CDR (p = 0.66), invasive and/or in situ CDR, or percentages of minimal and node-negative cancers. PPV1 significantly increased with SM + DBT relative to FFDM (9.1 vs. 6.2%, p = 0.02). SM + DBT did not differ significantly in recall rate or overall CDR compared to FFDM + DBT. There were statistically significant differences in certain findings recalled by screening modality (e.g., focal asymmetries). Conclusions: SM + DBT reduces false positives compared to FFDM, while maintaining the CDR and other desirable audit outcome data. SM + DBT is more accurate than FFDM alone, and is a desirable alternative to FFDM + DBT, given the added benefit of radiation reduction. Answer: Yes, digital breast tomosynthesis (DBT) can replace conventional diagnostic mammography views for screening recalls without calcifications. A study evaluated DBT as an alternative to conventional diagnostic mammography in the workup of noncalcified findings recalled from screening mammography. The results showed that DBT achieved similar sensitivity and specificity to diagnostic mammography, with excellent agreement between DBT and diagnostic mammography BI-RADS categories for two readers and good agreement for the third reader. DBT was considered adequate mammographic evaluation for more than 90% of the findings, suggesting that DBT can replace conventional diagnostic mammography views in the evaluation of noncalcified findings (PUBMED:23345348). 
Furthermore, another study comparing DBT and digital mammography (DM) screening reported that DBT required fewer additional mammography workups than DM for screen-recalled lesions, indicating that DBT could potentially streamline the assessment process for recalled abnormalities (PUBMED:35768941). Additionally, a multireader, multicase study found that wide-angle DBT significantly improved reader performance in breast cancer detection compared to full-field digital mammography (FFDM), with a reduction in noncancer recall rates and better detection of masses and architectural distortions, although calcifications trended lower (PUBMED:30933648). In summary, the evidence suggests that DBT can serve as a viable alternative to conventional diagnostic mammography for the evaluation of noncalcified abnormalities recalled from screening, with similar or improved diagnostic outcomes.
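A note on the screening audit arithmetic that recurs in the abstracts above (recall rate, cancer detection rate, PPV1): these are simple ratios over a screening cohort. The following minimal Python sketch shows how such figures are derived; all counts in it are invented for illustration and are not taken from any of the cited studies.

# Hypothetical screening-audit sketch: how recall rate, cancer detection
# rate (CDR), and PPV1 figures like those quoted above are computed.
# All counts below are invented for illustration only.

def audit_metrics(n_screened, n_recalled, n_cancers_detected):
    recall_rate = n_recalled / n_screened          # usually reported as a %
    cdr = n_cancers_detected / n_screened * 1000   # cancers per 1,000 screens
    ppv1 = n_cancers_detected / n_recalled         # cancers per recalled woman
    return recall_rate, cdr, ppv1

# Invented example: 10,000 screens, 550 recalls, 50 screen-detected cancers.
rr, cdr, ppv1 = audit_metrics(10_000, 550, 50)
print(f"recall rate {rr:.2%}, CDR {cdr:.1f}/1000, PPV1 {ppv1:.1%}")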
Instruction: Does vitamin C supplementation influence the levels of circulating oxidized LDL, sICAM-1, sVCAM-1 and vWF-antigen in healthy male smokers? Abstracts: abstract_id: PUBMED:15127090 Does vitamin C supplementation influence the levels of circulating oxidized LDL, sICAM-1, sVCAM-1 and vWF-antigen in healthy male smokers? Objective: To examine the effects of vitamin C supplementation on the concentration of oxidation markers, in particular, circulating oxidized LDL (OxLDL), and on endothelial activation markers. Design: Randomized double-blind, placebo-controlled crossover trial. Setting: Belgian population of the city of Leuven. Subjects: A total of 34 healthy male smokers aged 26-73 y. Intervention: Smokers were randomly assigned to receive either vitamin C (250 mg twice daily) or placebo capsules, each to be taken for 4 weeks. After a 1-week washout period, participants then crossed over to the alternative capsules for a further 4 weeks. Main Outcome Measures: Markers of oxidation (bilirubin, uric acid, alpha-tocopherol, retinol, malondialdehyde, circulating oxidized LDL (OxLDL)) and markers of endothelial activation (sICAM-1, sVCAM-1, vWF-antigen) were analysed. Results: Plasma ascorbate concentrations significantly increased from 46.6+/-17.6 to 70.1+/-21.2 μmol/l after a 4-week treatment with 500 mg vitamin C per day. The other plasma antioxidant concentrations, including bilirubin, uric acid, alpha-tocopherol and retinol, were similar in both treatment periods. Vitamin C did not change plasma malondialdehyde or circulating OxLDL compared with placebo (vitamin C 0.73+/-0.25 mg/dl OxLDL; placebo 0.72+/-0.21 mg/dl OxLDL). After vitamin C supplementation, neither sICAM-1 nor sVCAM-1 levels, nor the concentration of vWF-antigen, significantly differed from the placebo condition. Conclusions: Oral supplementation of vitamin C is not associated with changes in markers of oxidation or endothelial activation in healthy male smokers. abstract_id: PUBMED:24584618 Prognostic value of serum von Willebrand factor, but not soluble ICAM and VCAM, for mortality and cardiovascular events is independent of residual renal function in peritoneal dialysis patients. Objective: We explored associations between markers of endothelial dysfunction and outcome events, and whether those associations were independent of residual renal function (RRF) in patients on peritoneal dialysis. Methods: The study enrolled 261 incident patients and 68 healthy control subjects who were followed until death, censoring, or study end. Demographics, biochemistry, markers of inflammation (C-reactive protein) and endothelial dysfunction [soluble intercellular adhesion molecule 1 (sICAM), soluble vascular adhesion molecule 1 (sVCAM), and von Willebrand factor (vWf)] were examined at baseline. Outcome events included all-cause death and fatal and nonfatal cardiovascular (CV) events. Results: Mean levels of vWf, sICAM, and sVCAM were significantly higher in patients than in healthy control subjects. Levels of sICAM and sVCAM, but not vWf, were significantly correlated with RRF. Levels of sICAM and vWf both predicted all-cause mortality and fatal and nonfatal CV events after adjustment for recognizable CV risk factors. The association between sICAM and outcome events disappeared after further adjustment for RRF. However, RRF did not change the predictive role of vWf for outcome events.
Compared with the lowest vWf quartile (6.6% - 73.9%), the highest vWf quartile (240.9% - 1161%) predicted the highest risk for fatal and nonfatal CV events (adjusted hazard ratio: 2.05; 95% confidence interval: 1.15 to 3.64; p = 0.014). We observed no associations between sVCAM and RRF, or sVCAM and any outcome event. Conclusions: The prognostic value of vWf, but not sICAM, is independent of RRF in predicting mortality and CV events. abstract_id: PUBMED:11053614 The effect of vitamin C supplementation on coagulability and lipid levels in healthy male subjects. Although dietary intake and plasma levels of vitamin C have been inversely associated with cardiovascular disease, the mechanism through which vitamin C may exert its effect has not been fully explained. Since thrombosis plays an important role in the onset of cardiovascular disease, we investigated the effect of vitamin C on measures of hemostasis that have been associated with cardiovascular risk. The effect of vitamin C on lipid levels was also evaluated. In a randomized, placebo-controlled, crossover study, we determined the effect of 2 g daily of vitamin C supplementation on platelet adhesion and aggregation, levels of tissue plasminogen activator antigen, plasminogen activator inhibitor, fibrinogen, plasma viscosity, von Willebrand factor, and lipid levels in 18 healthy male volunteers with low-normal vitamin C levels. No striking effects of vitamin C on the hemostatic measures were observed, although tissue plasminogen activator antigen levels were inversely related to vitamin C levels. Von Willebrand factor levels were slightly higher with vitamin C, although within the normal range. Total cholesterol levels were 10% lower when subjects were receiving vitamin C compared to placebo (167+/-7 mg/dL vs. 184+/-7 mg/dL, P=0.007), although the total cholesterol/HDL ratio was not significantly different. Higher levels of tissue plasminogen activator antigen, which in the present study were associated with lower vitamin C levels, have been shown in prospective studies to convey an increased risk of cardiovascular events. Further studies of the effect of vitamin C on hemostatic measures are required in higher-risk populations or those with known cardiovascular disease. abstract_id: PUBMED:26400262 Dietary proteins improve endothelial function under fasting conditions but not in the postprandial state, with no effects on markers of low-grade inflammation. Endothelial dysfunction (ED) and low-grade inflammation (LGI) have a role in the development of CVD. The two studies reported here explored the effects of dietary proteins and carbohydrates on markers of ED and LGI in overweight/obese individuals with untreated elevated blood pressure. In the first study, fifty-two participants consumed a protein mix or maltodextrin (3×20 g/d) for 4 weeks. Fasting levels and 12 h postprandial responses of markers of ED (soluble intercellular adhesion molecule 1 (sICAM), soluble vascular cell adhesion molecule 1 (sVCAM), soluble endothelial selectin and von Willebrand factor) and markers of LGI (serum amyloid A, C-reactive protein and sICAM) were evaluated before and after the intervention. Biomarkers were also combined into mean Z-scores of ED and LGI. The second study compared 4 h postprandial responses of ED and LGI markers in forty-eight participants after ingestion of 0·6 g/kg pea protein, milk protein and egg-white protein. In addition, postprandial responses after maltodextrin intake were compared with a protein mix and sucrose.
The first study showed significantly lower fasting ED Z-scores and sICAM after 4 weeks on the high-protein diet (P≤0·02). The postprandial studies found no clear differences in ED and LGI between test meals. However, postprandial sVCAM decreased more after the protein mix compared with maltodextrin in both studies (P≤0·04). In conclusion, dietary protein is beneficial for fasting ED, but not for fasting LGI, after 4 weeks of supplementation. On the basis of Z-scores, postprandial ED and LGI were not differentially affected by protein sources or carbohydrates. abstract_id: PUBMED:21283820 Traffic air pollution and oxidized LDL. Background: Epidemiologic studies indirectly suggest that air pollution accelerates atherosclerosis. We hypothesized that individual exposure to particulate matter (PM) derived from fossil fuel would correlate with plasma concentrations of oxidized low-density lipoprotein (LDL), taken as a marker of atherosclerosis. We tested this hypothesis in patients with diabetes, who are at high risk for atherosclerosis. Methodology/Principal Findings: In a cross-sectional study of non-smoking adult outpatients with diabetes, we assessed individual chronic exposure to PM by measuring the area occupied by carbon in airway macrophages, collected by sputum induction, and by determining the distance from the patient's residence to a major road, through geocoding. These exposure indices were regressed against plasma concentrations of oxidized LDL, von Willebrand factor and plasminogen activator inhibitor 1 (PAI-1). We could assess the carbon load of airway macrophages in 79 subjects (58 percent). Each doubling in the distance of residence from major roads was associated with a 0.027 µm(2) decrease (95% confidence interval (CI): -0.048 to -0.0051) in the carbon load of airway macrophages. Independently of other covariates, we found that each increase of 0.25 µm(2) [interquartile range (IQR)] in carbon load was associated with an increase of 7.3 U/L (95% CI: 1.3 to 13.3) in plasma oxidized LDL. Each doubling in distance of residence from major roads was associated with a decrease of 2.9 U/L (95% CI: -5.2 to -0.72) in oxidized LDL. Neither the carbon load of macrophages nor the distance from residence to major roads was associated with plasma von Willebrand factor or PAI-1. Conclusions: The observed positive association, in a susceptible group of the general population, between plasma oxidized LDL levels and either the carbon load of airway macrophages or the proximity of the subject's residence to busy roads suggests a proatherogenic effect of traffic air pollution. abstract_id: PUBMED:15112887 Von Willebrand factor and autoantibodies against oxidized LDL in hemodialysis patients treated with vitamin E-modified dialyzers. Oxidant stress is a well-known cause of damage in the atherosclerotic process. Vitamin E is one of the most promising natural antioxidants. In this study we investigated whether a vitamin E-coated dialyzer was able to reduce the plasma levels of autoantibodies against oxidized LDL, von Willebrand factor (vWf) and thrombomodulin (TM) as markers of endothelial damage. In this controlled 6-month prospective study, we investigated these markers in two matched groups (n=16 each) of patients on regular hemodialysis not yet diagnosed with atherosclerotic cardiovascular disease (ACVD) (mean age=58.3+/-7.0 yrs, mean dialysis age=30.1+/-10.0 months), in which cellulosic (CLS) and vitamin E-modified dialyzers (CLE) were compared.
At inclusion, all the patients were treated with CLS. Then, the study group was shifted to CLE for 6 months. At baseline, the patients showed normal levels of vitamin E and high levels of oxLDL-Ab, vWf and TM compared to healthy subjects. In the CLE group, oxLDL-Ab and vWf, but not TM levels, decreased progressively (from 472+/-287 to 264+/-199 mU/mL, p&lt;0.0001, and from 101.1+/-7.5% to 76.7+/-18.5%, p&lt;0.001, respectively), and vitamin E increased from 4.40+/-0.81 to 7.81+/-1.16 microg/mg of cholesterol. At the end of the study, 8 of the patients treated with CLE were randomly selected and went back to the membrane without vitamin E for six months. They showed a significant increase in oxLDL-Ab and vWf levels and a significant reduction in tocopherol levels. In conclusion, CLE compared to cellulosic dialyzers can lower some indices of damage to LDL and endothelial cells. abstract_id: PUBMED:38307406 Silymarin prevents endothelial dysfunction by upregulating Erk-5 in oxidized LDL exposed endothelial cells. Extracellular signal-regulated kinase (Erk)-5 is a key mediator of endothelial cell homeostasis, and its inhibition causes loss of critical endothelial markers, leading to endothelial dysfunction (ED). Circulating oxidized low-density lipoprotein (oxLDL) has been identified as an underlying cause of ED and atherosclerosis in metabolic disorders. Silymarin (Sym), a flavonolignan, possesses various pharmacological activities; however, its preventive mechanism in ED warrants further investigation. Here, we have examined the effects of Sym in regulating the expression of Erk-5 and ameliorating ED using in vitro and in vivo models. The viability of primary human umbilical vein endothelial cells (pHUVECs) was measured by MTT assay; mRNA and protein expression by RT-qPCR and Western blotting; and a tube-formation assay was performed to examine endothelialness. In in-vivo experiments, normal chow-fed mice (control) or high-fat diet (HFD)-fed mice were administered Sym or an Erk-5 inhibitor (BIX02189), and body weight, blood glucose, plasma LDL and oxLDL levels, and expression of EC markers in the aorta were examined. Sym (5 μg/ml) maintained the viability and tube-formation ability of oxLDL-exposed pHUVECs. Sym increased the expression of Erk-5, vWF, and eNOS and decreased ICAM-1 at the transcription and translation levels in oxLDL-exposed pHUVECs. In HFD-fed mice, Sym reduced the body weight, blood glucose, LDL-cholesterol, and oxLDL levels, increased the levels of vWF and eNOS along with Erk-5, and decreased the level of ICAM-1 in the aorta. These data suggest that Sym could be a potent anti-atherosclerotic agent that elevates Erk-5 levels in ECs and prevents ED caused by oxidized LDL during HFD-induced obesity in mice. abstract_id: PUBMED:12050277 Vitamin E supplementation reduces plasma vascular cell adhesion molecule-1 and von Willebrand factor levels and increases nitric oxide concentrations in hypercholesterolemic patients. Up-regulation of vascular cell adhesion molecule-1 (VCAM-1) and reduced nitric oxide (NO) availability represent early characteristics of atherosclerosis. To evaluate whether the antioxidant vitamin E affected the circulating levels of soluble VCAM-1 (sVCAM-1) and the plasma metabolite of NO (nitrite+nitrate) in hypercholesterolemic patients, either vitamin E (400 IU or 800 IU/d for 8 wk) or placebo was randomly, double-blindly given to 36 hypercholesterolemic patients and 22 age- and sex-matched controls.
At baseline, hypercholesterolemic patients showed higher plasma sVCAM-1 (microg.liter(-1)) (591.2 +/- 132.5 vs. 505.0 +/- 65.6, P &lt; 0.007) and lower NO metabolite (microM) levels (15.9 +/- 3.4 vs. 29.2 +/- 5.1, P &lt; 0.0001) than controls. In hypercholesterolemic patients, 8 wk vitamin E (but not placebo) treatment significantly decreased circulating sVCAM-1 levels (400 IU: -148.9 +/- 84.6, P &lt; 0.009; 800 IU: -204.0 +/- 75.7, P &lt; 0.0001; placebo: -4.7 +/- 22.6, NS), whereas it increased NO metabolite concentrations (400 IU: +4.0 +/- 1.7, P &lt; 0.02; 800 IU: +5.5 +/- 0.8, P &lt; 0.0001; placebo: +0.1 +/- 1.1, NS) without affecting circulating low-density lipoprotein levels. Changes in both plasma sVCAM-1 and NO metabolite levels tended to correlate significantly (r = -0.515, P = 0.010; and r = 0.435, P = 0.034, respectively) with changes in vitamin E concentrations induced by vitamin E supplementation. In conclusion, isolated hypercholesterolemia both increased circulating sVCAM-1 and reduced NO metabolite concentrations. Vitamin E supplementation counteracts these alterations, thus representing a potential tool for endothelial protection in hypercholesterolemic patients. abstract_id: PUBMED:21862014 Biomarkers to predict clinical progression in small vessel disease strokes: prognostic role of albuminuria and oxidized LDL cholesterol. Objective: Clinical progression in lacunar strokes (LS) is an unpredictable and feared complication. Endothelial dysfunction (ED) is believed to be the first step in the pathophysiology of LS; therefore, we aimed to analyze the association of three markers of ED: albuminuria, von Willebrand factor (vWF), and oxidized LDL cholesterol (ox-LDL) with LS progression. Methods: From December 2007 to December 2010, 127 LS patients admitted within 6 h of symptom onset were prospectively assessed. Progression was defined as initial NIHSS score worsening ≥4 points within the first 72 h. Analysis of vWF and ox-LDL was done at admission. Albuminuria was measured in the first morning spot urine. The association between the 3 biomarkers and progression was tested using logistic regression analysis. Other clinical variables of interest were also studied. Discriminative power was analyzed with a receiver operating characteristic curve. Results: Twenty-two patients (17.3%) progressed. Progression was associated with worse outcome at 90 days. Albuminuria and ox-LDL were associated in univariate analysis; vWF was not. Adjusted ORs were: ox-LDL [OR: 1.03; 95% CI: 1.01-1.07, p=0.019], albuminuria [OR: 2.07; 95% CI: 1.04-4.13, p=0.039]. The association was linear without a cut-off point. Clinical variables were not associated with progression. The model including albuminuria and ox-LDL had a good predictive value [AUC: 0.80 (0.70-0.89)]. Conclusions: Albuminuria and ox-LDL levels are independently associated with higher risk of progression in LS. The lack of reliable clinical predictors makes biomarker research a priority to improve progression detection in this subtype of ischemic strokes. abstract_id: PUBMED:7582719 Altered levels of soluble adhesion molecules in rheumatoid arthritis, vasculitis and systemic sclerosis. We compared the levels of soluble adhesion molecules E-selectin (sE-selectin), intercellular adhesion molecule-1 (sICAM-1) and vascular cell adhesion molecule-1 (sVCAM-1) alongside von Willebrand factor (vWf), CRP and rheumatoid factor in 40 patients; the adhesion molecules and vWf were measured in serum by ELISA, rheumatoid factor by sheep red blood cell agglutination, and CRP by immunonephelometry.
Compared to controls, increased sE-selectin was found in patients with RA (P = 0.0015), vasculitis (P &lt; 0.0003) and SSc (P = 0.0126), whilst raised sICAM-1 was found in RA (P &lt; 0.0003), vasculitis (P &lt; 0.0003) and SSc (P &lt; 0.0378). sVCAM was lower in RA than in controls (P = 0.0102), but was unchanged in vasculitis or in SSc. vWf was raised in RA (P = 0.0102), vasculitis (P &lt; 0.0003) and SSc (P &lt; 0.0003). In a Spearman's rank analysis of all the data, vWf correlated with sVCAM-1 and sICAM-1 (both P &lt; 0.001), sE-selectin with sICAM-1 (P &lt; 0.001) and sVCAM with sICAM-1 (P &lt; 0.005). Levels of rheumatoid factor correlated with those of sE-selectin (P = 0.003) and sVCAM-1 (P = 0.012), but there were no correlations between any index and CRP. The strongest correlation within the RA group was between sICAM and sVCAM (P = 0.001); in vasculitis it was between sE-selectin and sICAM (P &lt; 0.001); and in SSc it was between sE-selectin and sVCAM (P = 0.019). These data suggest that the differing levels of vWf, sE-selectin and sICAM-1 in the inflammatory vasculitides may be useful in establishing a role for leucocyte/endothelial adhesion in these diseases. Answer: No, vitamin C supplementation does not influence the levels of circulating oxidized LDL (OxLDL), soluble intercellular adhesion molecule 1 (sICAM-1), soluble vascular adhesion molecule 1 (sVCAM-1), or von Willebrand factor (vWF) antigen in healthy male smokers. A study conducted on 34 healthy male smokers aged 26-73 years, who were randomly assigned to receive either vitamin C (250 mg twice daily) or placebo capsules for 4 weeks, followed by a 1-week washout period and crossover to the alternative capsules for another 4 weeks, found that plasma ascorbate concentrations significantly increased after vitamin C treatment. However, the other plasma antioxidant concentrations (bilirubin, uric acid, alpha-tocopherol, and retinol) were similar in both treatment periods, and the oxidation markers malondialdehyde and circulating OxLDL did not change compared with placebo. Additionally, after vitamin C supplementation, neither sICAM-1 nor sVCAM-1 levels, nor the concentration of vWF-antigen, significantly differed from the placebo condition (PUBMED:15127090).
Instruction: Do the elderly benefit from annual physical examination? Abstracts: abstract_id: PUBMED:28535772 Rural-urban difference in the use of annual physical examination among seniors in Shandong, China: a cross-sectional study. Background: Regular physical examination contributes to early detection and timely treatment, which is helpful in promoting healthy behaviors and preventing diseases. The objective of this study is to compare annual physical examination (APE) use between rural and urban elderly in China. Methods: A total of 3,922 participants (60+) were randomly selected from three urban districts and three rural counties in Shandong Province, China, and were interviewed using a standardized questionnaire. We performed unadjusted and adjusted logistic regression models to examine the difference in the utilization of APE between rural and urban elderly. Two adjusted logistic regression models were employed to identify the factors associated with APE use in rural and urban seniors, respectively. Results: The utilization rates of APE in rural and urban elderly are 37.4% and 76.2%, respectively. Factors including education level, exercise, watching TV, and number of non-communicable chronic conditions are associated with APE use in both rural and urban elderly. Hospitalization, self-reported economic status, and health insurance were found to be significant (p &lt; 0.05) predictors of APE use in rural elderly. Elderly people covered by Urban Resident Basic Medical Insurance (URBMI) (p &lt; 0.05, OR = 1.874) were more likely to use APE in urban areas. Conclusions: There is a big difference in APE utilization between rural and urban elderly. Interventions targeting identified at-risk subgroups, especially rural elderly, are essential to reduce this gap. Improving health literacy might help to increase the utilization rate of APE among the elderly. abstract_id: PUBMED:35627952 How Different Is the Annual Physical Examination of Older Migrants than That of Older Nonmigrants? A Coarsened Exact Matching Study from China. It has become a top priority to ensure equal rights for older migrants in China. This study aims to explore how the annual physical examination of older migrants differs from that of older nonmigrants in China by using a coarsened exact matching method, and to explore the factors affecting annual physical examination among older migrants in China. Data were drawn from the China Migrants Dynamic Survey 2015 and the China Health and Retirement Longitudinal Survey 2015. The coarsened exact matching method was used to analyse the difference in the annual physical examination of older migrants and nonmigrants. A logistic regression was used to analyse the factors affecting annual physical examination among older migrants. The annual physical examination rate of older migrants was 35.6%, which was significantly lower than that of older nonmigrants after matching (odds ratio = 0.91, p &lt; 0.05). It was affected by education, employment, hukou, household economic status, health, health insurance, main source of income, type of migration, range of migration, years of migration, having health records in the local community and number of local friends among older migrants in China. Older migrants adopted negative strategies in annual physical examination compared to older nonmigrants. Active strategies should be adopted to improve the equity of annual physical examination for older migrants in China. abstract_id: PUBMED:29650068 The Outpatient Physical Examination.
The physical examination in the outpatient setting is a valuable tool. Even in settings where there is a lack of evidence, such as the annual physical examination of an asymptomatic adult, the physical examination is beneficial for the physician-patient relationship. When a patient has specific symptoms, the physical examination, in addition to a thorough history, can help narrow down, or in many cases establish, a diagnosis. At a time when imaging and laboratory tests are easily available but expensive and potentially invasive, a skilled physical examination remains an important component of patient evaluation. abstract_id: PUBMED:12202069 Do the elderly benefit from annual physical examination? An example from Kaohsiung City, Taiwan. Background: This study evaluates the impact of free annual health examinations on survival of elderly (≥65 years of age) residents in Kaohsiung City, Taiwan. Methods: A stratified random sample scheme was used in each of the 11 districts of Kaohsiung City. A total of 1,193 elderly people were selected and interviewed in 1993; deaths and results of health check-ups were recorded through 1998. Results: While over 50% of the subjects received at least one health examination between 1993 and 1998, only 18% received three or more. Most (60%) subjects who received examinations in a given year also received examinations the subsequent year; most (over 70%) who did not receive examinations in a given year did not receive check-ups the following year. A Cox proportional hazards model showed that those who utilized the examination service had better survival probability than those who did not, given the same age, sex, education, marital status, living arrangements, and number of chronic diseases at baseline: the relative risk (RR) of mortality for those who ever utilized the health examination service was 0.50 (P &lt; 0.0001). Conclusions: Elderly subjects who received annual health examinations had lower mortality than those who did not. This finding should be interpreted cautiously, however, as the difference in survival may reflect better general health behaviors among those who participated in the program. abstract_id: PUBMED:34916726 Relationship of frequency of participation in a physical checkup and physical fitness in middle-aged and elderly people: the Yakumo study. An annual physical checkup is provided as part of the long-term Yakumo study. The checkup is voluntary and there is variation in the frequency of participation. The aim of this study was to examine the relationship of physical fitness with frequency of participation in this checkup. The subjects had all attended at least one annual physical checkup from 2006 to 2018. Data from 1,804 initial checkups were used for analysis. At the checkups, age, gender, height, weight, body mass index (BMI), and bone mineral density (BMD) were recorded, and physical activity was measured. The average number of physical checkups per participant over the 13 years was 2.4 (range 1-13). Daily exercise habits were found to be significantly associated with higher participation in physical checkups. Furthermore, between groups with low (1-5 times; &lt;90th percentile of participants) and high (≥6 times) participation, weight and BMI were significantly higher, and BMD, grip strength, 10-m gait time, back muscle strength, and two-step test were all significantly lower in the group with lower frequency of participation in the checkup.
In conclusion, our results show that frequency of participation in a voluntary annual physical checkup is significantly associated with physical fitness in middle-aged and elderly people. abstract_id: PUBMED:30227797 The Physical Examination of an 'Uncooperative' Elderly Patient. The physical examination of uncooperative elderly patients regularly presents physicians in private practice, in the hospital or in the nursing home with great challenges. The lack of cooperation itself can be an important indication of an underlying medical problem. Important elements to improve the patient's cooperation include ensuring basic needs, sufficient time and patience, adequate communication and good cooperation with relatives and other healthcare professionals. Targeted clinical observation, as well as thinking in terms of geriatric syndromes and unmet needs, can help to elicit physical findings despite limited cooperation. Pathological findings are indicators of impaired organ and functional systems and must be supplemented by a detailed examination. abstract_id: PUBMED:28042985 Enhancing elderly health examination effectiveness by adding physical function evaluations and interventions. This study aimed to assess the benefit of adding physical function evaluations and interventions to the routine elderly health examination. This was a quasi-experimental controlled trial. 404 elderly adults (aged 70 and over) scoring 3-6 on the Canadian Study of Health and Aging Clinical Frailty Scale Chinese In-Person Interview Version (CSHA-CFS) in a 2012 annual elderly health examination were enrolled. Both the control and experimental groups received the routine annual health examination, with the latter additionally provided with functional evaluations, exercise instruction, and nutrition education. 112 (84.8%) persons in the experimental group and 267 (98.2%) in the control group completed the study. CSHA-CFS performance of the experimental group was more likely to improve (odds ratio=9.50, 95% confidence interval (CI)=4.62-19.56) and less likely to deteriorate (OR=0.04, 95% CI=0.01-0.31) one year after the intervention. Within the experimental group, the Fried Frailty Index improvement percentage surpassed the deterioration percentage (29.5% vs. 0.9%, p&lt;0.001), five-meter walk speed rose from 1.0±0.2 to 1.2±0.2 m/s (p&lt;0.001), grip strength increased from 22.3±7.1 to 24.8±6.7 kg (p&lt;0.001), the Short-form Physical Performance Battery increased from 10.0±1.6 to 11.6±0.9 (p&lt;0.001), and the timed up-and-go test time decreased from 10.9±2.9 to 8.9±2.7 s (p&lt;0.001). However, no statistical difference was detected in composite adverse endpoints, including hospitalization, emergency department visits and falls, between the two groups, though the incidence was higher in the control group. Adding functional evaluations, exercise and nutrition interventions to the annual elderly health examination appeared to benefit the health of adults aged 70 years and older. abstract_id: PUBMED:17602927 The annual physical examination: important or time to abandon? The annual physical examination remains a popular format with both patients and providers, despite the lack of evidence that either a comprehensive examination or laboratory screening tests are indicated for healthy adults.
Patient desire for extensive testing and comprehensive examination, combined with provider belief that the physical examination both has proven value and can detect subclinical illness, has led to the continued pervasive practice of annual physical examinations in our country. The authors review the current forces behind the ongoing popularity of the annual physical examination and the current recommendations for preventive services in healthy adults, and provide thoughts on what the busy practicing clinician can focus on in the realm of proven preventive health. abstract_id: PUBMED:24423046 The use of annual physical examinations among the elderly in rural China: a cross-sectional study. Background: Periodic physical examination is considered helpful in preventing illness and promoting health among the elderly. Limited information is available about the use of annual physical examinations among the elderly in rural areas, however. This research explores the distribution characteristics of annual physical examination use and its determinants among people aged 60 or over in rural China. Methods: A cross-sectional study was undertaken to estimate the distribution characteristics of annual physical examination use and to collect data on sociodemographic characteristics, health knowledge level, and health communication channels. Participants were 1128 people aged 60 or over, randomly selected from four different provinces in East, Mid-East, Mid-West, and West China. Logistic regression determined the predictors of annual physical examination use. Results: Participants were predominantly aged 60-69 (44.1%) and 70-79 (42.0%). A total of 716 (63.5%) participants underwent annual physical examinations. Those who reported acquiring health knowledge via bulletin boards and village doctors had a higher probability of using annual physical examinations (OR = 3.15 and 1.53). The probability of civil servants/retirees having annual physical examinations was 2.16 times higher than that of farmers. Those who had an average level of health knowledge had a higher probability of using annual physical examinations than those at the below-average level (odds ratio: 2.07). Conclusion: The government and public health institutions should assist farmers to acquire the habit of having annual physical examinations. Traditional channels, such as bulletin boards, should be used to deliver health information. Village doctors should be supported in delivering health information to the elderly in rural areas. abstract_id: PUBMED:35787845 Should we abandon annual physical examination? - A meta-analysis of annual physical examination and all-cause mortality in adults based on observational studies. Several meta-analyses based on randomized clinical trial data have failed to find an association between the annual physical examination (APE) and reduced mortality; however, no comparable meta-analysis based on observational data exists. We conducted a meta-analysis of observational studies comparing APE versus non-APE in adults for all-cause mortality. English-language searches of four databases (PubMed, CINAHL, EMBASE, and Google Scholar) between 2000 and 2019 yielded seven observational studies that investigated APE versus non-APE in healthy adults in relation to all-cause mortality. Random effects models were used to calculate pooled hazard ratios and 95% confidence intervals (CI), and to incorporate variation between studies.
During follow-up periods that ranged from two to 25 years, there were 35,055 deaths among 633,957 participants. APE was significantly associated with a 45% lower hazard of all-cause mortality, with a pooled hazard ratio of 0.55 (95% CI 0.48 to 0.64, P &lt; 0.01) for all participants. This meta-analysis of seven observational studies from the past 20 years provides evidence of an association between APE and a lower hazard of all-cause mortality, a finding that contrasts with findings based on meta-analyses of randomized clinical trial data. Nonetheless, at present the available evidence about the effectiveness or ineffectiveness of APE on all-cause mortality still needs further study. Answer: Yes, the elderly appear to benefit from annual physical examinations (APE). Studies have shown that regular physical examinations can contribute to early detection and timely treatment of diseases, which is helpful in promoting healthy behaviors and preventing diseases (PUBMED:28535772). A study from Kaohsiung City, Taiwan, found that elderly subjects who received annual health examinations had lower mortality than those who did not, suggesting that those who utilized examination services had better survival probability (PUBMED:12202069). Additionally, a meta-analysis of observational studies found that APE was significantly associated with a 45% lower hazard of all-cause mortality among participants (PUBMED:35787845). Furthermore, adding physical function evaluations and interventions to routine elderly health examinations has been shown to benefit the health of adults aged 70 years and older, with improvements in physical function and a higher likelihood of improved health status (PUBMED:28042985). The physical examination, even in the absence of specific symptoms, is also beneficial for the physician-patient relationship and remains an important component of patient evaluation (PUBMED:29650068). However, it is important to note that the effectiveness of APE in reducing mortality has been debated, with some analyses based on randomized clinical trial data failing to find an association between APE and reduced mortality (PUBMED:35787845). Despite this, the observational data suggest a positive association between APE and lower mortality risk. Therefore, while the evidence is not entirely conclusive, the balance of the available research suggests that the elderly do benefit from annual physical examinations.
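For readers unfamiliar with the pooling step mentioned in the meta-analysis abstract above (PUBMED:35787845), the following Python sketch illustrates a standard DerSimonian-Laird random-effects pooling of hazard ratios. The per-study values are invented placeholders, and the published pooled estimate (HR 0.55, 95% CI 0.48-0.64) came from seven real studies whose exact model may differ; this is only a sketch of the general method.

# Minimal DerSimonian-Laird random-effects pooling of log hazard ratios.
# Study-level HRs and CI upper bounds below are invented placeholders.
import math

def pool_random_effects(hrs, ci_uppers):
    y = [math.log(hr) for hr in hrs]                     # log effect sizes
    se = [(math.log(u) - yi) / 1.96 for yi, u in zip(y, ci_uppers)]
    w = [1 / s ** 2 for s in se]                         # inverse-variance weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)              # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]             # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    half = 1.96 * math.sqrt(1 / sum(w_re))
    return math.exp(mu), math.exp(mu - half), math.exp(mu + half)

# Invented per-study hazard ratios and 95% CI upper bounds:
hr, lo, hi = pool_random_effects([0.50, 0.62, 0.55, 0.70], [0.70, 0.85, 0.80, 0.99])
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")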
Instruction: Should we cross the cross-links? Abstracts: abstract_id: PUBMED:29310773 Analysis of collagen and elastin cross-links. Fibrillar collagens represent the most abundant extracellular matrix proteins in vertebrates, providing tissues and organs with form, stability, and connectivity. For such mechanical functions, the formation of covalent intermolecular cross-linking between molecules is essential. This process, the final posttranslational modification during collagen biosynthesis, is initiated by conversion of specific lysine and hydroxylysine residues to the respective aldehydes by the action of lysyl oxidases. This conversion triggers a series of condensation reactions with the juxtaposed lysine-aldehyde, lysine, hydroxylysine, and histidine residues within the same and neighboring molecules, resulting in di-, tri-, and tetravalent cross-links. Elastin, another class of extracellular matrix protein, is also stabilized by the lysyl oxidase-mediated mechanism, but involves only lysine residues, leading to the formation of unique tetravalent cross-links. This chapter presents an overview of fibrillar collagen cross-linking, and the analytical methods for collagen and elastin cross-links we have developed. abstract_id: PUBMED:36453939 Ultrastretchable Composite Organohydrogels with Dual Cross-Links Enabling Multimodal Sensing. Building multiple cross-links or networks is a favorable way of diversifying the applications of hydrogels, and it is also applicable to organohydrogels prepared via solvent replacement. However, the situation becomes more complicated for organohydrogels due to the presence of the replaced solvents. Therefore, the correlations between the multiple cross-links and final performance need to be better understood for organohydrogels, which is vital for tailoring their inherent properties to expand final application scenarios. Polyacrylamide (PAM)/poly(vinyl alcohol) (PVA)/MXene composite organohydrogels with dual cross-links, namely, covalently cross-linked PAM chains as the primary network and physically cross-linked PVA/PAM chains with MXene particles as the secondary cross-links, were developed here for the study. The secondary cross-links play multiple roles: as sacrificial units they endow the system with ultrastretchability and an excellent strain-resistance effect, and as temperature-sensitive units they endow it with thermosensation ability and an outstanding temperature coefficient of resistance. Thus, the optimized sample can be used as a strain sensor with excellent environmental tolerance for detecting human motion, as a pressure sensor to probe compression with weak deformation, and as a thermal sensor to capture environmental temperature changes. This work provides valuable information on developing organohydrogels with superior performance for multimodal sensors. abstract_id: PUBMED:30840236 Measurement of Collagen Cross-Links from Tissue Samples by Mass Spectrometry. All tissues contain an extracellular matrix (ECM), which is constantly and dynamically remodeled in physiological or pathological processes, such as fibrosis or cancer. One of the key contributors to the establishment of a fibrotic state is the abnormal deposition of extracellular matrix and cross-linked proteins, in particular collagen, leading to tissue stiffening and disruption of organ function.
The precise and sensitive measurement of these cross-links by LC-MS/MS is a very powerful tool for providing a quantitative and qualitative analysis of fibrosis and is a key requirement in the study of this state, as well as in the development of drugs for this unmet clinical need. abstract_id: PUBMED:36093786 SpotLink enables sensitive and precise identification of site nonspecific cross-links at the proteome scale. A nonspecific cross-linker can provide distance restraints between surface residues of any type, which can be used to investigate protein structure and protein-protein interactions (PPIs). However, the vast number of potential combinations of cross-linked residues or sites obtained with such a cross-linker makes the data challenging to analyze, especially for proteome-wide applications. Here, we developed the SpotLink software for identifying site nonspecific cross-links at the proteome scale. Aided by its dual-pointer dynamic pruning algorithm and quality control of cross-linking sites, SpotLink identified &gt; 3000 cross-links from human cell samples within a few days. We demonstrated that SpotLink outperformed other approaches in terms of sensitivity and precision on a simulated succinimidyl 4,4'-azipentanoate dataset and on the condensin complexes with known structures. In addition, valuable PPIs were discovered in the datasets of the condensin complexes and the HeLa dataset, indicating the unique identification advantages of site nonspecific cross-linking. These findings reinforce the importance of SpotLink as a fundamental tool for site nonspecific cross-linking technologies. abstract_id: PUBMED:25431636 Synthesis of G-N2-(CH2)3-N2-G Trimethylene DNA interstrand cross-links. The synthesis of G-N2-(CH2)3-N2-G trimethylene DNA interstrand cross-links (ICLs) in a 5'-CG-3' and 5'-GC-3' sequence from oligodeoxynucleotides containing N2-(3-aminopropyl)-2'-deoxyguanosine and 2-fluoro-O6-(trimethylsilylethyl)inosine is presented. Automated solid-phase DNA synthesis was used for unmodified bases, and modified nucleotides were incorporated via their corresponding phosphoramidite reagent by a manual coupling protocol. The preparation of the phosphoramidite reagents for incorporation of N2-(3-aminopropyl)-2'-deoxyguanosine is reported. The high-purity trimethylene DNA interstrand cross-link product is obtained through a nucleophilic aromatic substitution reaction between the N2-(3-aminopropyl)-2'-deoxyguanosine and 2-fluoro-O6-(trimethylsilylethyl)inosine containing oligodeoxynucleotides. abstract_id: PUBMED:25606979 Synthesis of G-N(2)-(CH(2))(3)-N(2)-G Trimethylene DNA Interstrand Cross-Links. The synthesis of G-N(2)-(CH(2))(3)-N(2)-G trimethylene DNA interstrand cross-links (ICLs) in a 5'-CG-3' and 5'-GC-3' sequence from oligodeoxynucleotides containing N(2)-(3-aminopropyl)-2'-deoxyguanosine and 2-fluoro-O(6)-(trimethylsilylethyl)inosine is presented. Automated solid-phase DNA synthesis was used for unmodified bases, and modified nucleotides were incorporated via their corresponding phosphoramidite reagent by a manual coupling protocol. The preparation of the phosphoramidite reagents for incorporation of N(2)-(3-aminopropyl)-2'-deoxyguanosine is reported. The high-purity trimethylene DNA interstrand cross-link product is obtained through a nucleophilic aromatic substitution reaction between the N(2)-(3-aminopropyl)-2'-deoxyguanosine- and 2-fluoro-O(6)-(trimethylsilylethyl)inosine-containing oligodeoxynucleotides.
abstract_id: PUBMED:32858158 Human cataractous lenses contain cross-links produced by crystallin-derived tryptophanyl and tyrosyl radicals. Protein insolubilization, cross-linking and aggregation are considered critical to the development of lens opacity in cataract. However, the information about the presence of cross-links other than disulfides in cataractous lenses is limited. A potential role for cross-links produced from tryptophanyl radicals in cataract development is suggested by the abundance of the UV light-sensitive Trp residues in crystallin proteins. Here we developed an LC-MS/MS approach to examine the presence of Trp-Trp, Trp-Tyr and Tyr-Tyr cross-links and of peptides containing Trp-2H (-2.0156 Da) in the lenses of three patients diagnosed with advanced nuclear cataract. In the proteins of two of the lenses, we characterized intermolecular cross-links between βB2-Tyr153-Tyr104-βA3 and βB2-Trp150-Tyr139-βS. An additional intermolecular cross-link (βB2-Tyr61-Trp200-βB3) was present in the lens of the oldest patient. In the proteins of all three lenses, we characterized two intramolecular Trp-Trp cross-links (Trp123-Trp126 in βB1 and Trp81-Trp84 in βB2) and six peptides containing Trp-2H residues, which indicate the presence of additional Trp-Trp cross-links. Relevantly, we showed that similar cross-links and peptides with modified Trp-2H residues are produced in a time-dependent manner in bovine β-crystallin irradiated with a solar simulator. Therefore, different crystallin proteins cross-linked by crystallin-derived tryptophanyl and tyrosyl radicals are present in advanced nuclear cataract lenses, and similar protein modifications can be promoted by solar irradiation even in the absence of photosensitizers. Overall, the results indicate that a role for Trp-Tyr and Trp-Trp cross-links in the development of human cataract is possible and deserves further investigation. abstract_id: PUBMED:28845123 A simple statistical approach to model the time-dependent response of polymers with reversible cross-links. A new class of polymers characterized by dynamic cross-links is analyzed from a mechanical point of view. A thermodynamically consistent model is developed within the Lagrangian framework for polymers that can rearrange their internal cross-links. Such a class of polymers has the capability to reset its internal microstructure, and the microscopic remodeling mechanism leads to a behavior similar to that of an elastic fluid. These materials can potentially be used in several fields, such as biomechanics, smart materials, and morphing materials, to cite a few. However, a comprehensive understanding is necessary before we can predict their behavior and perform material design for advanced technologies. The proposed formulation, following a statistical approach adapted from classical rubber elasticity, is based on the evolution of the molecular chains' end-to-end distance distribution function. This distribution is allowed here to evolve with time, starting from an initial stress-free state and depending on the deformation history and the cross-link attachment/detachment kinetics. Some simple examples are finally presented and discussed to illustrate the capability and generality of the developed approach. abstract_id: PUBMED:28246932 Identification of Pyridinoline Trivalent Collagen Cross-Links by Raman Microspectroscopy.
Intermolecular cross-linking of bone collagen is intimately related to the way collagen molecules are arranged in a fibril, imparts certain mechanical properties to the fibril, and may be involved in the initiation of mineralization. Raman microspectroscopy allows the analysis of minimally processed bone blocks and provides simultaneous information on both the mineral and organic matrix (mainly type I collagen) components, with a spatial resolution of ~1 μm. The aim of the present study was to validate Raman spectroscopic parameters describing one of the major mineralizing type I trivalent cross-links, namely pyridinoline (PYD). To achieve this, a series of collagen cross-linked peptides with known PYD content (as determined by HPLC analysis), human bone, porcine skin, predentin and dentin animal model tissues were analyzed by Raman microspectroscopy. The results of the present study confirm that it is feasible to monitor PYD trivalent collagen cross-links by Raman spectroscopic analysis in mineralized tissues, exclusively through a Raman band at ~1660 wavenumbers. This allows determination of the relative PYD content in undecalcified bone tissues with a spatial resolution of ~1 μm, thus enabling correlations with histologic and histomorphometric parameters. abstract_id: PUBMED:33601275 Formation of protein cross-links by singlet oxygen-mediated disulfide oxidation. Cross-links formed within and between proteins are a major cause of protein dysfunction, and are postulated to drive the accumulation of protein aggregates in some human pathologies. Cross-links can be formed from multiple residues and can be reversible (usually sulfur-sulfur bonds) or irreversible (typically carbon-carbon or carbon-heteroatom bonds). Disulfides formed from oxidation of two Cys residues are widespread, with these formed either deliberately, via enzymatic reactions, or as a result of unintended oxidation reactions. We have recently demonstrated that new protein-glutathione mixed disulfides can be formed through oxidation of a protein disulfide to a thiosulfinate, and subsequent reaction of this species with glutathione. Here we investigate whether similar reactions occur between an oxidized protein disulfide and a Cys residue on a second protein, to give novel protein cross-links. Singlet oxygen (1O2)-mediated oxidation of multiple proteins (α-lactalbumin, lysozyme, beta-2-microglobulin, C-reactive protein), and subsequent incubation with the Cys-containing protein glyceraldehyde-3-phosphate dehydrogenase (GAPDH), generates inter-protein cross-links as detected by SDS-PAGE, immunoblotting and mass spectrometry (MS). The cross-link yield is dependent on the 1O2 concentration, the presence of the original protein disulfide bond, and the free Cys on GAPDH. MS with 18O-labeling has allowed identification of the residues involved in some cases (e.g. Cys25 from the Cys25-Cys80 disulfide in beta-2-microglobulin, with Cys149 or Cys244 of GAPDH). The formation of these cross-links results in a loss of GAPDH enzymatic activity. These data provide 'proof-of-concept' for a novel mechanism of protein cross-link formation which may help rationalize the accumulation of cross-linked proteins in multiple human pathologies. Answer: The question "Should we cross the cross links?" is somewhat ambiguous without additional context.
However, based on the provided abstracts, we can infer that the question may relate to the role of cross-links in various biological and synthetic materials and whether manipulating or targeting these cross-links is beneficial or necessary in certain applications or studies. Cross-links are crucial for the structural integrity and function of various materials. In biological systems, cross-links in collagen and elastin provide stability and connectivity to tissues and organs (PUBMED:29310773). Abnormal deposition and cross-linking of extracellular matrix proteins, such as collagen, can lead to pathological conditions like fibrosis, making the measurement of these cross-links important for understanding and treating such diseases (PUBMED:30840236). In the context of cataracts, cross-links produced by crystallin-derived radicals are implicated in lens opacity, suggesting that understanding these cross-links could be important for addressing this condition (PUBMED:32858158). In synthetic materials, cross-links can be engineered to create polymers with desirable properties, such as ultrastretchability and sensing capabilities (PUBMED:36453939), or to model the time-dependent response of polymers with reversible cross-links (PUBMED:28845123). Cross-linking can also be a tool in molecular biology, as demonstrated by the synthesis of DNA interstrand cross-links for studying DNA repair mechanisms (PUBMED:25431636, PUBMED:25606979). Analytical tools like Raman microspectroscopy can identify specific types of cross-links, such as pyridinoline in bone (PUBMED:28246932; the arithmetic behind such a relative band measurement is sketched after this answer), and software like SpotLink can identify proteome-scale nonspecific cross-links, which are useful for studying protein structure and interactions (PUBMED:36093786). Overall, whether to "cross" (i.e., target, manipulate, or study) cross-links depends on the context. In materials science and engineering, creating and optimizing cross-links can lead to materials with novel properties. In biological systems, understanding and potentially targeting cross-links can be crucial for diagnosing and treating diseases. Thus, cross-links are a significant focus in both research and application across various fields.
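A side note on the Raman abstract above (PUBMED:28246932), as referenced in the answer: the study reads relative PYD content from a band at ~1660 wavenumbers, but the abstract does not spell out the numerical processing. The Python sketch below only illustrates the usual arithmetic for a relative band intensity, integrating the spectrum over a window centred on the band and normalising to a reference band; the window limits and the 1450 cm^-1 reference band are assumptions, not the study's protocol, and baseline correction is omitted for brevity.

import numpy as np

def band_area(wavenumbers, intensities, lo, hi):
    # trapezoidal area of the spectrum between two wavenumbers (cm^-1)
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    x, y = wavenumbers[mask], intensities[mask]
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

def relative_pyd(wavenumbers, intensities):
    pyd = band_area(wavenumbers, intensities, 1640, 1680)  # ~1660 cm^-1 band
    ref = band_area(wavenumbers, intensities, 1430, 1470)  # assumed reference band
    return pyd / ref

# toy spectrum: two Gaussian-shaped bands on a zero baseline
wn = np.linspace(1400, 1700, 601)
spec = np.exp(-((wn - 1660) / 12.0) ** 2) + 0.8 * np.exp(-((wn - 1450) / 12.0) ** 2)
print(round(relative_pyd(wn, spec), 2))  # about 1.25 for this synthetic spectrum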
Instruction: Routine contrast imaging of low pelvic anastomosis prior to closure of defunctioning ileostomy: is it necessary? Abstracts: abstract_id: PUBMED:18368457 Routine contrast imaging of low pelvic anastomosis prior to closure of defunctioning ileostomy: is it necessary? Purpose: The purpose of the study was to determine the utility of routine contrast enema prior to ileostomy closure and its impact on patient management in patients with a low pelvic anastomosis. Material And Methods: Two hundred eleven patients had a temporary loop ileostomy constructed to protect a low colorectal or coloanal anastomosis following low anterior resection for cancer (57%) or other disease (12%) or to protect an ileal pouch-anal anastomosis following restorative proctocolectomy (31%). All patients were evaluated by physical examination, proctoscopy, and water-soluble contrast enema prior to ileostomy closure. Imaging results were correlated with the clinical situation to determine the effects on patient management. Results: The mean time from ileostomy creation to closure was 15.6 weeks. Overall, 203 patients (96%) had an uncomplicated course. Eight patients (4%) developed an anastomotic leak, seven of which were diagnosed clinically and confirmed radiographically before planned ileostomy closure. Resolution of the leak was confirmed by follow-up contrast enema. One patient, whose pouchogram revealed a normal anastomosis, clinically developed a leak after ileostomy closure. It is important to note that routine contrast enema examination did not reveal an anastomotic leak or stricture that was not already suspected clinically. Conclusions: All patients who developed an anastomotic leak in this study were diagnosed clinically, and the diagnosis was confirmed by selective use of radiographic tests. Routine contrast enema evaluation of low pelvic anastomoses before loop ileostomy closure did not provide any additional information that changed patient management. The utility of this routine practice should be questioned. abstract_id: PUBMED:33284668 Evaluation of Pelvic Anastomosis by Endoscopic and Contrast Studies Prior to Ileostomy Closure: Are Both Necessary? A Single Institution Review. Contrast enema is the gold standard technique for evaluating a pelvic anastomosis (PA) prior to ileostomy closure. With the increasing use of flexible endoscopic modalities, contrast studies may be unnecessary. The objective of this study is to compare flexible endoscopy and contrast studies for anastomotic inspection prior to defunctioning stoma reversal. Patients with a protected PA undergoing ileostomy closure between July 2014 and June 2019 at our institution were retrospectively identified. Demographics and clinical outcomes in patients undergoing preoperative evaluation with endoscopic and/or contrast studies were analyzed. We identified 207 patients undergoing ileostomy closure. According to the surgeon's preference, 91 patients underwent only flexible endoscopy (FE) and 100 patients underwent both endoscopic and contrast evaluation (FE + CE) prior to reversal. There was no significant difference in pelvic anastomotic leak (2.2% vs. 1%), anastomotic stricture (1.1% vs. 6%), pelvic abscess (2.2% vs. 3.0%), or postoperative anastomotic complications (4.4% vs. 9%) between groups FE and FE + CE (P > .05). Flexible endoscopy alone appears to be an acceptable technique for anastomotic evaluation prior to ileostomy closure.
Further studies are needed to determine the effectiveness of different diagnostic modalities for pelvic anastomotic inspection. abstract_id: PUBMED:22880182 Routine barium enema prior to closure of defunctioning ileostomy is not necessary. Purpose: The use of barium enemas to confirm the anastomotic integrity prior to ileostomy closure is still controversial. The purpose of the study was to determine the utility of routine contrast enema prior to ileostomy closure and its impact on patient management in patients with a low pelvic anastomosis. Methods: One hundred forty-five patients had a temporary loop ileostomy constructed to protect a low colorectal or coloanal anastomosis following low anterior resection for rectal cancer. All patients were evaluated by physical examination, proctoscopy, and barium enema prior to ileostomy closure. Results: The median time from ileostomy creation to closure was 8 months. Five (3.5%) of the 144 patients were found to have clinically relevant strictures at the colorectal anastomosis on routine barium enema. One patient (0.7%) showed an anastomotic leak on their barium enema. Overall, 141 patients (97.9%) had an uncomplicated postoperative course. Postoperative complications occurred in three patients (2.1%). None of them showed abnormal barium enema findings, which suggested that routine contrast enema examination did not predict postoperative complications. Conclusion: Routine barium enema evaluation of low pelvic anastomoses before loop ileostomy closure did not provide any additional information for postoperative colorectal anastomotic complication. abstract_id: PUBMED:28399874 Early closure of defunctioning stoma increases complications related to stoma closure after concurrent chemoradiotherapy and low anterior resection in patients with rectal cancer. Background: After a low anterior resection, creating a defunctioning stoma is vital for securing the anastomosis in low-lying rectal cancer patients receiving concurrent chemoradiotherapy. Although it decreases the complication and reoperation rates associated with anastomotic leakage, the complications that arise before and after stoma closure should be carefully evaluated and managed. Methods: This study enrolled 95 rectal cancer patients who received neoadjuvant concurrent chemoradiotherapy and low anterior resection with anastomosis of the bowel between July 2010 and November 2012. A defunctioning stoma was created in 63 patients during low anterior resection and in another three patients after anastomotic leakage. Results: The total complication rate from stoma creation to closure was 36.4%. Ileostomy led to greater renal insufficiency than colostomy did and significantly increased the readmission rate (all p < 0.05). The complication rate related to stoma closure was 36.0%. Patients with ileostomy had an increased risk of developing complications (p = 0.017), and early closure of the defunctioning stoma yielded a higher incidence of morbidity (p = 0.006). Multivariate analysis revealed that a time to closure of ≤109 days was an independent risk factor for developing complications (p = 0.007). Conclusions: The optimal timing of stoma reversal is at least 109 days after stoma construction in rectal cancer patients receiving concurrent chemoradiotherapy and low anterior resection. abstract_id: PUBMED:30348506 Contrast radiography before diverting stoma closure in rectal cancer is not necessary on a routine basis.
Introduction: Diverting stomas are recommended in patients with low anterior resection and risk factors in order to reduce the severity of anastomotic leaks. Usually, a radiology study is performed prior to the closure of the stoma to detect subclinical leaks. The aim of the present study is to assess the clinical utility of the radiology study. Methods: A prospective cohort study of patients undergoing anterior rectal resection for rectal cancer and those who underwent stoma closure without contrast enema. This study was carried out after a retrospective review of radiology study results prior to the closure of the stoma in patients operated on from 2007 to 2011. Results: Eighty-six patients met the study criteria. Thirteen patients (15.1%) presented pelvic sepsis. Contrast enema before stoma closure was pathological in 8 patients (9.3%). Five out of the 13 patients with pelvic sepsis had a pathological radiological study, compared to only 3 out of the 73 patients without intra-abdominal complications after rectal resection (38.5% vs. 4.1%; P=.001). Based on these results, we conducted a prospective study omitting the contrast enema in patients with no postoperative complications. Thirty-eight patients had their stoma closed without a prior radiology study. None of the patients presented pelvic sepsis. Conclusions: Radiology studies of the colorectal anastomosis before reconstruction can safely be omitted in patients without pelvic sepsis after the previous rectal resection. abstract_id: PUBMED:28834982 Clinical Value of Contrast Enema Prior to Ileostomy Closure. Purpose: To determine the value of routine contrast enema of loop ileostomy before elective ileostomy closure regarding the influence on clinical decision-making. Materials and Methods: Retrospective analysis of contrast enemas at a tertiary care center between 2005 and 2011. Patients were divided into two groups: Group I with ileostomy reversal, group II without ileostomy closure. Patient-related parameters (underlying disease, operation method) and parameters based on the findings (stenosis, leakage of anastomosis, incontinence) were evaluated. Results: Of a total of 252 patients analyzed, ileostomy closure was performed in 89% (group I, n = 225). In 15% the radiologic report was the only diagnostic modality needed for the therapy decision; in 36% the contrast enema and one or more other diagnostic methods were decisive. In 36% the radiological report of the contrast imaging was not relevant for the decision at all. In 11% (group II, n = 27) no ileostomy closure was performed. In this group, in 11% the radiological report of the contrast enema was the only decision factor for not performing the ileostomy reversal. In 26% one or more examinations were necessary. In 26% the result of the contrast examination was not relevant. Conclusion: The radiologic contrast imaging of loop ileostomy alone plays a minor role in complex surgical decision-making before planned reversal, but is important as a first imaging method in detecting complications and often leads to additional examinations. Key points: Contrast enema of loop ileostomy before planned ileostomy closure is a frequently performed examination. There are no general guidelines that give further recommendations on decision-making when planning ileostomy closure. The radiologic contrast imaging of loop ileostomy alone plays a minor role in decision-making before planned reversal, but is important as a first imaging method.
abstract_id: PUBMED:30483951 The application of defunctioning stomas after low anterior resection of rectal cancer. Defunctioning stomas are frequently used by colorectal surgeons after unsatisfactory anastomosis. The primary purpose of constructing a defunctioning stoma is to prevent an anastomotic leakage or to alleviate the detrimental consequences of it. However, the construction of defunctioning stomas is not free and is associated with adverse impacts on the patient. Stoma-related complications can develop in different stages and can impair a patient's quality of life. Furthermore, one in every four to six defunctioning stomas turns into a non-closure stoma. Since no definite indications for the creation of a defunctioning stoma are available, surgeons have to carefully weigh their benefits against their adverse effects. Thus, the precise selection of patients who should undergo the creation of a defunctioning stoma is of great importance, and an alternative method for preventing anastomotic leakage is needed. abstract_id: PUBMED:24222144 Transumbilical defunctioning ileostomy: A new approach for patients at risk of anastomotic leakage after laparoscopic low anterior resection. Background: The use of a protective defunctioning stoma in rectal cancer surgery has been reported to reduce the rates of reoperation for anastomotic leakage, as well as mortality after surgery. However, a protective defunctioning stoma is not often used in cases other than low rectal cancer because of the need for stoma closure later, and hesitation by patients to have a stoma. We outline a novel and patient-friendly procedure with an excellent cosmetic outcome. This procedure uses the umbilical fossa for placement of a defunctioning ileostomy followed by a simple umbilicoplasty for ileostomy closure. Patients And Methods: This study included a total of 20 patients with low rectal cancer who underwent a laparoscopic low anterior resection with defunctioning ileostomy (10 cases with a conventional ileostomy in the right iliac fossa before March 2012, and 10 subsequent cases with ileostomy at the umbilicus) at the Jikei University Hospital in Tokyo from August 2011 to January 2013. The clinical characteristics of the two groups were compared: operative time, blood loss, length of hospital stay and postoperative complications of the initial surgery, as well as the stoma closure procedure. Results: There were no differences between the groups in the median operative time for initial surgery (248 min vs. 344 min), median blood loss during initial surgery (0 ml vs. 115 ml), and median hospital stay after initial surgery (13 days vs. 16 days). Complication rates after the initial surgery were similar. There were no differences between the groups in median operative time for stoma closure (99 min vs. 102 min), median blood loss during stoma closure (7.5 ml vs. 10 ml), and median hospital stay after stoma closure (8 days in both groups). Complications after stoma closure such as wound infection and intestinal obstruction were comparable. Thus, no significant differences in any factor were found between the two groups. Conclusion: The transumbilical protective defunctioning stoma is a novel solution to anastomotic leakage after laparoscopic rectal cancer surgery and is more patient-friendly than conventional procedures in light of the cosmetic outcome.
abstract_id: PUBMED:32908969 Risk factors for nonclosure of defunctioning stoma and stoma-related complications among low rectal cancer patients after sphincter-preserving surgery. Background: Defunctioning stoma is widely used to reduce anastomotic complications in rectal cancer surgery. However, the complications of stoma and stoma reversal surgery should not be underestimated. Furthermore, in some patients, stoma reversal fails. Here, we investigated the complications of defunctioning stoma surgery and subsequent reversal surgery and identified risk factors associated with failure to reverse the stoma. Methods: In total, 154 patients who simultaneously underwent low anterior resection and defunctioning stoma were reviewed. Patients were divided into two groups according to whether their stoma was reversed or not. The reasons patients received a defunctioning stoma, the stoma-related complications they experienced, and the risk factors for failure to reverse the stoma were analysed. Results: The mean follow-up time was 47.54 (range 4.0-164.0) months. During follow-up, 19.5% of the patients suffered stoma-related long-term complications. Only 79 (51.3%) patients had their stomas reversed. The morbidity of complications after reversal surgery was 45.6%, and these mainly consisted of incision-related complications. Multivariate analyses showed that pre-treatment comorbidity (HR = 3.17, 95% CI 1.27-7.96, P = 0.014), postoperative TNM stage (HR = 2.55, 95% CI 1.05-6.18, P = 0.038), neoadjuvant therapy (HR = 2.75, 95% CI 1.07-7.05, P = 0.036), anastomosis-related complications (HR = 4.52, 95% CI 1.81-11.29, P = 0.001), and disease recurrence (HR = 24.83, 95% CI 2.90-213.06, P = 0.003) were significant independent risk factors for a defunctioning stoma to be permanent. Conclusions: Defunctioning stoma is an effective method to reduce symptomatic anastomotic leakage, but the stoma itself and its reversal procedure are associated with high morbidity of complications, and many defunctioning stomas eventually become permanent. Therefore, surgeons should carefully assess patients preoperatively and perform defunctioning stomas in very high-risk patients. In addition, doctors should perform stoma reversal surgery more actively to prevent temporary stomas from becoming permanent. abstract_id: PUBMED:33570353 Risk factors associated with non-closure of defunctioning stoma in patients with rectal cancer: univariate and multivariate analysis. Objective: To identify the risk factors associated with non-closure of defunctioning stoma in patients with rectal cancer. Material And Methods: A retrospective analysis included patients who underwent surgical treatment at the Ryzhikh National Medical Research Centre of Coloproctology for the period from March 2017 to August 2019. The inclusion criterion was anterior or low anterior resection followed by anastomosis and a preventive stoma. Univariate and multivariate analyses examined 28 factors to identify risk factors for non-closure of the defunctioning stoma. Results: There were 246 patients with rectal cancer. Intraoperative fluorescence angiography was applied in 145 cases to assess blood supply within the anastomosis and reduce the risk of anastomotic leakage. According to multivariate analysis, only two factors had a significant influence on non-closure of the preventive stoma: any grade of anastomotic leakage (OR 6.5; 95% CI 2.2-18.8, p=0.001) and rectal cancer stage IV (OR 7.2; 95% CI 1.9-27.6, p=0.004).
Conclusion: According to our data, a permanent stoma is observed in 15% of patients. Answer: The necessity of routine contrast imaging of low pelvic anastomosis prior to the closure of defunctioning ileostomy is debated in the literature. Several studies have questioned the utility of this practice. One study found that routine contrast enema evaluation of low pelvic anastomoses before loop ileostomy closure did not provide any additional information that changed patient management, as all patients who developed an anastomotic leak were diagnosed clinically, and the diagnosis was confirmed by selective use of radiographic tests (PUBMED:18368457). Another study concluded that routine barium enema evaluation of low pelvic anastomoses before loop ileostomy closure did not predict postoperative complications, suggesting that this practice may not be necessary (PUBMED:22880182). A comparison between flexible endoscopy and contrast studies for anastomotic inspection prior to defunctioning stoma reversal found that flexible endoscopy alone appears to be an acceptable technique for anastomotic evaluation, with no significant difference in anastomotic leak, stricture, pelvic abscess, or postoperative anastomotic complications between groups that underwent only flexible endoscopy versus those that underwent both endoscopic and contrast evaluation (PUBMED:33284668). Furthermore, a prospective cohort study suggested that radiology studies of the colorectal anastomosis before reconstruction can safely be omitted in patients without pelvic sepsis after the previous rectal resection (PUBMED:30348506); the 2x2 comparison behind that finding is reworked in the sketch following this answer. Another retrospective analysis indicated that the radiologic contrast imaging of loop ileostomy plays a minor role in complex surgical decision-making before planned reversal (PUBMED:28834982). In summary, the evidence suggests that routine contrast imaging may not be necessary prior to the closure of defunctioning ileostomy, particularly in patients without clinical indications of complications. However, the decision should be individualized based on the patient's clinical situation, and further studies may be needed to determine the effectiveness of different diagnostic modalities for pelvic anastomotic inspection.
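As a worked check of the key 2x2 comparison in PUBMED:30348506 above (a pathological enema in 5 of 13 patients with pelvic sepsis versus 3 of 73 patients without complications), the reported counts can be run through a contingency test. The abstract does not state which test produced its P = .001, so the use of Fisher's exact test below is an assumption; it reproduces the same order of significance.

from scipy.stats import fisher_exact

# rows: pelvic sepsis yes / no; columns: pathological / normal contrast enema
table = [[5, 8],    # 5 of 13 patients with pelvic sepsis
         [3, 70]]   # 3 of 73 patients without complications
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, two-sided p = {p_value:.4f}")  # OR about 14.6, p well below 0.05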
Instruction: Can pelvic floor muscle training improve sexual function in women with pelvic organ prolapse? Abstracts: abstract_id: PUBMED:25401779 Can pelvic floor muscle training improve sexual function in women with pelvic organ prolapse? A randomized controlled trial. Introduction: Pelvic floor muscle training (PFMT) has level 1 evidence of reducing the size and symptoms associated with pelvic organ prolapse (POP). There is scant knowledge, however, regarding whether PFMT has an effect on sexual function. Aim: The aim of the trial was to evaluate the effect of PFMT on sexual function in women with POP. Methods: In this randomized controlled trial, 50 women were randomized to an intervention group (6 months of PFMT and lifestyle advice) and 59 women were randomized to a control group (lifestyle advice only). Main Outcome Measures: Participants completed a validated POP-specific questionnaire to describe frequency and bother of prolapse, bladder, bowel, and sexual symptoms and answered a semi-structured interview. Results: No significant change in the number of women being sexually active was reported. There were no significant differences between groups regarding change in satisfaction with frequency of intercourse. Interview data revealed that 19 (39%) of the women in the PFMT group experienced improved sexual function vs. two (5%) in the control group (P<0.01). Specific improvements reported by some of the women were increased control, strength and awareness of the pelvic floor, improved self-confidence, sensation of a "tighter" vagina, improved libido and orgasms, resolution of pain with intercourse, and heightened sexual gratification for partners. Women who described improved sexual function demonstrated the greatest increases in pelvic floor muscle (PFM) strength (mean 16 ± 10 cmH2O) and endurance (mean 150 ± 140 cmH2O·s) (P<0.01). Conclusion: PFMT can improve sexual function in some women. Women reporting improvement in sexual function demonstrated the greatest increase in PFM strength and endurance. abstract_id: PUBMED:33495013 Early postpartum biofeedback assisted pelvic floor muscle training in primiparous women with second degree perineal laceration: Effect on sexual function and lower urinary tract symptoms. Objective: To evaluate the short-term effect of routine early postpartum electromyographic biofeedback assisted pelvic floor muscle training on sexual function and lower urinary tract symptoms. Materials And Methods: From December 2016 to November 2017, primiparous women with vaginal delivery, who experienced non-extended second-degree perineal laceration, were invited to participate. Seventy-five participants were assigned to a pelvic floor muscle training (PFMT) group or a control group. Women in the PFMT group received supervised biofeedback-assisted pelvic floor muscle training at the 1st week and 4th week postpartum. Exercises were performed at home with the same protocol until 6 weeks postpartum. The Pelvic Organ Prolapse Urinary Incontinence Sexual Questionnaire (PISQ-12) and the Urinary Distress Inventory short form questionnaire (UDI-6) were used to evaluate sexual function and lower urinary tract symptoms, respectively, immediately postpartum and at 6 weeks, 3 months, and 6 months postpartum. Results: Forty-five women (23 in the PFMT group, 22 in the control group) completed all questionnaires at 6 months postpartum.
For overall sexual function and the three sexual functional domains, no statistically significant difference was found in PISQ scores from baseline to 6 weeks, 3 months, and 6 months postpartum between the PFMT and control groups. For postpartum lower urinary tract symptoms, all symptoms gradually improved over time for both groups without a statistically significant difference between groups. Conclusion: Our study showed that supervised biofeedback-assisted pelvic floor muscle training started routinely at one week postpartum did not provide additional improvement in postpartum sexual function and lower urinary tract symptoms. abstract_id: PUBMED:28913148 The effect of pelvic organ prolapse type on sexual function, muscle strength, and pelvic floor symptoms in women: A retrospective study. Objective: This retrospective research was planned to investigate the effect of pelvic organ prolapse (POP) type on sexual function, muscle strength, and pelvic floor symptoms in symptomatic women. Materials And Methods: Data on POP type and stages as assessed using the Pelvic Organ Prolapse-Quantification system of 721 women who presented to the women's health unit between 2009 and 2016 were collected retrospectively. POP types were recorded as asymptomatic, anterior, apical, and posterior compartment prolapses. Sexual function was assessed using the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire short-form (PISQ-12), pelvic floor muscle strength was assessed through vaginal pressure measurement, and pelvic floor symptoms and quality of life were assessed using the Pelvic Floor Distress Inventory-20 (PFDI-20). Results: Among 168 women who met the inclusion criteria, 96 had anterior compartment prolapses, 20 had apical compartment prolapses, 16 had posterior compartment prolapses, and 36 women were asymptomatic. There was no difference between the groups in their PISQ-12 total and subscale scores, PFDI-20 total and two subscale (colorectal/anal, urinary) scores, and muscle strength (p>0.05). In the Pelvic Organ Prolapse Distress Inventory-6, another subscale of the PFDI-20, it was determined that there was a difference between asymptomatic women and those with anterior compartment prolapses (p=0.044) and apical compartment prolapses (p=0.011). Conclusion: This research found that POP type did not affect sexual function, muscle strength, or colorectal and urinary symptoms in our cohort. There were more prolapse symptoms and complaints in women with anterior and apical compartment prolapses. abstract_id: PUBMED:31843420 Perioperative pelvic floor muscle training did not improve outcomes in women undergoing pelvic organ prolapse surgery: a randomised trial. Question: In women undergoing surgery for pelvic organ prolapse (POP), what is the average effect of the addition of perioperative pelvic floor muscle training on pelvic organ prolapse symptoms, pelvic floor muscle strength, quality of life, sexual function and perceived improvement after surgery? Design: Randomised controlled trial with concealed allocation, blinded assessors, and intention-to-treat analysis. Participants: Ninety-six women with an indication for POP surgery. Intervention: The experimental group received a 9-week pelvic floor muscle training protocol with four sessions before the surgery and seven sessions after the surgery. The control group received surgery only. Outcome Measures: Symptoms were assessed using the Pelvic Floor Distress Inventory (PFDI-20), which is scored from 0 'unaffected' to 300 'worst affected'.
Secondary outcomes were assessed using vaginal manometry, validated questionnaires and the Patient Global Impression of Improvement, which is scored from 1 'very much better' to 7 'very much worse'. All participants were evaluated 15 days before surgery, and at Days 40 and 90 after surgery. Results: There was no substantial difference in POP symptoms between the experimental and control groups at Day 40 (31 (SD 24) versus 38 (SD 42), adjusted mean difference -6, 95% CI -25 to 13) or Day 90 (27 (SD 27) versus 33 (SD 33), adjusted mean difference -4, 95% CI -23 to 14). The experimental group perceived marginally greater global improvement than the control group; mean difference -0.4 (95% CI -0.8 to -0.1) at Day 90. However, the effect of additional perioperative pelvic floor muscle training was estimated to be too small to be considered worthwhile for any other secondary outcomes. Conclusion: In women undergoing POP surgery, additional perioperative pelvic floor muscle training had negligibly small effects on POP symptoms, pelvic floor muscle strength, quality of life or sexual function. Trial Registration: ReBEC, RBR-29kgz5. abstract_id: PUBMED:36078788 Effect of Pelvic Floor Workout on Pelvic Floor Muscle Function Recovery of Postpartum Women: Protocol for a Randomized Controlled Trial. Background: There is a risk of pelvic floor dysfunction (PFD) from childbirth. Many clinical guidelines recommend pelvic floor muscle training (PFMT) as the conservative treatment for PFD because pelvic floor muscles (PFMs) play a crucial role in the development of PFD. However, there is disagreement about the method and intensity of PFM training and the relevant measurements. To pilot the study in PFM training, we designed a Pelvic Floor Workout (PEFLOW) for women to train their pelvic floor through whole-body exercises, and we planned a trial to evaluate its effectiveness by comparing the outcomes from a group of postpartum women who perform PEFLOW at home under professional guidance online with those of a control group. Methods/design: The randomized controlled trial was projected to be conducted from November 2021 to March 2023. A total of 260 postpartum women would be recruited from the obstetrics departments of the study hospital, and women would be eligible for participation and randomized into experimental or control groups (EG/CG) if their PFM strength was graded below grade 3 on the Modified Oxford grading Scale (MOS). Women in EG would perform a 12-week PEFLOW online under the supervision and guidance of a physiotherapist, while women in CG would have no interventions. Assessments would be conducted at enrollment, post-intervention (for EG) or the 18th to 24th week postpartum (for CG), and 1 year postpartum. Assessment would be performed in terms of pelvic floor symptoms, including MOS, cough stress test, urinary leakage symptoms, pelvic organ prolapse quantitation (POP-Q), and vaginal relaxation; clinic examinations including pelvic floor electrophysiological test, pelvic floor ultrasound and spine X-ray; overall body tests including trunk endurance test, handgrip test and body composition test; and questionnaires including the International Physical Activity Questionnaire Score-Short Form (IPAQ-SF), Pelvic Floor Distress Inventory Questionnaire-20 (PFDI-20), Pelvic Floor Impact Questionnaire-7 (PFIQ-7), the 6-item Female Sexual Function Index (FSFI-6), and the Pittsburgh Sleep Quality Index (PSQI).
Primary analysis will be performed to test our main hypothesis that PEFLOW is effective with respect to strengthening PFM strength. Discussion: This trial will demonstrate that pelvic floor care is accessible to most women, and clinical practice on PFD may change should this study find that the online PEFLOW approach is effective in improving PFMs. Trial Registration: ClinicalTrials.gov, NCT05218239. abstract_id: PUBMED:26055700 The Impact of Pelvic Floor Disorders and Pelvic Surgery on Women's Sexual Satisfaction and Function. Pelvic floor disorders have a significant impact on women's daily lives. Sexual health, which includes sexual satisfaction and function, can be altered by pelvic floor disorders and pelvic surgery. This article reviews common pelvic floor disorders (pelvic organ prolapse, urinary and fecal incontinence) and the effect they have on sexual satisfaction and function. Associations between sexual function and pelvic floor disorders are described, as are the relationships between sexual function and pelvic surgery. Women of all ages need to know their options and understand the impact pelvic surgery can have on sexual satisfaction, function, and activity. abstract_id: PUBMED:25921509 Pelvic floor muscle training and pelvic floor disorders in women. Our goal is to provide an update on the results of pelvic floor rehabilitation in the treatment of urinary incontinence and genital prolapse symptoms. Pelvic floor muscle training allows a reduction of urinary incontinence symptoms. Pelvic floor muscle contractions supervised by a healthcare professional allow cure in half of cases of stress urinary incontinence. Viewing this contraction through biofeedback improves outcomes, but this effect could also be due to a more intensive and prolonged program with the physiotherapist. The place of electrostimulation remains unclear. The results obtained with vaginal cones are similar to pelvic floor muscle training with or without biofeedback or electrostimulation. It is not known whether pelvic floor muscle training has an effect after one year. In case of stress urinary incontinence, supervised pelvic floor muscle training avoids surgery in half of the cases at 1-year follow-up. Pelvic floor muscle training is the first-line treatment of post-partum urinary incontinence. Its preventive effect is uncertain. Pelvic floor muscle training may reduce the symptoms associated with genital prolapse. In conclusion, pelvic floor rehabilitation supervised by a physiotherapist is an effective short-term treatment to reduce the symptoms of urinary incontinence or pelvic organ prolapse. abstract_id: PUBMED:31521573 Does Surgical Approach in Pelvic Floor Repair Impact Sexual Function in Women? Introduction: Surgical routes used to correct complex pelvic floor disorders (CPFDs) may have a negative impact on women's sexual function. Currently, there is no evidence concerning the impact of a specific surgical procedure on postoperative sexual function in women. Aim: The aim of this study was to compare an abdominal approach with rectopexy and sacrocolpopexy to a perineal procedure with abdominal rectopexy, regarding female sexual function in cases of CPFDs. Methods: Women who were operated on for CPFDs between January 2003 and June 2010 were retrospectively asked to answer the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire-12, the Miller Score of Incontinence, and a urinary incontinence frequency score.
We also questioned them about their sexual function and satisfaction before and after the operation using visual analogue scores. Main Outcome Measure: We compared the Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire-12 before and after the surgery in both groups, and we made an intragroup comparison. Results: There were 334 women identified, but only 51 could be included. Globally, we found no statistically significant differences in terms of sexual function before and after surgery between the two groups. Intragroup comparison demonstrated that, within the perineal approach group, patients experienced a decrease in their sexual arousal after the procedure. The choice of surgical route for pelvic floor disorders does not seem to have an impact on the results of postoperative sexual function in women. This study adds to the limited literature on sexual outcomes of surgery for CPFD. It is limited principally due to its retrospective design and the small number of patients included. Conclusion: Both surgical routes have very similar outcomes on most sexual questions. A perineal approach combined with abdominal rectopexy did, however, demonstrate a slight decrease in sexual arousal of the patients after the intervention. abstract_id: PUBMED:31321820 Is voluntary pelvic floor muscles contraction important for sexual function in women with pelvic floor disorders? Aims: To investigate relationships between pelvic floor muscles (PFM) and sexual function (SF) in sexually active (SA) and not-SA (NSA) women with pelvic floor disorders (PFD). Methods: In 350 women with PFD: 173 (49.4%) SA, 177 (50.6%) NSA, Pelvic Organ Prolapse (POP)-Quantification, PFM tone, and strength were evaluated. Transperineal ultrasound (TPS) measured genital hiatus (GH) diameter, bladder neck (BN) movement. The Pelvic Organ Prolapse/Incontinence Sexual Questionnaire, IUGA-Revised (PISQ-IR), and Female Sexual Function Index (FSFI) were used. SA women were dichotomized according to muscle strength (weak/strong) and tone (normal/hypoactive). Results: FSFI scores reflected sexual dysfunction in 63.5% of SA women. 32.2% of partnered NSA women stated PFD as the reason for sexual inactivity. NSA women had higher POP stages and hypoactive PFM rates compared to SA: 72 (40.7%) vs 52 (30.1%), P = .04. TPS GH diameter did not differ between SA and NSA at rest or contraction, and did not correlate with SF. BN length was longer in SA at rest (15.0 ± 7.0 vs 13.1 ± 9.4, P = .03) and contraction (19.7 ± 7.0 vs 16.7 ± 10.2, P = .006); 30 (8.6%) subjects depressed the BN during contraction. GH change at contraction correlated with the Oxford Grading Scale (rps = 0.41; P < .001), and was smaller in women with nonfunctioning vs normal/underactive PFM (P < .001). Women with hypoactive PFM had lower SF in the PISQ-IR Global quality and FSFI Desire domains vs normal tone. BN length at rest, contraction, and total mobility correlated with several PISQ-IR and FSFI domains. Conclusions: In SA women with PFD, lower rates of hypoactive PFM tone were found. The ability to contract PFM did not influence SF. Greater mobility of the BN correlated with lower SF.
Introduction: Prior studies have reported an association of sexual dysfunction with pelvic floor dysfunction (PFD), but without defining causation. Aim: To investigate predictors of sexual function in women with PFD, including pelvic organ prolapse, stress urinary incontinence, overactive bladder, obstructed defecation, and fecal incontinence. Methods: This retrospective cross-sectional study included 755 women (mean age = 56 years, 68% postmenopausal) referred for PFD (2008-2013). Subjects underwent standardized history and examination, including demographics and assessment of pelvic floor function and sexual function using validated quality-of-life instruments. The physical examination included body mass index, Pelvic Organ Prolapse Quantification measurements, and pelvic muscle strength (Oxford scale). Proportional odds regression analysis tested patient characteristics, PFD, and other determinants of sexual dysfunction as predictors of sexual function. Main Outcome Measures: The Pelvic Floor Distress Inventory (PFDI-20) and Pelvic Floor Impact Questionnaire (PFIQ-7) to assess PFD and the Short Personal Experiences Questionnaire to assess sexual function. Results: The prevalence of PFD included pelvic organ prolapse (72%), stress urinary incontinence (66%), overactive bladder (78%), fecal incontinence (41%), and obstructed defecation (70%). Most subjects (74%) had a sexual partner and most (56%) reported recent sexual intercourse. Participants reported a low level of sexual desire and sexual enjoyment and moderate levels of sexual arousal and orgasm. When stratified by sexual enjoyment, 46% enjoyed sex and this group had lower PFDI and PFIQ scores, reflecting less quality-of-life burden. Pelvic organ prolapse, obstructed defecation, and fecal incontinence were associated with not enjoying sex. However, when adjusted for other determinants of sexual dysfunction (eg, aging, dyspareunia, atrophy, and partner issues), these associations disappeared. Conclusion: Women with PFD also have a large burden of sexual dysfunction, although this appears to be mediated by factors not unique to PFD. Answer: Pelvic floor muscle training (PFMT) has been studied to determine its effects on sexual function in women with pelvic organ prolapse (POP). The results from various studies suggest that PFMT can have a positive impact on sexual function in some women with POP. A randomized controlled trial found that women in the PFMT group experienced improved sexual function compared to the control group, with specific improvements including increased control, strength, and awareness of the pelvic floor, improved self-confidence, sensation of a "tighter" vagina, improved libido and orgasms, resolution of pain with intercourse, and heightened sexual gratification for partners. Women who described improved sexual function also demonstrated the greatest increases in pelvic floor muscle (PFM) strength and endurance (PUBMED:25401779). However, another study showed that supervised biofeedback-assisted PFMT started routinely at one week postpartum did not provide additional improvement in postpartum sexual function and lower urinary tract symptoms (PUBMED:33495013). This suggests that the timing and specific conditions of PFMT may influence its effectiveness on sexual function. Retrospective research indicated that the type of pelvic organ prolapse did not affect sexual function, muscle strength, and colorectal and urinary symptoms in their cohort (PUBMED:28913148). 
This suggests that PFMT's impact on sexual function may not be directly related to the type of POP. A randomized trial found that perioperative PFMT did not improve outcomes in women undergoing POP surgery, including sexual function (PUBMED:31843420); that trial's Day-40 comparison is reworked numerically in the sketch following this answer. This indicates that PFMT may not be beneficial in all contexts, particularly when combined with surgical intervention for POP. In summary, while some studies suggest that PFMT can improve sexual function in women with POP, the evidence is not consistent across all research. Factors such as the timing of PFMT, the presence of other interventions like surgery, and individual differences among women may influence the effectiveness of PFMT on sexual function in the context of POP.
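To make the "negligibly small" effect reported in PUBMED:31843420 concrete, the Day-40 summary statistics quoted in its abstract (PFDI-20 of 31, SD 24, in the experimental arm versus 38, SD 42, in the control arm) can be turned into a crude unadjusted confidence interval. The per-arm size of 48 is an assumption (96 women were randomised; the exact split is not given in the abstract), and the published interval (-25 to 13) was covariate-adjusted, so the figures differ slightly, but both intervals span zero.

import math

def mean_diff_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    # approximate 95% CI for a difference in means from summary statistics
    diff = m1 - m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return diff, diff - z * se, diff + z * se

# PFDI-20 at Day 40: experimental 31 (SD 24) vs control 38 (SD 42); n = 48 per arm assumed
diff, lo, hi = mean_diff_ci(31, 24, 48, 38, 42, 48)
print(f"difference = {diff:.0f}, 95% CI ({lo:.0f}, {hi:.0f})")  # -7, about (-21, 7)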
Instruction: Do the matrix degrading enzymes cathepsins B and D increase following a high intensity exercise regime? Abstracts: abstract_id: PUBMED:17055751 Do the matrix degrading enzymes cathepsins B and D increase following a high intensity exercise regime? Objective: It has been shown by others that levels of matrix degrading enzymes are increased in osteoarthritis (OA) and so are proposed to be involved in the aetiopathogenesis of the disease, including exercise-associated OA. Therefore we hypothesised that cathepsin B and cathepsin D were increased in cartilage samples previously shown to have early stage OA from 2-year-old Thoroughbred horses, euthanased for reasons other than this study, that had a history of 19-week high intensity exercise (n=6) compared to age and sex-matched horses with a history of low intensity exercise (n=6). Methods: Cartilage samples were used from four specific sites within the carpal joints. Standard immunolocalisation protocols and blind counting of positive and negative cells within the articular surface, mid-zone and deep zone (DZ) were used to test our hypothesis. Results: A high intensity exercise regime did not significantly alter the number of chondrocytes positive for cathepsin B, whereas a significant decrease was found for cathepsin D in the DZ, indicating that these enzymes are regulated differently by mechanical loading. Furthermore, cathepsin D varied according to the topographical location within the joint, reflecting biomechanical differences experienced during a high compared to a low intensity exercise regime. Conclusion: This study disproves our hypothesis that cathepsins B and D are increased following a high intensity exercise regime, unlike that reported for other matrix enzymes. abstract_id: PUBMED:8240719 Generation of matrix-degrading proteolytic system from fibronectin by cathepsins B, G, H and L. By their endoproteinase activities, cathepsins B, G, H and L can generate a matrix-degrading proteolytic system from fibronectin. All four cathepsins studied cleaved fibronectin into fragments that were either proteolytically active or activated after incubation at pH 7.4 and in the presence of Ca2+. The highest enhancement of the matrix protein-degrading activity was observed after gelatin-affinity chromatography of each digest. These results suggest that the effect of cathepsins at physiological pH in vivo may be enhanced by the activation of a matrix-degrading proteolytic system from fibronectin. abstract_id: PUBMED:27923176 Co-distribution of cysteine cathepsins and matrix metalloproteases in human dentin. It has been hypothesized that cysteine cathepsins (CTs) along with matrix metalloproteases (MMPs) may work in conjunction in the proteolysis of mature dentin matrix. The aim of this study was to verify simultaneously the distribution and presence of cathepsins B (CT-B) and K (CT-K) in partially demineralized dentin, and further to evaluate the activity of CTs and MMPs in the same tissue. The distribution of CT-B and CT-K in sound human dentin was assessed by immunohistochemistry. A double-immunolabeling technique was used to identify, at once, the occurrence of those enzymes in dentin. Activities of CTs and MMPs in dentin extracts were evaluated spectrofluorometrically. In addition, in situ gelatinolytic activity of dentin was assayed by zymography. The results revealed the distribution of CT-B and CT-K along the dentin organic matrix and also indicated co-occurrence of MMPs and CTs in that tissue.
The enzyme kinetics studies showed proteolytic activity in dentin extracts for both classes of proteases. Furthermore, it was observed that, at least for sound human dentin matrices, the activity of MMPs seems to be predominant over that of the CTs. abstract_id: PUBMED:8467955 Degradation of bone matrix proteins by osteoclast cathepsins. 1. The degradation of the bone matrix proteins osteocalcin, osteonectin and alpha 2HS-glycoprotein by human cathepsins B and L and human osteoclastoma cathepsins has been investigated. 2. Intermediate degradation products (Mr > 12 kDa) were not observed during the digestion of alpha 2HS-glycoprotein and osteonectin by cathepsins B and L, although they were observed with some of the osteoclastoma cathepsins. Most of the osteoclastoma cathepsins were capable of degrading these two proteins to small peptides at comparable rates. 3. Each cathepsin produced a different pattern of osteocalcin degradation products. 4. The extensive range of non-collagenous proteins in bone matrix may necessitate the production by osteoclasts of cathepsins with different specificities during bone resorption. abstract_id: PUBMED:36633092 Experimental approaches for altering the expression of Abeta-degrading enzymes. Cerebral clearance of amyloid β-protein (Aβ) is decreased in early-onset and late-onset Alzheimer's disease (AD). Aβ is cleared from the brain by enzymatic degradation and by transport out of the brain. More than 20 Aβ-degrading enzymes have been described. Increasing the degradation of Aβ offers an opportunity to decrease brain Aβ levels in AD patients. This review discusses the direct and indirect approaches which have been used in experimental systems to alter the expression and/or activity of Aβ-degrading enzymes. Also discussed are the enzymes' regulatory mechanisms, the conformations of Aβ they degrade, where in the scheme of Aβ production, extracellular release, cellular uptake, and intracellular degradation they exert their activities, and changes in their expression and/or activity in AD and its animal models. Most of the experimental approaches require further confirmation. Based upon each enzyme's effects on Aβ (some of the enzymes also possess β-secretase activity and may therefore promote Aβ production), its direction of change in AD and/or its animal models, and the Aβ conformation(s) it degrades, investigating the effects of increasing the expression of neprilysin in AD patients would be of particular interest. Increasing the expression of insulin-degrading enzyme, endothelin-converting enzyme-1, endothelin-converting enzyme-2, tissue plasminogen activator, angiotensin-converting enzyme, and presequence peptidase would also be of interest. Increasing matrix metalloproteinase-2, matrix metalloproteinase-9, cathepsin-B, and cathepsin-D expression would be problematic because of possible damage by the metalloproteinases to the blood-brain barrier and the cathepsins' β-secretase activity. Many interventions which increase the enzymatic degradation of Aβ have been shown to decrease AD-type pathology in experimental models. If a safe approach can be found to increase the expression or activity of selected Aβ-degrading enzymes in human subjects, then the possibility that this approach could slow AD progression should be examined in clinical trials. abstract_id: PUBMED:33809973 Key Matrix Remodeling Enzymes: Functions and Targeting in Cancer.
Tissue functionality and integrity demand continuous changes in the distribution of major components in the extracellular matrices (ECMs) under normal conditions, aiming at tissue homeostasis. Major matrix degrading proteolytic enzymes are matrix metalloproteinases (MMPs), plasminogen activators, atypical proteases such as intracellular cathepsins, and glycolytic enzymes including heparanase and hyaluronidases. Matrix proteases evoke epithelial-to-mesenchymal transition (EMT) and regulate ECM turnover under normal procedures, as well as cancer cell phenotype, motility, invasion, autophagy, angiogenesis and exosome formation through vital signaling cascades. ECM remodeling is also achieved by glycolytic enzymes that are essential for cancer cell survival, proliferation and tumor progression. In this article, the types of major matrix remodeling enzymes, their effects in cancer initiation, propagation and progression, as well as their pharmacological targeting and ongoing clinical trials, are presented and critically discussed. abstract_id: PUBMED:30234276 Roles of cathepsins in pancreatic cancer. Pancreatic cancer is an aggressive disease with rapid invasion and metastasis. Extracellular matrix degrading enzymes play an important role in cancer cell invasion and migration. Cathepsins are a group of proteolytic enzymes, which are responsible for the matrix turnover. Among the cathepsins, a greater number of studies have focused upon cysteine cathepsins. The function and activities of these enzymes are interwoven, and their interplay causes the activation of one another by following a proteolytic cascade. This review focuses on the differential expression of cathepsins in different types of pancreatic cancer and controls, and the importance of cathepsins in various phenomena responsible for tumorigenesis and its spread in experimental and human studies. Thus, cathepsins and their expression in pancreatic cancer may be used as potential biomarkers and may prove to be important therapeutic targets if tested clinically. abstract_id: PUBMED:21248362 Cysteine cathepsins in human carious dentin. Matrix metalloproteinases (MMPs) are important in dentinal caries, and analysis of recent data demonstrates the presence of other collagen-degrading enzymes, cysteine cathepsins, in human dentin. This study aimed to examine the presence, source, and activity of cysteine cathepsins in human caries. Cathepsin B was detected with immunostaining. Saliva and dentin cysteine cathepsin and MMP activities on caries lesions were analyzed spectrofluorometrically. Immunostaining demonstrated stronger cathepsin B staining in carious than in healthy dentin. In carious dentin, cysteine cathepsin activity increased with increasing depth and age in chronic lesions, but decreased with age in active lesions. MMP activity decreased with age in both active and chronic lesions. Salivary MMP activities were higher in patients with active than chronic lesions and with increasing lesion depth, while cysteine cathepsin activities showed no differences. The results indicate that, along with MMPs, cysteine cathepsins are important, especially in active and deep caries. abstract_id: PUBMED:30897858 Cysteine Cathepsins and their Extracellular Roles: Shaping the Microenvironment. For a long time, cysteine cathepsins were considered primarily as proteases crucial for nonspecific bulk proteolysis in the endolysosomal system.
However, this view has dramatically changed, and cathepsins are now considered key players in many important physiological processes and in diseases like cancer, rheumatoid arthritis, and various inflammatory diseases. Cathepsins are emerging as important players in the extracellular space, and the paradigm is shifting from the degrading enzymes to the enzymes that can also specifically modify extracellular proteins. In pathological conditions, the activity of cathepsins is often dysregulated, resulting in their overexpression and secretion into the extracellular space. This is typically observed in cancer and inflammation, and cathepsins are therefore considered valuable diagnostic and therapeutic targets. In particular, the investigation of limited proteolysis by cathepsins in the extracellular space is opening numerous possibilities for future breakthrough discoveries. In this review, we highlight the most important findings that establish cysteine cathepsins as important players in the extracellular space and discuss their roles that reach beyond processing and degradation of extracellular matrix (ECM) components. In addition, we discuss the recent developments in cathepsin research and the new possibilities that are opening in translational medicine. abstract_id: PUBMED:17569248 Cathepsin D and MMP-9 activity increase following a high intensity exercise in hind limb muscles of young rats. The influence of an intensive exercise regime on cathepsin D and MMP-9 activity in hind limb muscles was investigated. We hypothesized that high-intensity exercise would increase the levels of these proteins, indicating their involvement in the pathogenesis of exercise-induced muscle injury. Muscle fibers from the gastrocnemius and soleus were used from young (6-mo-old) female rats (n = 6) who completed 10 consecutive days of treadmill running at high intensity (34 m min(-1), gradually up to 40 min per day), compared with nonrunning, age- and sex-matched rats (n = 6). After the high-intensity exercise regime, cathepsin D activity significantly increased in the gastrocnemius (from 6.6 x 10(-3) to 10.7 x 10(-3) nM tyrosine x mg(-1) protein x min(-1), a 61% increase) and the soleus (from 5.9 x 10(-3) to 8.9 x 10(-3), a 66% increase). The level of MMP-9 mRNA, expressed as ng mg(-1) protein, increased in both muscles subjected to high-intensity running. The results of this study suggest that high-intensity running results in an elevation in the activity of lysosomal enzymes involved in matrix protein degradation. Answer: The matrix degrading enzymes cathepsins B and D do not necessarily increase following a high intensity exercise regime. A study that investigated the effect of a high intensity exercise regime on the levels of cathepsins B and D in cartilage samples from Thoroughbred horses found that the number of chondrocytes positive for cathepsin B did not significantly alter, whereas there was a significant decrease in cathepsin D in the deep zone of the cartilage. This indicates that these enzymes are regulated differently by mechanical loading and that cathepsin D varies according to the topographical location within the joint, reflecting biomechanical differences experienced during high compared to low intensity exercise regimes (PUBMED:17055751).
However, another study on young rats showed that after a high-intensity exercise regime, cathepsin D activity significantly increased in the gastrocnemius and soleus muscles, suggesting that high-intensity running results in an elevation in the activity of lysosomal enzymes involved in matrix protein degradation (PUBMED:17569248). These findings suggest that the response of cathepsins B and D to high intensity exercise may be tissue-specific and that while there may not be an increase in cartilage, there could be an increase in muscle tissue.
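As a quick arithmetic check of the muscle findings cited above (a worked example added for clarity, not taken from the source), the relative increase in gastrocnemius cathepsin D activity reported in PUBMED:17569248 follows directly from the two activity values:

```latex
% Relative increase in gastrocnemius cathepsin D activity (PUBMED:17569248)
\[
\frac{10.7\times 10^{-3} - 6.6\times 10^{-3}}{6.6\times 10^{-3}}
  = \frac{4.1}{6.6} \approx 0.62
\]
```

which is in line with the roughly 61% rise quoted in the abstract; the small difference reflects rounding of the reported activities.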
Instruction: Primary prevention with a defibrillator: are therapies always really optimized before implantation? Abstracts: abstract_id: PUBMED:22696518 Primary prevention with a defibrillator: are therapies always really optimized before implantation? Aims: Left ventricle ejection fraction (LVEF) ≤ 30-35% is widely accepted as a cut-off for primary prevention with an implantable cardiac defibrillator (ICD) in patients with both ischaemic and non-ischaemic cardiomyopathy supposedly on optimal medical therapy. This study reports the evolution of LVEF and treatment of patients implanted in our institution with an ICD for primary prevention of sudden death, after 2 years of follow-up. Methods And Results: Among 84 patients with LVEF under 35% implanted between 2005 and 2007, 28 (33%) had improved their LVEF >35% after the 2 years of follow-up. During this period, even though beta-blockers (98%) and renin-angiotensin system (RAS) blockers (95%) were already initially prescribed, treatments were significantly optimized, with improvement of maximal doses of beta-blockers and RAS blockers at 2-year follow-up compared with initial prescription (62 vs. 37% and 68 vs. 45%, respectively). In patients with improved LVEF, a trend toward better treatment optimization and more revascularization procedures (in the sub-group of ischaemic patients) was observed compared with non-improved LVEF patients. Conclusions: In our study of patients with prophylactic ICD, one-third of them had improved their LVEF after a 2-year follow-up. Despite optimal medical therapy at the time of implantation, we were able to further improve the maximal treatment doses after implantation. This study highlights the issue of what should be considered 'optimal' therapy and the possibility of improvement of LVEF related to truly optimized treatment before implantation. abstract_id: PUBMED:28084973 Implantable cardioverter-defibrillator implantation for primary and secondary prevention: indications and outcomes. Implantable cardioverter-defibrillators effectively reduce the rate of sudden cardiac death in children. Significant efforts have been made to better characterise the indications for their placement, and over the past two decades there has been a shift in their use from secondary to primary prevention. Primary prevention includes placement in patients thought to be at high risk of sudden cardiac death before the patient experiences any event. Secondary prevention includes placement after a high-risk event including sustained ventricular tachycardia or resuscitated cardiac arrest. Although liberal device implantation may be appealing even in patients having marginal indications, studies have shown high rates of adverse effects including inappropriate device discharges and the need for re-intervention because of hardware malfunction. The indications for placement of an implantable cardioverter-defibrillator, whether for primary or secondary prevention of sudden cardiac death, vary based on cardiac pathology. This review will assist the provider in understanding the risks and benefits of device implantation in order to enhance the shared decision-making capacity of patients, families, and providers. abstract_id: PUBMED:27343008 Early Implantation of Primary Prevention Implantable Cardioverter Defibrillators for Patients with Newly Diagnosed Severe Nonischemic Cardiomyopathy. Background: Primary prevention implantable cardioverter defibrillators (ICDs) reduce mortality in selected patients with severe systolic dysfunction.
Current guidelines suggest a 3- to 6-month waiting period before implantation. Methods: We retrospectively studied 29 consecutive patients with newly diagnosed nonischemic cardiomyopathy (NICM) who underwent primary prevention ICD implantation within 6 months of diagnosis between January 2008 and April 2014. Cardiac MRI (CMR) evaluated left ventricular ejection fraction (LVEF) and regional fibrosis preimplant. The primary end point was "failure to qualify for an ICD at 12 months postimplant," either due to LVEF ≥ 35% or deterioration necessitating mechanical support or transplantation, without appropriate ICD therapy. Secondary end points were appropriate and inappropriate ICD therapy. Results: Baseline mean age was 44.2 ± 14.8 years and median LVEF 16.4%. Median time from diagnosis to implant was 32 days. At 12 months, 17 patients (58.6%) no longer qualified for an ICD, mainly due to LVEF improvement. At follow-up (mean 32.0 ± 20.6 months), three patients received appropriate therapy (one for ventricular fibrillation). All three had CMR late gadolinium enhancement (LGE) and nonsustained ventricular tachycardia (NSVT) preimplant. Cardiac resynchronization at implant predicted LVEF improvement. Conclusion: Early appropriate therapy, particularly for ventricular fibrillation, is infrequent for patients with very severe NICM who have ICDs implanted within 6 months of diagnosis. The majority of these patients would not qualify for an ICD at 12 months postinsertion. In the absence of a multimodality risk score, early ICD insertion should only be considered in selected cases (presence of LGE and NSVT). Wearable cardioverter defibrillators may have a role as a bridge to ICD decision. abstract_id: PUBMED:31769522 Documentation of shared decision making around primary prevention defibrillator implantations. Introduction: Patients eligible for primary prevention implantable cardioverter-defibrillator (ICD) therapy are faced with a complex decision that needs a clear understanding of the risks and benefits of such an intervention. In this study, our goal was to explore the documentation of primary prevention ICD discussions in the electronic medical records (EMRs) of eligible patients. Methods: In 1523 patients who met criteria for primary prevention ICD therapy between 2013 and 2015, we reviewed patient charts for ICD-related documentation: "mention" by physicians or "discussion" with patient/family. The attitude of the physician and the patient/family toward ICD therapy during discussions was categorized into negative, neutral, or positive preference. Patients were followed to the end-point of ICD implantation. Results: Over a median follow-up of 442 days, 486 patients (32%) received an ICD. ICD was mentioned in the charts of 1105 (73%) patients, and a discussion with the patient/family about the risks and benefits of ICD was documented in 706 (46%) charts. On multivariable analyses, positive cardiologist (hazard ratio [HR]: 7.9, 95% confidence interval [CI]: 1.0-59.7, P < .05), electrophysiologist (HR: 7.7, 95% CI: 1.9-31.7, P < .001), and patient/family (HR: 9.9, 95% CI: 6.2-15.7, P < .001) preferences toward ICD therapy during the first documented ICD discussion were independently associated with ICD implantation. Conclusions: In a large cohort of patients eligible for primary prevention ICD therapy, a discussion with the patient/family about the risks and benefits of ICD implantation was documented in less than 50% of the charts.
More consistent documentation of the shared decision making around ICD therapy is needed. abstract_id: PUBMED:35448096 Implantable Cardioverter Defibrillator in Primary and Secondary Prevention of SCD-What We Still Don't Know. Implantable cardioverter defibrillators (ICDs) are the cornerstone of primary and secondary prevention of sudden cardiac death (SCD) all around the globe. In almost 40 years of technological advances and multiple clinical trials, there has been a continuous increase in the implantation rate. The purpose of this review is to highlight the grey areas related to actual ICD recommendations, focusing specifically on the primary prevention of SCD. We will discuss the still-existing controversies strongly reflected in the differences between the international guidelines regarding ICD indication class in non-ischemic cardiomyopathy, and also address the question of early implantation after myocardial infarction in the absence of clear protocols for patients at high risk of life-threatening arrhythmias. Correlating the insufficient data in the literature for 40-day waiting times with the increased risk of SCD in the first month after myocardial infarction, we review the pros and cons of early ICD implantation. abstract_id: PUBMED:28711883 Medical evaluation of efficiency of optimized models for early detection and primary prevention of cardiovascular diseases. Introduction: Cardiovascular disease currently occupies a leading place in the prevalence, incidence, causes of disability, and mortality of the adult population in Ukraine and worldwide. The prevalence of hypertension in the adult population ranges from 25-40%; coronary heart disease affects almost 20% of people aged 50-59 years, and 24.3% of them have a silent form of coronary artery disease. The study is justified by the need for health institutions to implement the Law of Ukraine No. 3611-VI of 07.07.2011 "On Amendments to the Basic Laws of Ukraine on health care on improvement of care" and the Ministry of Health of Ukraine order No. 621/60 of 24.07.2013 "On the system of cardiac care in health facilities of Ukraine"; it is therefore extremely important to develop an optimized model of early detection and primary prevention of cardiovascular diseases at the primary level of health care. The aim of the research is to develop and evaluate optimized models for early detection and primary prevention of cardiovascular diseases at the level of the general practitioner of family medicine. Material And Methods: The methodological apparatus comprises a complex of medical and social research methods that meet the requirements of public health: bibliosemantic analysis, a systematic approach, statistical methods, and expert evaluations. To determine effectiveness, a medical evaluation was conducted in 33 general practice clinics in the Poltava region, including 7 urban and 26 rural. Expert evaluation covered 825 patients, of whom 175 were from urban and 650 from rural areas. The study found that 193 patients (23.4%) achieved target blood pressure through implementation of recommendations on optimizing behavior with respect to risk factors and lifestyle. abstract_id: PUBMED:12868329 Primary and secondary prevention of colorectal cancer. Colorectal cancer is a major public health problem. The authors review the literature about the environmental factors leading to colorectal cancer. Chemoprevention of colorectal cancer is also discussed, particularly by aspirin and non-steroidal anti-inflammatory drugs.
Development of specific cyclooxygenase-2 inhibitors constitutes a promising field of research. Secondary prevention by colonoscopy and polypectomy should lead to a lower rate of colorectal cancer and improved mortality. abstract_id: PUBMED:23625310 Patient selection for the implantation of a left atrial appendage occluder in primary and secondary prevention of cardioembolic stroke in atrial fibrillation. The implantation of a left atrial appendage (LAA) occluder has evolved into an established non-pharmacological alternative to oral anticoagulation (OAC) in the prevention of cardioembolic stroke in patients with atrial fibrillation. While 2 randomized trials investigated the LAA occluder as an alternative treatment in patients who can also undergo OAC, current guidelines recommend the LAA occluder rather as a second-line therapy if permanent OAC is not possible due to contraindications. This is in line with current practice, where an LAA occluder is usually only implanted if OAC is contraindicated or stopped due to bleeding. The LAA occluder seems most promising for patients with a high risk for both stroke without OAC and severe bleeding with OAC. After patient informed consent, the LAA occluder may also represent an option for patients who are unwilling to undergo OAC. Since a large proportion of patients do not receive OAC despite an indication (because of contraindications or mere fear of bleeding), and since the majority of patients have to stop OAC during the course of 2 years, mostly due to bleeding, the LAA occluder may play an important role in the long-term prophylaxis of cardioembolic stroke due to atrial fibrillation. abstract_id: PUBMED:38073733 Indications and Effectiveness of ICD for Primary and Secondary Prevention in Patients Admitted in Ahvaz Imam Khomeini Hospital since 2017. Background: Implantable cardioverter-defibrillators (ICDs) have been established for primary and secondary prevention of fatal arrhythmias and effectively reduce the rate of sudden cardiac death (SCD). This study aims to evaluate the indications and effectiveness of ICD for primary and secondary prevention of SCD. Materials And Methods: This retrospective study was conducted on 229 patients (136 for primary and 93 for secondary prevention) with ICD implantations in Imam Khomeini Hospital, Ahvaz, between 2017 and 2020. Arrhythmic events after implantation of ICDs were captured in stored electrograms, and the therapies delivered (antitachycardia pacing (ATP)/shock) were retrieved from the device memory. Results: The indications for ICD implantation in primary and secondary prevention were different (P < 0.0001). The most common cause of ICD implantation for primary prevention was ischemic cardiomyopathy (ICMP, 90.4%) and for secondary prevention was ICMP (58.1%) followed by dilated cardiomyopathy (31.2%). During ICD implantation, 54 patients (39.7%) with ICD implantation for primary prevention and 50 patients (53.8%) for secondary prevention had arrhythmia (P = 0.043). The rate of appropriate therapies in patients with secondary prevention was higher than in primary prevention (57.9% vs. 42.1%), while the rate of inappropriate treatments in patients with a primary prevention indication was higher than in secondary prevention (63% vs. 37%) (P = 0.060). Conclusions: ICMP was the main cause of ICD implantation for the prevention of SCD in both groups.
At follow-up, a high prevalence of appropriate ICD therapy was observed in both groups, and this risk was slightly higher in the secondary prevention group. abstract_id: PUBMED:28017363 Statins for Primary Prevention of Cardiovascular Disease. The US Preventive Services Task Force has recently recommended the use of statins in primary prevention of cardiovascular disease in adults. Doubts remain as to whether this approach is really cost-effective, and additional drawbacks include the increasing burden of potential side effects and incremental costs due to laboratory monitoring. Answer: The question of whether therapies are always optimized before the implantation of a defibrillator for primary prevention is addressed in the study reported in abstract PUBMED:22696518. The study found that among patients with left ventricle ejection fraction (LVEF) under 35% who were implanted with an ICD for primary prevention of sudden death, 33% had improved their LVEF to over 35% after 2 years of follow-up. Despite being on optimal medical therapy at the time of implantation, the study observed that treatments could be further optimized post-implantation, with significant improvements in the maximal doses of beta-blockers and renin-angiotensin system (RAS) blockers. This suggests that what is considered 'optimal' therapy may not be fully achieved at the time of ICD implantation, and there is potential for further optimization of treatment that could improve LVEF (PUBMED:22696518). The abstracts do not provide a definitive answer to whether therapies are always optimized before ICD implantation for primary prevention, but they do highlight the complexity of the decision-making process and the potential for improvement in medical therapy even after the device is in place. The importance of shared decision-making and careful patient selection is emphasized in abstract PUBMED:31769522, which notes that documentation of discussions about the risks and benefits of ICD implantation is often lacking. Abstract PUBMED:35448096 also points out that there are still grey areas and controversies in the guidelines regarding ICD implantation, particularly in the context of primary prevention in non-ischemic cardiomyopathy and early implantation after myocardial infarction. In summary, while optimal therapy is a goal prior to ICD implantation for primary prevention, there may be room for further optimization of medical therapy post-implantation, and the decision to implant an ICD should involve careful consideration of individual patient factors and shared decision-making (PUBMED:22696518, PUBMED:31769522, PUBMED:35448096).
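A brief aside on the headline figure in this answer: the 33% improvement rate corresponds to 28 of 84 patients. The following minimal Python sketch (an illustration added here, not an analysis performed in the study; the Wilson score interval is one standard choice among several) shows how much sampling uncertainty surrounds a proportion of that size:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(28, 84)  # 28 of 84 patients improved LVEF above 35%
print(f"improvement rate 28/84 = {28/84:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# -> improvement rate 28/84 = 33.3%, 95% CI 24.2%-43.9%
```

The interval is wide, which supports the answer's caution that single-centre findings of this kind need confirmation in larger, longitudinal studies.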
Instruction: Is the low-sodium diet actually indicated for all patients with stable heart failure? Abstracts: abstract_id: PUBMED:20414532 Is the low-sodium diet actually indicated for all patients with stable heart failure? Background: Although a low-sodium diet is indicated for heart failure (HF), there is no evidence this dietary restriction is beneficial to all patients. Objective: To prospectively study the acute effects of a low-sodium diet in patients (pts) with heart failure (HF). Methods: Fifty stable outpatients with mild to moderate HF who reported previously consuming 6.6 g table salt/day were studied. In Phase 1, all pts were placed on a diet with 2 g of salt for 7 days, followed by randomization into 2 subgroups (Phase 2): one to receive 6 g of salt (subgroup I) and the other, 2 g of salt/day for 7 days (subgroup II). Results: Phase 1: the diet with 2 g of salt reduced the BMI, plasma and urinary sodium, protein consumption, iron, zinc, selenium and vitamin B12; it increased plasma levels of norepinephrine, nitrate, and serum aldosterone and improved quality of life. Phase 2: for pts with low BMI, the use of 6 g salt/day acutely decreased the levels of norepinephrine, albumin and cholesterol in plasma. No difference was observed in pts with higher BMI. Conclusion: The diet with 2 g salt/day for pts with HF increased the neurohormonal activation associated with HF progression. The BMI can influence the response to the neurohormonal activation in a low-sodium diet in pts with HF. Further studies to test salt restriction for longer periods are recommended. abstract_id: PUBMED:27903829 Long-Term Adherence to Low-Sodium Diet in Patients With Heart Failure. Although following a low-sodium diet (LSD) for heart failure (HF) has been recommended for decades, little is known about factors related to long-term patient adherence. The purposes of this study were to (a) compare sodium intake and factors affecting adherence in a long-term adherent group and in a non-adherent group and (b) examine predictors of membership in the long-term adherent group. Patients with HF (N = 74) collected 24-hr urine samples and completed the Dietary Sodium Restriction Questionnaire and the Patient Health Questionnaire-9. Long-term adherence was determined using the Stage of Dietary Behavior Change Scale. The long-term adherent group had lower sodium intake (3,086 mg vs. 4,135 mg, p = .01) and perceived more benefits from the LSD than the non-adherent group. Only positive attitudes toward the LSD predicted membership in the long-term adherence group (odds ratio [OR] = 1.18, p = .005). Interventions focused on enhancing positive perceptions of the benefits of an LSD may improve long-term dietary adherence in patients with HF. abstract_id: PUBMED:26296246 Self-reported Adherence to a Low-Sodium Diet and Health Outcomes in Patients With Heart Failure. Background: Most clinicians rely on patients' self-report of following a low-sodium diet to determine adherence of patients with heart failure (HF). Whether self-reported adherence to a low-sodium diet is associated with cardiac event-free survival is unclear. Purposes: To determine (1) whether self-reported adherence is concordant with adherence to a low-sodium diet measured by food diaries and 24-hour urinary sodium excretion and (2) whether self-reported adherence to a low-sodium diet predicts cardiac event-free survival.
Methods: Adherence to a low-sodium diet was measured using 3 measures in 119 HF patients: (1) self-reported adherence, 1 item from the Self-care of Heart Failure Index scale; (2) a 3-day food diary; (3) 24-hour urinary sodium excretion. Patients were followed up for a median of 297 days to determine cardiac hospitalization or emergency department visit. One-way analysis of variance and Cox regression were used to address our purposes. Results: Self-reported adherence was concordant with adherence to a low-sodium diet measured by food diaries and 24-hour urinary sodium excretion. Thirty-one patients who reported they always follow a low-sodium diet had an average sodium intake less than 3 g/d (F = 5.07, P = .002) and a mean 24-hour urinary sodium excretion of 3.3 g (F = 3.393, P = .020). Patients who reported they never or rarely follow a low-sodium diet had 4.7 times greater risk of having cardiac events than did those who always followed a low-sodium diet (P = .017). Conclusion: Self-reported adherence to a low-sodium diet predicted cardiac event-free survival, demonstrating that clinicians can use this as an indicator of adherence. abstract_id: PUBMED:15935733 Factors related to nonadherence to low sodium diet recommendations in heart failure patients. Background: A low sodium diet is a cornerstone of nonpharmacologic therapy for heart failure patients. Although nonadherence is common, little is known about why heart failure patients fail to adhere to this diet. Aims: The purpose of this study was to explore the experience of heart failure patients in following a low sodium diet. Methods And Results: We conducted a qualitative descriptive study with a convenience sample of 20 participants. Interviews were conducted and analyzed for themes. The data reflected three primary themes about nonadherence to the low sodium diet: lack of knowledge, interference with socialization, and lack of food selections. Participants expressed a need for details about low sodium food selection, food preparation, and the rationale for the diet. Lack of knowledge also was manifested as diet confusion for participants who required additional dietary restrictions. Interference with socialization was manifested by patients' experiences with family conflict when family members ate high-sodium foods and difficulty eating out. The theme of lack of low sodium food selections was reflected by comments about limited food choices and lack of palatability. Conclusion: Researchers and clinicians need to consider patients' perceptions as they generate and evaluate interventions to increase adherence to a low sodium diet. abstract_id: PUBMED:20592543 Theory-based low-sodium diet education for heart failure patients. Theory-based teaching strategies for promoting adherence to a low-sodium diet among patients with heart failure are presented in this article. The strategies, which are based on the theory of planned behavior, address patient attitude, subjective norm, and perceived control as patients learn how to follow a low-sodium diet. Home health clinicians can select a variety of the instructional techniques presented to meet individual patient learning needs. abstract_id: PUBMED:18326994 Relationship of heart failure patients' knowledge, perceived barriers, and attitudes regarding low-sodium diet recommendations to adherence.
The purposes of this study were to describe heart failure patient perceptions regarding instructions received for following a low-sodium diet and the benefits, barriers, and ease and frequency of following the diet. A total of 246 patients with heart failure referred from academic medical centers in the United States and Australia participated in the study. A subset of 145 patients provided 24-hour urine samples for sodium excretion assessment. While most (80%) patients reported receiving recommendations to follow a low-sodium diet, their recall of specific instructions was poor. Although the majority (75%) reported following a low-sodium diet most or all of the time, 24-hour urine sodium excretion indicated that only 25% of patients were adherent. Patients who reported being more adherent, however, had lower urine sodium excretion levels. Attitudes regarding difficulty in and perceived benefits of following the diet were not related to sodium excretion. Data on attitudes and barriers provided guidance for strategies to improve adherence. abstract_id: PUBMED:24598553 Association between self-reported adherence to a low-sodium diet and dietary habits related to sodium intake in heart failure patients. Background: Sodium restriction is the primary dietary therapy in heart failure (HF); however, assessing sodium intake is challenging for clinicians, who commonly rely on patients' self-report of following a low-sodium diet to determine adherence. It is important to further explore the utility of self-reported adherence to a low-sodium diet in patients with HF. Objectives: The objective of this study was to evaluate the association between patients' self-reported adherence to a low-sodium diet and dietary habits related to sodium intake in patients with chronic HF. Methods: Patients with HF seen in a tertiary care Heart Function Clinic who had been taught about a low-sodium diet with a target of less than 2300 mg/d were included. Self-perception of compliance and dietary habits related to sodium intake were evaluated by using a dietary questionnaire. Patients were divided into 3 groups according to self-reported adherence to a low-sodium diet: never, sometimes, and always. Results: Overall, 237 patients (median age, 66 years; 72.6% men) were included. Compared with the other 2 groups, patients who stated always following a low-sodium diet were less likely to use salt in cooking or at the table. However, 4.2% of the patients in the always group reported eating canned or packaged soups every day. Moreover, the highest proportion of patients eating fast foods 1 to 3 times a week was found among those in the sometimes group (22.9%) compared with the never (9.1%) and always (6.7%) groups (P = .002). Importantly, the rest of the food items did not show any significant differences between self-reported adherence groups. Conclusion: Self-report of adherence to a low-sodium diet is not reliable among patients with HF, who associate the idea of following a low-sodium diet mainly with not using salt for cooking or at the table but not with reducing the frequency of intake of high-sodium processed foods. abstract_id: PUBMED:17688420 Normal-sodium diet compared with low-sodium diet in compensated congestive heart failure: is sodium an old enemy or a new friend?
The aim of the present study was to evaluate the effects of a normal-sodium (120 mmol sodium) diet compared with a low-sodium diet (80 mmol sodium) on readmissions for CHF (congestive heart failure) during 180 days of follow-up in compensated patients with CHF. A total of 232 compensated CHF patients (88 female and 144 male; New York Heart Association class II-IV; 55-83 years of age, ejection fraction <35% and serum creatinine <2 mg/dl) were randomized into two groups: group 1 contained 118 patients (45 females and 73 males) receiving a normal-sodium diet plus oral furosemide [250-500 mg, b.i.d. (twice a day)]; and group 2 contained 114 patients (43 females and 71 males) receiving a low-sodium diet plus oral furosemide (250-500 mg, b.i.d.). The treatment was given at 30 days after discharge and for 180 days, in association with a fluid intake of 1000 ml per day. Signs of CHF, body weight, blood pressure, heart rate, laboratory parameters, ECG, echocardiogram, levels of BNP (brain natriuretic peptide) and aldosterone, and PRA (plasma renin activity) were examined at baseline (30 days after discharge) and after 180 days. The normal-sodium group had a significant reduction (P<0.05) in readmissions. BNP values were lower in the normal-sodium group than in the low-sodium group (425±125 compared with 685±255 pg/ml, respectively; P<0.0001). Significant (P<0.0001) increases in aldosterone and PRA were observed in the low-sodium group during follow-up, whereas the normal-sodium group had a small significant reduction (P=0.039) in aldosterone levels and no significant difference in PRA. After 180 days of follow-up, aldosterone levels and PRA were significantly (P<0.0001) higher in the low-sodium group. The normal-sodium group had a lower incidence of rehospitalization during follow-up and a significant decrease in plasma BNP and aldosterone levels, and PRA. The results of the present study show that a normal-sodium diet improves outcome, and sodium depletion has detrimental renal and neurohormonal effects with worse clinical outcome in compensated CHF patients. Further studies are required to determine if this is due to a high dose of diuretic or the low-sodium diet. abstract_id: PUBMED:22492785 Low-sodium diet self-management intervention in heart failure: pilot study results. Background: Self-care management of a low-sodium diet is a critical component of comprehensive heart failure (HF) treatment. Aims: The primary purpose of this study was to examine the effectiveness of an educational intervention on reducing the dietary sodium intake of patients with HF. Secondary purposes were to examine the effects of the intervention on attitudes, subjective norm, and perceived behavioural control towards following a low-sodium diet. Methods: This was a randomized clinical trial of an educational intervention based on The Theory of Planned Behavior. Patients were randomized to either a usual care (n=25) or intervention group (n=27) with data collection at baseline, 6 weeks, and 6 months. The intervention group received low-sodium diet instructions and the usual care group received no dietary instructions. Nutrition Data Systems-Research software was used to identify the sodium content of foods on food diaries. Attitudes, subjective norm, and perceived behavioural control were measured using the Dietary Sodium Restriction Questionnaire.
Results: Analysis of covariance (between-subjects effects) revealed that dietary sodium intake did not differ between usual care and intervention groups at 6 weeks; however, dietary sodium intake was lower in the intervention group (F=7.3, df=1,29, p=0.01) at 6 months. Attitudes subscale scores were higher in the intervention group at 6 weeks (F=7.6, df=1,38, p<0.01). Conclusion: Carefully designed educational programmes have the potential to produce desired patient outcomes such as low-sodium diet adherence in patients with heart failure. abstract_id: PUBMED:18325457 Patients differ in their ability to self-monitor adherence to a low-sodium diet versus medication. Objective: Poor adherence to a low-sodium diet (LSD) and prescribed medications increases rehospitalization risk in patients with heart failure (HF). Clinicians have difficulty assessing adherence objectively, so they depend on patients' self-report. The degree to which self-reported adherence reflects actual adherence is unclear. We examined patients' ability to self-monitor adherence to an LSD and medications by comparing self-reported adherence with objective evidence of adherence. Methods: A total of 133 patients with HF (male 71%; ejection fraction 35% ± 14%) completed the Medical Outcomes Study Specific Adherence Scale. Adherence to the LSD and medication were assessed objectively using 24-hour urinary sodium excretion and dose counting with an electronic monitoring device, respectively. On the basis of self-report, patients were divided into adherent and non-adherent groups and evaluated for differences according to objective adherence. Results: There were no differences in urinary sodium levels between the self-reported LSD adherent and non-adherent groups (4560 mg vs. 4333 mg; P = .59). Self-reported adherent and non-adherent medication groups took 92.4% and 80.4% of prescribed doses, respectively (P < .001). Conclusion: Patients were able to accurately estimate adherence to medication, but they failed to estimate LSD adherence. This finding suggests that we need to improve our means of evaluating adherence to the LSD and of educating patients more thoroughly about following the LSD. We speculated that the inability to estimate LSD adherence may be the result of gaps in patients' knowledge that preclude accurate self-assessment. Answer: The low-sodium diet is commonly recommended for patients with heart failure (HF), but evidence suggests that it may not be beneficial for all patients with stable heart failure. A study indicated that a diet with 2 g of salt per day increased neurohormonal activation associated with HF progression, and the body mass index (BMI) could influence the response to neurohormonal activation in a low-sodium diet in patients with HF (PUBMED:20414532). Another study found that patients with HF who adhered to a low-sodium diet had lower sodium intake and perceived more benefits from the diet, with positive attitudes toward the low-sodium diet predicting long-term adherence (PUBMED:27903829). Self-reported adherence to a low-sodium diet was found to be concordant with adherence measured by food diaries and 24-hour urinary sodium excretion, and it predicted cardiac event-free survival, suggesting that clinicians can use self-reported adherence as an indicator (PUBMED:26296246). However, patients often face challenges in adhering to a low-sodium diet, including lack of knowledge, interference with socialization, and lack of food selections (PUBMED:15935733).
Educational interventions based on the theory of planned behavior, addressing patient attitude, subjective norm, and perceived control, may improve adherence to a low-sodium diet (PUBMED:20592543). Despite these interventions, many patients still do not adhere to the diet, and their recall of specific instructions is often poor (PUBMED:18326994). Self-reported adherence to a low-sodium diet is not always reliable, as patients may not reduce the frequency of intake of high-sodium processed foods (PUBMED:24598553). Interestingly, one study found that a normal-sodium diet compared with a low-sodium diet resulted in fewer readmissions for CHF and better clinical outcomes, suggesting that sodium restriction might have detrimental renal and neurohormonal effects (PUBMED:17688420). This indicates that the appropriateness of a low-sodium diet may vary among individuals with HF, and further studies are needed to clarify the role of sodium intake in the management of stable HF.
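Because the abstracts above mix salt-based targets (2 g or 6 g of table salt per day) with sodium-based targets (e.g., less than 2300 mg of sodium per day), a worked conversion may help orient the reader; this is standard chemistry added for illustration, not a calculation taken from the source:

```latex
% Sodium fraction of table salt (NaCl), from atomic masses
\[
\frac{m_{\mathrm{Na}}}{m_{\mathrm{NaCl}}} = \frac{22.99}{58.44} \approx 0.393
\quad\Longrightarrow\quad
2\,\mathrm{g\ NaCl} \approx 790\,\mathrm{mg\ Na}, \qquad
6\,\mathrm{g\ NaCl} \approx 2360\,\mathrm{mg\ Na}.
\]
```

On this scale, the 2 g/day salt arms in PUBMED:20414532 sit far below the 2300 mg/day sodium target cited in PUBMED:24598553, while the 6 g/day arm lands roughly at it.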
Instruction: Can sensory and/or motor reactions during percutaneous nerve evaluation predict outcome of sacral nerve modulation? Abstracts: abstract_id: PUBMED:19617755 Can sensory and/or motor reactions during percutaneous nerve evaluation predict outcome of sacral nerve modulation? Purpose: A major advantage of sacral nerve modulation in the treatment of fecal incontinence is the ability to determine the likely treatment outcome before implantation by means of a percutaneous nerve evaluation and a test stimulation period. This study evaluated the predictive value of both sensory and motor responses during percutaneous nerve evaluation for determining the outcome of subchronic test stimulation and permanent stimulation. Methods: All percutaneous nerve evaluation procedures performed between 2000 and 2007 were analyzed. Two hundred eight procedures (194 females; mean age, 56.7 years) were included in this study. Correct needle placement was confirmed by typical S-3 sensory and/or motor responses. The sensory and motor responses during the procedure were analyzed in relation to the outcomes of the test stimulation and permanent stimulation. Results: In all, 72.6% of patients had a successful subchronic test stimulation. A total of 13.9% had no motor response. There was no significant difference in outcome between the group with only sensory responses and the group with both sensory and motor responses (P = 0.89; odds ratio, 1.01; 95% confidence interval, 0.42-2.43). Correlation with permanent implantation showed no significant difference between both groups in outcome (P = 0.53; odds ratio, 0.48; 95% confidence interval, 0.17-1.41). Conclusion: Positive motor responses during percutaneous nerve evaluation are highly predictive of a successful outcome of subchronic test stimulation and permanent sacral nerve modulation. Sensory responses also have the same predictive value. For this reason, percutaneous nerve evaluation preferably should be performed in awake patients under local anesthesia to avoid missing those who may benefit from permanent stimulation but who do not have a motor response during the procedure. abstract_id: PUBMED:32636665 Sacral Nerve Modulation Has No Effect on the Postprandial Response in Irritable Bowel Syndrome. Purpose: Irritable bowel syndrome is a common gastrointestinal disorder with a global prevalence of approximately 11%. Onset or worsening of symptoms following digestion is one of the characteristics of the condition. The present study aimed at evaluating the postprandial sensory and motor response before and after treatment with sacral nerve modulation. Patients And Methods: Twenty-one irritable bowel syndrome patients, 12 diarrhea-predominant and 9 mixed, were eligible for a 6-week sacral nerve modulation test period. Patients were investigated with multimodal impedance planimetry including a standardized meal at baseline and at the end of 2 weeks of suprasensory stimulation embedded in the 6-week sacral nerve modulation period. Results: There was no statistically significant difference in the sensory response to heat or cold before and after sacral nerve modulation, p>0.05. At baseline, wall tension increased after the meal (mean 124.79 [range 82.5 to 237.3] mmHg.mm before the meal, mean 207.76 [range 143.5 to 429] mmHg.mm after the meal), p=0.048, indicating a postprandial response.
During sacral nerve modulation, the postprandial increase in wall tension did not reach statistical significance (mean 86.79 [range 28.8 to 204.5] mmHg.mm before the meal, mean 159.71 [range 71.3 to 270.8] mmHg.mm after the meal), p=0.277. However, there was no statistically significant difference between the postprandial wall tension at baseline and during sacral nerve modulation, p=0.489. Likewise, we found no difference between pressure or stretch ratio at baseline and during sacral nerve modulation, p>0.05. Conclusion: Sacral nerve modulation does not exert its positive treatment effects in diarrhea-predominant and mixed irritable bowel syndrome through a modulation of the postprandial response. abstract_id: PUBMED:35400117 Retroperitoneal hematoma post percutaneous sacral nerve evaluation: A case report. Sacral neuromodulation is an accepted therapy for various voiding dysfunctions. We report a 71-year-old male with a history of BPH post TURP and overactive bladder. He was on anticoagulants for atrial fibrillation. He underwent uneventful percutaneous sacral nerve evaluation. Five days later, he showed no improvement. The temporary lead was removed in clinic without complications. On day ten, he developed lower abdominal and genital skin bruising. A CT scan showed a presacral retroperitoneal hematoma. His hemoglobin dropped. He was admitted, managed conservatively, and discharged with a stable hemoglobin. Retroperitoneal hematoma post PNE is rare. Management is conservative. Angioembolization is reserved for unstable patients. abstract_id: PUBMED:35481714 Does response to percutaneous tibial nerve stimulation predict similar outcome to sacral nerve stimulation? Aims: Percutaneous tibial nerve stimulation (PTNS) is a simple neuromodulation technique to treat an overactive bladder. It is unclear whether the response to PTNS would suggest a similar response to sacral nerve stimulation (SNS), and whether PTNS could be utilized as an alternative test phase for an SNS implant. This study assessed whether PTNS response was a reliable indicator for subsequent SNS trials. Methods: We performed a retrospective review of the hospital databases to collect all patients who had PTNS and who subsequently had an SNS trial in two tertiary hospitals from 2014 to 2020. Response to both interventions was assessed. A 50% reduction in overactive symptoms (frequency-volume charts) was considered a positive response. McNemar's tests using exact binomial probability calculations were used. The statistical significance level was set to 0.05. Results: Twenty-three patients who had PTNS subsequently went on to a trial of SNS. All patients except one had previously poor response to PTNS treatment. Eight of them also failed the SNS trial. However, 15 patients (including the PTNS responder) had a successful SNS trial and proceeded with the second-stage battery implantation. The difference in response rates between the PTNS and SNS trial was statistically significant (p < 0.001). Conclusions: Poor response to PTNS does not seem to predict the likelihood of patients responding to SNS. A negative PTNS trial should not preclude a trial of a sacral nerve implant. The predictive factors for good and poor responses will be the subject of a larger study. abstract_id: PUBMED:30305847 Cost-effectiveness of sacral nerve stimulation and percutaneous tibial nerve stimulation for faecal incontinence.
Background: Subcutaneous sacral nerve stimulation is recommended by the United Kingdom (UK) National Institute for Health and Care Excellence (NICE) as a second-line treatment for patients with faecal incontinence who failed conservative therapy. Sacral nerve stimulation is an invasive procedure associated with complications and reoperations. This study aimed to investigate whether delivering less invasive and less costly percutaneous tibial nerve stimulation prior to sacral nerve stimulation is cost-effective. Methods: A decision analytic model was developed to estimate the cost-effectiveness of percutaneous tibial nerve stimulation with subsequent subcutaneous sacral nerve stimulation versus subcutaneous sacral nerve stimulation alone. The model was populated with effectiveness data from systematic reviews and cost data from randomized studies comparing both procedures in a UK National Health Service (NHS) setting. Results: Offering percutaneous tibial nerve stimulation prior to sacral nerve stimulation (compared with delivering sacral nerve stimulation straight away) was both more effective and less costly in all modeled scenarios. The estimated savings from offering percutaneous tibial nerve stimulation first were £662-£5,697 per patient. The probability of this strategy being cost-effective was around 80% at £20,000-£30,000 per quality-adjusted life-year (QALY). Conclusion: Our analyses suggest that offering patients percutaneous tibial nerve stimulation prior to sacral nerve stimulation can be both cost-effective and cost-saving in the treatment of faecal incontinence. abstract_id: PUBMED:22156864 Medium-term outcome of sacral nerve modulation for constipation. Background: Sacral nerve modulation has been reported as a minimally invasive and effective treatment for constipation refractory to conservative treatment. Objective: This study aimed to evaluate the efficacy and sustainability of sacral nerve modulation for constipation in the medium term (up to 6 years) and to investigate potential predictors of treatment success. Design: We performed a retrospective review of prospectively collected data. Settings: The study was performed at 2 tertiary-care centers in Europe with expertise in pelvic floor disorders and sacral nerve modulation. Patients: Patients were eligible if they had had symptoms of constipation persisting for at least 1 year, if conservative treatment (dietary modification, laxatives and biofeedback therapy) had failed, and if predefined excluded conditions were not present. Intervention: The first phase of the treatment process was percutaneous nerve evaluation. If this was successful, patients underwent sacral nerve modulation therapy with an implanted device (tined-lead and implantable pulse generator). Main Outcome Measure: Follow-up was performed at 1, 3, 6, and 12 months, and yearly thereafter. Outcome was assessed with the Wexner constipation score. Results: A total of 117 patients (13 men, 104 women) with a mean age of 45.6 (SD, 13.0) years underwent percutaneous nerve evaluation. Of these, 68 patients (58%) had successful percutaneous nerve evaluation and underwent implantation of a device. 
The mean Wexner score was 17.0 (SD, 3.8) at baseline and 10.2 (SD, 5.3) after percutaneous nerve evaluation (p < .001); the improvement was maintained throughout the follow-up period, although the number of patients continuing with sacral nerve modulation at the latest follow-up (median, 37 months; range, 4-92) was only 61 (52% of all patients who underwent percutaneous nerve evaluation). The sole predictive factor of outcome of percutaneous nerve evaluation was age: younger patients were more likely than older patients to have a successful percutaneous nerve evaluation phase. Limitations: The study was limited by a lack of consistent outcome measures. Conclusions: Despite improvement in Wexner scores, at the latest follow-up sacral nerve modulation was only being used by slightly more than 50% of the patients who started the first phase of treatment. Further studies are needed to reassess the efficacy and sustainability of sacral nerve modulation. abstract_id: PUBMED:19966599 Factors associated with percutaneous nerve evaluation and permanent sacral nerve modulation outcome in patients with fecal incontinence. Purpose: Sacral nerve modulation is an established treatment for fecal incontinence. Little is known about predictive factors for successful percutaneous nerve evaluation (or test stimulation) and permanent sacral nerve modulation outcome. The purpose of this retrospective study was to discover predictive factors associated with temporary and permanent stimulation. Methods: We analyzed data from test stimulations performed in patients with fecal incontinence from March 2000 until May 2007. Successful outcome was defined as >50% improvement in incontinence episodes over three weeks. Patients with a successful test stimulation were eligible for permanent sacral nerve modulation implantation. All patients who subsequently had permanent sacral nerve modulation were analyzed. Logistic regression was used to determine the predictive power of baseline demographics and diagnostic variables. Results: Test stimulations were performed in 245 patients (226 females; mean age, 56.6 (standard deviation, 12.8) years). Our analysis showed that older age (P = 0.014), external anal sphincter defects (P = 0.005), and repeated procedures after initial failure (P = 0.001) were significantly related to failure. One hundred seventy-three patients (70.6%) were eligible for permanent sacral nerve modulation implantation. The analysis showed no significant predictive factors related to permanent sacral nerve modulation. Conclusion: Three predictive factors were negatively associated with the outcome of test stimulation: older age, repeated procedures, and a defect in the external anal sphincter. These factors may indicate lower chances of success for test stimulation but do not exclude patients from sacral nerve modulation treatment. Although assessed in a selected patient group, no factors were predictive of the outcome of permanent stimulation. abstract_id: PUBMED:25449719 Evaluation of pediatric upper extremity peripheral nerve injuries. Introduction: The evaluation of motor and sensory function of the upper extremity after a peripheral nerve injury is critical to diagnose the location and extent of nerve injury as well as document functional recovery in children. Purpose: The purpose of this paper is to describe an approach to the evaluation of pediatric upper extremity peripheral nerve injuries through a critical review of currently used tests of sensory and motor function.
Methods: Outcome studies on pediatric upper extremity peripheral nerve injuries in the Medline database were reviewed. Results: The evaluation of the outcome in children less than 10 years of age with an upper extremity peripheral nerve injury includes careful observation of preferred prehension patterns, examination of muscle atrophy and sudomotor function, provocative tests, manual muscle testing and tests of sensory threshold and tactile gnosis. Conclusion: The evaluation of outcome in children with upper extremity peripheral nerve injuries warrants a unique approach. abstract_id: PUBMED:27094892 High Median Nerve Injury: Motor and Sensory Nerve Transfers to Restore Function. This article describes the clinically significant motor and sensory deficits that follow high median nerve injuries and addresses the indications, limitations, and outcomes of nerve transfers when striving to overcome the deficits these patients experience. The preferred surgical reconstructive strategy using motor and sensory nerve transfers, and the surgical techniques used to perform these transfers, are described. abstract_id: PUBMED:35891829 Use of Nerve Transfer Procedures for Motor and Sensory Restoration of a High Median Nerve Injury. High median nerve (HMN) injuries are unusual clinical conditions, but they cause significant disability, limiting the ability of the affected extremity to perform even basic activities of daily living. Even though they can display different degrees of dysfunction due to overlapping innervation and musculature compensation, an early assessment of the existing functional deficits and a timely surgical approach can optimize the long-term outcome. The use of distal nerve transfer procedures has gained popularity since they reduce the distance between the injured zone and the disrupted targets, accelerate nerve regeneration, and subsequently optimize the postoperative motor and sensory recovery. This report describes a patient with a significant segmental loss of the median nerve at the upper third of the left arm after a motor vehicle accident that caused multiple other injuries. The motor deficit of this injury was managed soon after admission with an extensor carpi radialis brevis (ECRB) nerve transfer to the anterior interosseous nerve (AIN). Subsequently, double side-to-side cross-palm nerve allografts between the ulnar and median nerves were utilized to restore the sensory deficit of the HMN lesion. An important functional improvement was obtained with these nerve transfer procedures, and the patient successfully returned to the workforce without limitations. Other surgical options for motor and sensory reconstruction are briefly reviewed. Answer: Yes, sensory and/or motor reactions during percutaneous nerve evaluation can predict the outcome of sacral nerve modulation. A study analyzing percutaneous nerve evaluation procedures found that positive motor responses during the evaluation are highly predictive of a successful outcome of subchronic test stimulation and permanent sacral nerve modulation. Sensory responses were also found to have the same predictive value. Therefore, it is recommended that percutaneous nerve evaluation should be performed in awake patients under local anesthesia to avoid missing those who may benefit from permanent stimulation but do not have a motor response during the procedure (PUBMED:19617755).
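To make the odds-ratio statistics in PUBMED:19617755 concrete, here is a minimal sketch in plain Python. The 2x2 counts below are reconstructed from the reported percentages (208 procedures, 13.9% without a motor response, 72.6% overall success) and should be read as approximate, hypothetical values; they do, however, reproduce the published OR of 1.01 (95% CI 0.42-2.43):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with a Woolf (log-normal) 95% CI for a 2x2 table:
    a/b = successes/failures with both sensory and motor responses,
    c/d = successes/failures with sensory responses only."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Reconstructed (hypothetical) counts: 179 procedures with a motor response
# (130 successes / 49 failures) and 29 with sensory responses only
# (21 successes / 8 failures).
or_, lo, hi = odds_ratio_ci(130, 49, 21, 8)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 1.01, 95% CI 0.42-2.43
```

An interval spanning 1.0 is exactly what the abstract reports: a sensory-only response was not associated with a worse test-stimulation outcome than a combined sensory and motor response.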
Instruction: Outcomes variability in non-emergent esophageal foreign body removal: Is daytime removal better? Abstracts: abstract_id: PUBMED:26292907 Outcomes variability in non-emergent esophageal foreign body removal: Is daytime removal better? Objective: The objective of this study is to investigate differences between esophageal foreign body removal performed during standard operating room hours and those performed after-hours in asymptomatic patients. Methods: A retrospective chart review at a tertiary children's hospital identified 264 cases of patients with non-emergent esophageal foreign bodies between 2006 and 2011. Variables pertaining to procedure and recovery times, hospital charges, complications, length of stay, American Society of Anesthesiologists (ASA) classification, and presence of mucosal injury were summarized and compared between cases performed during standard operating hours and those performed after-hours. Results: Cases performed during standard hours had significantly longer average wait times compared with after-hours cases (13.1 h versus 9.0 h, p<0.001). No other clinical characteristics or outcomes were significantly different between groups. Longer wait times were not associated with mucosal injury or postoperative complications. Conclusion: There were no significant differences in procedure time, charges, or safety in after-hours removal of non-emergent esophageal foreign bodies compared to removal during standard operating hours. OR wait time was about 4 h longer during standard hours compared with after-hours. This study could not assess differences in hospital resource utilization or workforce demands, which may be significant between these two groups. abstract_id: PUBMED:35273870 Emergent Endoscopy for Esophageal Foreign Body Removal: The Impact of Location. Background Timely intervention is essential for the successful removal of ingested foreign bodies. Emergent endoscopy (EGD) is usually performed in the emergency department (ED), operating room (OR), intensive care unit (ICU), or endoscopy suite. However, because the endoscopy suite is not always available, this study investigated the impact of location outside of the endoscopy suite on the successful removal of ingested foreign bodies and other patient outcomes. Methodology We reviewed charts of patients who underwent EGD for foreign body removal at an academic quaternary center between January 01, 2012, and December 31, 2020. We defined successful EGD as retrieval of the foreign body at the first attempt and not requiring subsequent endoscopy or surgical intervention. We performed descriptive and inferential statistical analyses and conducted classification and regression trees to compare endoscopy procedure length (EPL) and hospital length of stay (HLOS) between different locations. Results We analyzed 77 patients, of whom 13 (17%) underwent endoscopy in the ICU, 46 (60%) in the OR, and 18 (23%) in the ED. Endoscopic removal failed in four (5%) patients. Endoscopy length was significantly shorter in the OR (67 (48-122) minutes) versus the ICU (158 (95-166) minutes, P = 0.004) and the ED (111 (92-155) minutes, P = 0.009). Time to procedure was similar if the procedure was performed in the ED (278 minutes), the ICU (331 minutes), or the OR (378 minutes). The median (interquartile range) of HLOS for the OR group (0.87 (0.54-2.03) days) was significantly shorter than the ICU group (2.26 (1.47-6.91) days, P = 0.007).
Conclusions While performing endoscopy for esophageal foreign body removal in the OR may be associated with a shorter EPL and HLOS, no location was inferior for overall outcomes. Further prospective and randomized studies are needed to confirm our findings. abstract_id: PUBMED:30064579 Analysis of 334 Cases of Pediatric Esophageal Foreign Body Removal Suggests that Traditional Methods Have Similar Outcomes Whereas a Magnetic Tip Orogastric Tube Appears to Be an Effective, Efficient, and Safe Technique for Disc Battery Removal. Procedures and outcomes for pediatric esophageal foreign body removal were analyzed. Traditional methods of battery removal were compared with a magnetic tip orogastric tube (MtOGT). A single institution retrospective review from 1997 to 2014 of pediatric patients with esophageal foreign bodies was performed. Balloon extraction with fluoroscopy (performed in 173 patients with 91% success), flexible endoscopy (92% success in 102 patients), and rigid esophagoscopy (95% in 38 patients) had excellent success rates. An MtOGT had 100 per cent success in six disc battery patients, when other methods were more likely to fail, and was the fastest. Power analysis suggested 20 patients in the MtOGT group would be needed for significant savings in procedural time. Thirty-two per cent of all foreign bodies and 95 per cent of batteries caused complications (P = 0.002). Overall, 1.2 per cent had severe complications, whereas 10 per cent of batteries had severe complications (P = 0.04). Each technique, if applied appropriately, can be a reasonable option for esophageal foreign body removal. Magnetic tip orogastric tubes used to extract ferromagnetic objects like disc batteries had the shortest procedure time and highest success rate, although the difference was not statistically significant. Disc batteries require emergent removal and have a significant complication rate. abstract_id: PUBMED:34557374 Esophageal Foreign Body Removal: A Novel Approach. Upper esophageal foreign body impaction is a common clinical presentation and often requires medical attention. The most common foreign bodies encountered in the adult population are food-related, e.g., steak pieces and meat bones. Endoscopic interventions are indicated when the foreign objects fail to pass spontaneously. The standard methods to remove these foreign bodies include push technique and retrieval methods using various endoscopic instruments. However, we report a unique method that was used to remove a large upper esophageal impacted foreign body refractory to removal by standard procedures. abstract_id: PUBMED:36733890 Esophageal foreign body removal under holmium laser-assisted gastroscope: A case report. A common clinical emergency, an esophageal foreign body can lead to esophageal perforation followed by severe complications including aortic injury, mediastinal abscess and airway obstruction, leading to a high rate of mortality. Therefore, fast and effective diagnosis and treatment are of great necessity. In this case, holmium laser-assisted gastroscopy was adopted to remove the foreign body incarcerated in the esophagus, allowing the patient to avoid traumatic and costly surgery. It is a supplement to traditional methods of foreign body removal, and the new combination described in this report may inspire further development and innovation in endoscopic technology.
abstract_id: PUBMED:34741229 A double-scope technique enabled a patient with an esophageal plastic fork foreign body to avoid surgery: a case report and review of the literature. Foreign body ingestion is a common problem, and endoscopic removal is often performed with ancillary equipment. However, long, sharp foreign bodies are much more difficult to remove endoscopically than other objects and require emergent surgery. A 68-year-old man with a history of distal gastrectomy accidentally swallowed a plastic fork. He complained of chest pain at the visit. The plastic fork was located between the thoracic esophagus and remnant stomach. Endoscopic removal of the plastic fork was considered difficult, and surgery was deemed necessary. However, we were able to avoid surgery to remove the object using two endoscopes with hoods and a polypectomy snare. The first endoscope covered the sharp edge with a hood, and the snare grasped the neck of the plastic fork. The second endoscope covered the remaining sharp tip. A single operator held the two endoscopes and the snare and pulled them out together. This new double-scope technique is simple and useful for removing long, sharp foreign bodies, such as forks, from the esophagus. abstract_id: PUBMED:34938868 The practice of foreign body removal from the ear, nose, and upper esophageal in children in Ethiopia: A retrospective descriptive study. Background: Ear, nose, and upper esophageal foreign body (FB) impaction in children is a common in-hospital emergency. There are no clear guidelines regarding the management of ingested FBs. This study aimed to characterize the FBs in terms of type, anatomic site, management outcome, and associated complications. Methods: Retrospective study of children with ear, nose, and upper esophageal FB managed under general anesthesia (GA) in the operating room of Wolkite Hospital in the southern part of Ethiopia between January 2019 and February 2021. Data were collected from the medical chart of the patients using a prepared checklist. The parameters included were age, sex, FB anatomic site, type, management outcome, and associated complications related to FB or procedure modalities. Results: A total of 169 (31.4%) study subjects required GA for the removal of FBs. The mean age was 4.45 ± 3.20 years. Children under 5 years of age comprised 61.5% of total cases. The most common anatomic site of FB impaction was the ear, with 97 cases (57.4%). The most commonly found type of FB was cereals or seeds, which constituted 102 cases (60.35%). The complication rate was 18.35%. Epistaxis (6.51%) was the most common nasal complication, while canal abrasion (5.92%) was the most common aural complication. Conclusion: Ear, nose, and upper esophageal FBs were found more frequently in younger children. The ear was the most common anatomic site of FB impaction followed by the nose and upper esophagus. The most common type of FB was cereals or seeds. Level Of Evidence: 4. abstract_id: PUBMED:37284409 Navigating the Esophagus: Effective Strategies for Foreign Body Removal. Foreign body ingestion is a common medical emergency that can affect individuals of all ages and can be caused by various factors, including accidental ingestion, psychiatric disorders, intellectual disabilities, and substance abuse. The most common site for foreign body lodgment is the upper esophagus, followed by the middle esophagus, stomach, pharynx, lower esophagus, and duodenum.
This article provides a case report of a 43-year-old male patient with a history of schizoaffective disorder and an indwelling suprapubic catheter who presented to the hospital due to foreign body ingestion. After examination, a metal clip from his Foley catheter was found lodged in his esophagus. The patient was intubated for the procedure, and an emergent endoscopic removal was performed to remove the metallic Foley component. No postoperative complications were observed, and the patient was successfully discharged. This case highlights the importance of considering foreign body ingestion in patients with chest pain, dysphagia, and vomiting. Prompt diagnosis and treatment are crucial to prevent potential complications such as perforation or gastrointestinal tract obstruction. The article also emphasizes the need for healthcare providers to know the different risk factors, variations, and common sites for foreign body lodgment to optimize patient care. Furthermore, the article highlights the importance of multidisciplinary care involving psychiatry and surgery to provide comprehensive care to patients with psychiatric disorders who may be at higher risk for foreign body ingestion. In conclusion, foreign body ingestion is a common medical emergency that requires prompt diagnosis and treatment to prevent complications. This case report highlights the successful management of a patient with foreign body ingestion and emphasizes the importance of multidisciplinary care to optimize patient outcomes. abstract_id: PUBMED:27290934 Use of a video laryngoscope to facilitate removal of a long, sharp-pointed blade from the esophagus. Initial management of ingested esophageal foreign bodies involves airway assessment, determination of the requirement for and timing of therapeutic intervention, risk mitigation during removal, and identification of all indicated equipment for retrieval. Long, sharp-pointed objects lodged in the esophagus require emergent attention and should be retrieved endoscopically, if perforation has not occurred. Inducing general anesthesia and rapidly securing the airway can minimize the risk of aspiration, mitigate any effects of tracheal compression, avoid the potential of exacerbating existing trauma, and provide optimal conditions for removal of long, sharp-pointed esophageal foreign bodies. Video laryngoscopy provides improved recognition of anatomical structures in both normal and difficult airways, enabling assessment for hypopharyngeal and glottic trauma resulting from foreign body ingestion. The indirect view of video laryngoscopy also facilitates the coordinated manipulation of the airway by both the anesthesiologist and the surgeon as they visualize the anatomy together while securing the airway and removing the foreign body. abstract_id: PUBMED:25320385 Endoscopic removal of esophageal and ruminal foreign bodies in 5 Holstein calves. Endoscopic removal of esophageal and ruminal foreign bodies was successfully performed in 5 Holstein-Friesian calves under sedation or general anesthesia by using an electrocautery snare or a wire-guided Dormia basket. This report describes the endoscopic manipulations, treatment, and outcomes of esophageal foreign body removal in these calves. Answer: The study with PUBMED:26292907 investigated the differences in outcomes between esophageal foreign body removals performed during standard operating room hours and those performed after-hours in asymptomatic patients.
The results indicated that there were no significant differences in procedure time, charges, or safety between after-hours removal and removal during standard operating hours. However, the wait time for the operating room was about 4 hours longer during standard hours compared to after-hours. The study concluded that longer wait times are not associated with mucosal injury or postoperative complications, suggesting that daytime removal does not necessarily lead to better outcomes compared to after-hours removal in the context of non-emergent esophageal foreign body removal.
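The core analysis behind this record is a two-group comparison of wait times and complication rates. The sketch below is illustrative only: the data are simulated (loosely anchored to the reported 13.1 h and 9.0 h means), and scipy's standard tests stand in for whatever procedures the study authors actually used.

import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact

rng = np.random.default_rng(0)
wait_standard = rng.normal(13.1, 4.0, 150)  # hours, simulated standard-hours cases
wait_after = rng.normal(9.0, 4.0, 114)      # hours, simulated after-hours cases

stat, p_wait = mannwhitneyu(wait_standard, wait_after)
print(f"wait-time difference: U={stat:.0f}, p={p_wait:.4f}")

# 2x2 table of [complication, no complication] per group; counts hypothetical.
table = [[5, 145], [4, 110]]
odds, p_comp = fisher_exact(table)
print(f"complications: OR={odds:.2f}, p={p_comp:.2f}")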
Instruction: Do Talkativeness and Vocal Loudness Correlate With Laryngeal Pathology? Abstracts: abstract_id: PUBMED:26311493 Do Talkativeness and Vocal Loudness Correlate With Laryngeal Pathology? A Study of the Vocal Overdoer/Underdoer Continuum. Objectives: Assess the correlation of self-rating scales of talkativeness and loudness with various types of voice disorders. Design: This is a retrospective study. Methods: A total of 974 patients were analyzed. The cohort study included 430 consecutive patients presenting to the senior author with voice complaints from December 1995 to December 1998. The case-control study added 544 consecutive patients referred to the same examiner from January 1988 to December 1998 for vocal fold examination before thyroid, parathyroid, and carotid surgery. Patient responses on seven-point Likert self-rating scales of talkativeness and loudness were compared with laryngeal disease. Results: Mucosal lesions clearly associated with vibratory trauma are strongly associated with a high self-rating of talkativeness. Laryngeal deconditioning disorders were associated with a low self-rating of talkativeness. Conclusions: Use of a simple self-rating scale of vocal loudness and talkativeness during history taking can reliably orient the examiner to the types of voice disorders likely to be diagnosed subsequently during vocal capability testing and visual laryngeal examination. The high degree of talkativeness and loudness seen in vocal overdoers correlates well with mucosal disorders such as nodules, polyps, capillary ectasia, epidermoid inclusion cysts, and hemorrhage. A lower degree of talkativeness correlates with muscle deconditioning disorders such as vocal fold bowing, atrophy, presbyphonia, and vocal fatigue syndrome. abstract_id: PUBMED:29723332 Vocal risk in preachers: talkativeness, vocal loudness, and knowledge about vocal health and hygiene. Purpose The objective of this study was to investigate the knowledge of preachers about aspects of vocal health and hygiene and to evaluate self-perceived talkativeness and vocal loudness during labor and extra-labor situations, aiming to understand the possibility of vocal risk in these professionals. Methods Fifty male preachers aged 22 to 73 years were evaluated. They responded to two self-assessment questionnaires on vocal health and hygiene and talkativeness and vocal loudness. The results were submitted to statistical analysis. Results The preachers presented satisfactory scores in the Vocal Health and Hygiene Questionnaire; however, their scores in the Scale of Vocal Loudness and Talkativeness were lower in the labor situation compared with the extra-labor situations. The variables length of professional experience as a preacher and extra-labor talkativeness and vocal loudness were also associated with knowledge about vocal health and hygiene. Conclusion Preachers show good knowledge about vocal health and hygiene but are at high risk of vocal disorders due to excessive use of talkativeness and vocal loudness in the work environment. abstract_id: PUBMED:32245662 The Effect of Single Harmonic Tuning on Vocal Loudness. The study addresses the benefit of tuning single harmonics with vocal tract resonances to increase vocal loudness. The loudness of theoretically constructed vocal sounds with variable levels of sound energy in the first, second, and third harmonics is computed on the basis of ISO standard 226:2003.
In comparison to increased loudness with changes in overall spectral slope, it is shown that single harmonic tuning requires a greater range of SPL to produce a similar range of loudness. For example, a 10-40 dB increase in the level of a single harmonic produces less than two doublings of loudness, whereas a spectral slope change from -12 dB/octave to -3 dB/octave can produce a similar doubling of loudness with only a 5 dB SPL increase. abstract_id: PUBMED:33455852 Influence of Loudness on Vocal Stability in the Male Passaggio. Introduction: Vocal registers and the frequency region where registration events occur, the passaggio, have been a focus of scientific research for almost 200 years. In professional tenors, it has been shown before that singing across the passaggio avoiding a register shift and therefore using their stage voice above the passaggio (SVaP) is associated with greater vocal stability than a register change to the falsetto. However, it remains unclear how much different loudness conditions contribute to this vocal stability. Material And Methods: Six professional tenors were asked to perform four pitch glides from A3 to A4 (220-440 Hz) on the vowel [i:]. These glides included (1) the passaggio from modal register to falsetto. The following glides into SVaP were performed under different loudness conditions, (2) mezzoforte (average loudness), (3) pianissimo (as quietly as possible), and (4) fortissimo (the loudest possible). During phonation, high speed videoendoscopy (HSV), electroglottography, and audio signals were recorded simultaneously. The glottal area waveform was derived based on the HSV material. Results: Modal to falsetto transitions were associated with relatively low sound pressure level and rise of open quotients (OQ) for the falsetto. Transitions to SVaP showed a clear dependence on the intended loudness. The OQs were lower the louder the task was. There was no clear evidence that transitions with softer voice showed greater stability of vocal fold oscillation patterns than louder tasks. Conclusions: The vocal fold oscillation patterns show differences among various loudness conditions within the tenors' passaggio but no clear differences with regard to oscillatory stability. abstract_id: PUBMED:26667311 Correlation between personality type and vocal pathology: A nonrandomized case control study. Objectives/hypothesis: In this study we have made an attempt to find out if there is any correlation between type of personality (type A or B) and incidence of vocal pathology, subsequent to a tendency of vocal abuse. We also noted the loudness of speech and rate of speech for both personality types and compared these parameters for each personality type. Study Design: A total of 100 subjects (50 with vocal pathologies and 50 with normal vocal folds) underwent voice and personality assessment, and the above-mentioned factors were compared with statistical methods. Results: It was found that subjects with type A personality had a statistically significant increased incidence of vocal pathology, as compared to those with type B personality (P = .04). The other two parameters (i.e., loudness of speech and rate of speech) were both found to be higher in subjects with type A personality than those with type B, but did not attain statistical significance.
Conclusions: This study shows that there is a very close relationship between personality type and voice quality, and the incidence of vocal abuse and subsequent vocal pathologies are heavily governed by the person's personality traits. Level Of Evidence: 3b. abstract_id: PUBMED:33228812 Vocal fold oscillation pattern changes related to loudness in patients with vocal fold mass lesions. Introduction: Vocal fold mass lesions can affect vocal fold oscillation patterns and therefore voice production. It has been previously observed that perturbation values from audio signals were lower with increased loudness. However, how much the oscillation patterns change with gradual alteration of loudness is not yet fully understood. Material And Methods: Eight patients with vocal fold mass lesions were asked to perform a glide from minimum to maximum loudness on the vowel /i/, ƒo of 125 Hz for male or 250 Hz for female voices. During phonation the subjects were simultaneously recorded with transnasal high speed videoendoscopy (HSV, 20,000 fps), electroglottography (EGG), and an audio recording. Based on the HSV material the Glottal Area Waveform (GAW) was segmented and GAW parameters were computed. Results: The greatest vocal fold irregularities were observed at different values between minimum and maximum sound pressure level. There was a relevant discrepancy between the HSV and EGG derived open quotients. Furthermore, the EGG derived sample entropy and GAW values also evidenced different behavior. Conclusions: The amount of vocal fold irregularity changes with varying loudness. Therefore, any evaluation of the voice should be performed under different loudness conditions. The discrepancy between EGG and GAW values appears to be much stronger in patients with vocal fold mass lesions than in those with normal physiological conditions. Level Of Evidence: 4. abstract_id: PUBMED:8204291 Phonetograms in laryngeal lesions due to vocal abuse. Using this test, we measure maximum and minimum vocal capacity (sound pressure in dB) from the lowest to the highest frequency that a person can emit and sustain. These measurements are represented on an easily interpreted graph that allows the magnitude of the vocal area to be evaluated. We made phonetograms of persons with nodules (3), polyps (16) and Reinke edema (10) and compared their results to those of a group of 25 healthy subjects. The vocal fields of patients with laryngeal polyp and Reinke's edema were severely reduced. abstract_id: PUBMED:11324652 Control of vocal loudness in young and old adults. This study examined the effect of aging on respiratory and laryngeal mechanisms involved in vocal loudness control. Simultaneous measures of subglottal pressure and electromyographic (EMG) activity from the thyroarytenoid (TA), lateral cricoarytenoid (LCA), and cricothyroid (CT) muscles were investigated in young and old individuals while they attempted to phonate at three loudness levels, "soft," "comfortable," and "loud." Voice sound pressure level (SPL) and fundamental frequency (F0) measures were also obtained. Across loudness conditions, subglottal pressure levels were similar for both age groups. Laryngeal EMG measures tended to be lower and more variable for old compared with young individuals. These differences were most apparent for the TA muscle.
Finally, across the three loudness conditions, the old individuals generated SPLs that were lower overall than those produced by the young individuals but modulated loudness levels in a manner similar to that of the young subjects. These findings suggest that the laryngeal mechanism may be more affected than the respiratory system in these old individuals and that these changes may affect vocal loudness levels. abstract_id: PUBMED:29072672 The autoimmune rheumatic disease and laryngeal pathology. Vocal disorders are one manifestation of autoimmune pathological conditions characterized by multiple organ system dysfunction. Laryngeal pathology in this condition has an autoimmune nature; it is highly diverse and poorly explored. The objective of the present work, based on analysis of the relevant literature, was to study clinical manifestations of the autoimmune rheumatic disease affecting the larynx. 'Bamboo nodes' on the vocal folds are a rare manifestation of laryngeal autoimmune disease. We found references to 49 cases of this condition in the available literature. All the patients were women presenting with autoimmune diseases. The present review highlights the problems pertaining to the etiology of 'bamboo nodes' on the vocal folds and the methods for treating this condition. abstract_id: PUBMED:7715379 The cricothyroid muscle does not influence vocal fold position in laryngeal paralysis. The status of the cricothyroid muscle, which is innervated by the superior laryngeal nerve, is believed to influence the vocal fold position in laryngeal paralysis. It is believed that isolated lesions of the recurrent laryngeal nerve generally result in the paralyzed vocal fold assuming a paramedian position but that with lesions of both the superior and recurrent laryngeal nerves, a more lateral (intermediate or cadaveric) vocal fold position can be expected. Twenty-six consecutive patients with unilateral vocal fold paralysis underwent transnasal fiberoptic laryngoscopy (TFL) and laryngeal electromyography (LEMG). By TFL, the vocal fold positions were paramedian in 8 patients, intermediate in 7, and lateral in 11. By LEMG, 13 patients had isolated recurrent laryngeal nerve lesions and 13 patients had combined (superior and recurrent laryngeal nerve) lesions. There was no correlation between the vocal fold position and the status of the cricothyroid muscle, i.e., the status of the cricothyroid muscle by LEMG did not predict the vocal fold position nor did the vocal fold position by TFL predict the site of lesion. In addition, we investigated the possibility that the degree of thyroarytenoid muscle recruitment (tone) might correlate with vocal fold position, but no relation was found. We conclude that 1. the cricothyroid muscle does not predictably influence the position of the vocal fold in unilateral paralysis; 2. thyroarytenoid muscle recruitment (tone) does not appear to influence vocal fold position; and 3. still unidentified and unknown factors may be responsible for determining vocal fold position in laryngeal paralysis. Answer: Yes, talkativeness and vocal loudness do correlate with laryngeal pathology. A study found that mucosal lesions, which are clearly associated with vibratory trauma, are strongly associated with a high self-rating of talkativeness. Conversely, laryngeal deconditioning disorders were associated with a low self-rating of talkativeness (PUBMED:26311493).
Additionally, preachers, who are professional voice users, show good knowledge about vocal health and hygiene but are at high risk of vocal disorders due to excessive talkativeness and vocal loudness in their work environment (PUBMED:29723332). This suggests that individuals who use their voice extensively and loudly are more likely to develop certain types of voice disorders.
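Since the overdoer/underdoer studies relate a seven-point Likert rating to the presence of a lesion, a logistic regression of lesion status on the rating is the natural analysis shape. The Python sketch below simulates such data; all coefficients are assumptions for illustration, not estimates from PUBMED:26311493.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
talkativeness = rng.integers(1, 8, 300)              # seven-point Likert rating
logit = -4.0 + 0.6 * talkativeness                   # assumed true effect
lesion = rng.random(300) < 1 / (1 + np.exp(-logit))  # simulated mucosal lesion

X = sm.add_constant(talkativeness.astype(float))
fit = sm.Logit(lesion.astype(float), X).fit(disp=False)
print(f"OR per scale point: {np.exp(fit.params[1]):.2f}")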
Instruction: Preoperative MR imaging of anal fistulas: Does it really help the surgeon? Abstracts: abstract_id: PUBMED:11152782 Preoperative MR imaging of anal fistulas: Does it really help the surgeon? Purpose: To evaluate the accuracy of magnetic resonance (MR) imaging with a quadrature phased-array coil for the detection of anal fistulas and to evaluate the additional clinical value of preoperative MR imaging, as compared with surgery alone. Materials And Methods: Fifty-six patients with anal fistulas underwent high-spatial-resolution MR imaging. Twenty-four had a primary fistula; 17, a recurrent fistula; and 15, a fistula associated with Crohn disease. MR imaging findings were withheld from the surgeon until the initial surgery ended; they were then disclosed and verified, and surgery was continued when required. Results: MR imaging provided important additional information in 12 (21%) of 56 patients. In patients with Crohn disease, the benefit was 40% (six of 15); in patients with recurrent fistulas, 24% (four of 17); and in patients with primary fistulas, 8% (two of 24). The difference between patients with or without Crohn disease and between patients with a simple fistula versus the rest was significant (P < .05). The sensitivity and specificity for detecting fistula tracks were 100% and 86%, respectively; abscesses, 96% and 97%, respectively; horseshoe fistulas, 100% and 100%, respectively; and internal openings, 96% and 90%, respectively. Conclusion: High-spatial-resolution MR imaging is accurate for detecting anal fistulas. It provides important additional information in patients with Crohn disease-related and recurrent anal fistulas and is recommended in their preoperative work-up. abstract_id: PUBMED:15498901 Clinical examination, endosonography, and MR imaging in preoperative assessment of fistula in ano: comparison with outcome-based reference standard. Purpose: To prospectively evaluate the relative accuracy of digital examination, anal endosonography, and magnetic resonance (MR) imaging for preoperative assessment of fistula in ano by comparison to an outcome-derived reference standard. Materials And Methods: Ethical committee approval and informed consent were obtained. A total of 104 patients who were suspected of having fistula in ano underwent preoperative digital examination, 10-MHz anal endosonography, and body-coil MR imaging. Fistula classification was determined with each modality, with reviewers blinded to findings of other assessments. For fistula classification, an outcome-derived reference standard was based on a combination of subsequent surgical and MR imaging findings and clinical outcome after surgery. The proportion of patients correctly classified and agreement between the preoperative assessment and reference standard were determined with trend tests and kappa statistics, respectively. Results: There was a significant linear trend (P < .001) in the proportion of fistula tracks (n = 108) correctly classified with each modality, as follows: clinical examination, 66 (61%) patients; endosonography, 87 (81%) patients; MR imaging, 97 (90%) patients. Similar trends were found for the correct anatomic classification of abscesses (P < .001), horseshoe extensions (P = .003), and internal openings (n = 99, P < .001); endosonography was used to correctly identify the internal opening in 90 (91%) patients versus 96 (97%) patients with MR imaging.
Agreement between the outcome-derived reference standard and digital examination, endosonography, and MR imaging for classification of the primary track was fair (kappa = 0.38), good (kappa = 0.68), and very good (kappa = 0.84), respectively, and fair (kappa = 0.29), good (kappa = 0.64), and very good (kappa = 0.88), respectively, for classification of abscesses and horseshoe extensions combined. Conclusion: Endosonography with a high-frequency transducer is superior to digital examination for the preoperative classification of fistula in ano. While MR imaging remains superior in all respects, endosonography is a viable alternative for identification of the internal opening. abstract_id: PUBMED:24238135 Magnetic resonance imaging of perianal fistulas. Perianal fistulization is the result of a chronic inflammation of the perianal tissues. A wide spectrum of clinical manifestations, ranging from simple to complex fistulas, can be seen, the latter especially in patients with Crohn disease. Failure to detect secondary tracks and hidden abscesses may lead to therapeutic failure, such as insufficient response to medical treatment and relapse after surgery. Currently, magnetic resonance (MR) imaging is the preferred technique for evaluating perianal fistulas and associated complications. Initially used most often in the preoperative setting, MR imaging now also plays an important role in evaluating the response to medical therapy. abstract_id: PUBMED:12880990 MR imaging of fistula-in-ano. Accurate preoperative assessment of fistula-in-ano is mandatory if the fistula is not to recur. In recent years, MRI has become pre-eminent for fistula assessment and recent studies have shown that not only is MRI more accurate than surgical assessment, but that surgery based on MRI can reduce further disease recurrence by approximately 75%. The main role of MRI is to alert the surgeon to fistula tracks and extensions that would otherwise have gone undetected and, thus, untreated at the time of surgical assessment under general anaesthetic. abstract_id: PUBMED:25728829 Role of tridimensional endoanal ultrasound (3D-EAUS) in the preoperative assessment of perianal sepsis. Purpose: The aim of this study was to evaluate the accuracy of tridimensional endoanal ultrasound (3D-EAUS) in the diagnosis of perianal sepsis comparing the results with the surgical findings, considered as reference standard. Methods: From January 2009 to January 2013, all the patients referred for the assessment and treatment of perianal sepsis with suspected anorectal origin were enrolled in the study. All patients gave informed written consent. Prior to surgery, all the patients underwent anamnestic evaluation, clinical examination, and unenhanced and H2O2-enhanced 3D-EAUS. Surgery was performed by a colorectal surgeon blinded to the 3D-EAUS results. Results: A total of 212 patients with suspected perianal suppurations were assessed during the study period. In 12 patients, the H2O2-enhanced 3D-EAUS was not performed, and so, they were excluded from the study. Very good agreement between 3D-EAUS and examination under anesthesia (EUA) in the classification of primary fistula tracts (kappa = 0.93) and in the identification of fistula internal opening (kappa = 0.97) was found. There was a good concordance (kappa = 0.71) between 3D-EAUS and surgery in the detection of fistula secondary extensions. The overall sensitivity and specificity of 3D-EAUS in the diagnosis of perianal sepsis were 98.3 and 91.3% respectively. 
Conclusion: 3D-EAUS is a safe and reliable technique in the assessment of perianal sepsis. It may assist the surgeon in delineating the fistula tract anatomy and in determining the origin of sepsis, supporting the preoperative planning of definitive and appropriate surgical therapy. abstract_id: PUBMED:25469082 Imaging of anal fistulas: comparison of computed tomographic fistulography and magnetic resonance imaging. The primary importance of magnetic resonance (MR) imaging in evaluating anal fistulas lies in its ability to demonstrate hidden areas of sepsis and secondary extensions in patients with fistula in ano. MR imaging is relatively expensive, so there are many healthcare systems worldwide where access to MR imaging remains restricted. Until recently, computed tomography (CT) has played a limited role in imaging fistula in ano, largely owing to its poor resolution of soft tissue. In this article, the different imaging features of the CT and MRI are compared to demonstrate the relative accuracy of CT fistulography for the preoperative assessment of fistula in ano. CT fistulography and MR imaging have their own advantages for preoperative evaluation of perianal fistula, and can be applied to complement one another when necessary. abstract_id: PUBMED:24119050 Preoperative mapping of fistula-in-ano: a new three-dimensional MRI-based modelling technique. Aim: We aimed to develop an intuitive, interactive, three-dimensional (3D) MRI modelling technique to produce a 3D image of fistula-in-ano. Method: The 3D model was created from standard two-dimensional (2D) MRI sequences to produce an image that is anatomically correct. Individual muscle and soft-tissue layers were extracted from T1-weighted sequences and fistula pathology from short TI inversion recovery (STIR) sequences, to produce two separate volumes. These were then fused using postprocessing software (Vitrea Workstation version 6.3) to generate a 3D model. Results: The final 3D model was incorporated into a PDF file that has an integrated computer aided design (CAD) viewer, allowing the surgeon to rotate it in any direction during preoperative planning or whilst in theatre. Conclusion: As an adjunct to 2D MRI images and the associated radiology report, this model communicates the fistula anatomy to the clinician more clearly and should be particularly useful in complex cases. abstract_id: PUBMED:24030783 Peroxide-enhanced endoanal ultrasound in preoperative assessment of complex fistula-in-ano. Background: In complex fistula-in-ano, preoperative imaging can help identify secondary tracts and abscesses that can be missed, leading to recurrence. We evaluated hydrogen peroxide-enhanced endoanal ultrasound (PEEUS) in the characterization of fistula compared with standard clinical and operative assessment. Methods: Patients with complex fistula-in-ano treated between February 2008 and May 2009 at our institution were prospectively evaluated by PEEUS with recording of the preoperative clinical examination and intraoperative details of the fistula. Of the 135 patients with fistula-in-ano, 68 met the inclusion criteria for complex fistula-in-ano. Correlation of clinical findings and PEEUS to the gold standard intraoperative findings was assessed in characterizing the fistula. The percent agreement between the clinical and PEEUS findings against the gold standard was derived, and the kappa statistic for agreement was determined. Results: The mean age of the cohort was 42.54 ± 10.86 years. 
The fistula tracts were curvilinear, high, and transsphincteric in 16 (23.53%), 8 (11.76%), and 42 (61.76%) patients, respectively. Secondary tracts and associated abscess cavities were seen in 28 (33.82%) and 35 (51.47%) patients, respectively. PEEUS correlated better than clinical examination with regard to site (92.65 vs 79.41%; p < 0.001) and course (91.18 vs 77.94%; p < 0.001) of secondary tract and associated abscesses (89.71 vs 80.88%; p = 0.02). There was a trend of better correlation of PEEUS compared to clinical examination in classifying the primary tract as per Parks' system (88.24 vs 79.41%; p = 0.06), but it did not reach statistical significance. PEEUS and clinical examination were comparable in correlation of the level of the primary tract (kappa: 0.86 vs 0.78; p = 0.22) and the site of internal opening (kappa: 0.97 vs 0.89; p = 0.22). The operative decision was changed in 13 (19.12%) subjects based on PEEUS findings. Conclusions: PEEUS is a feasible and efficient tool in the routine preoperative assessment of complex fistula-in-ano. abstract_id: PUBMED:30607867 Preoperative assessment of simple and complex anorectal fistulas: Tridimensional endoanal ultrasound? Magnetic resonance? Both? Purpose: The purpose of the study is to evaluate the diagnostic value of tridimensional endoanal ultrasound (3D-EAUS) and magnetic resonance (MR) in the preoperative assessment of both simple and complex anorectal fistulas. Methods: All the patients referred for the treatment of anal fistulas were enrolled in this study and underwent, as preoperative assessment, anamnestic evaluation, clinical examination, and unenhanced and H2O2-enhanced 3D-EAUS and MR. The results of imaging evaluation were compared with surgical findings, considered as reference standard. Results: During the study period, 124 patients operated on for anal fistulas underwent complete preoperative imaging assessment. Perfect agreement between 3D-EAUS and surgery in the anal fistulas' severity grading was found (K = 1). The fistulas were classified as simple in 68/126 (53.9%) and complex in 58/126 (46.03%) cases, according to fistulas' Parks' classification and the most recent American Guidelines. In both simple and complex anal fistulas, 3D-EAUS did not show a significantly higher accuracy in the evaluation of internal openings, if compared with MR (P = 0.47; McNemar's Chi-square test). In the complex anal fistulas, MR showed a significantly higher accuracy in the evaluation of secondary extensions if compared with 3D-EAUS (P = 0.041; McNemar's Chi-square test), whereas in the simple anal fistulas, no significant difference was found. Conclusion: In the preoperative work-up of patients with anorectal fistulas, 3D-EAUS may represent the first-line diagnostic tool. In cases of fistulas classified as complex by 3D-EAUS, MR may be indicated as adjunctive diagnostic imaging examination, to more carefully describe the fistulas' complete anatomy. abstract_id: PUBMED:2727338 Computed tomography and anorectal malformations. Their preoperative evaluation. The anatomical patterns of anorectal malformations have so far been studied according to the principles which inspire Peña's technique for the surgical treatment of anorectal anomalies. Thus, the diagnostic study of anorectal malformations has been considered by the authors not a work of classification, but one of identification.
Among the diagnostic procedures in use in our Institute, preoperative CT of the pelvis is performed to assess the presence and define the development of the sphincteric muscular structures, towards a prognostic evaluation of continence, the major long-term goal. Seven patients, aged 1-30 months, with anorectal anomalies were studied with preoperative CT of the pelvis. CT scans showed well-developed sphincteric muscles in 3 patients (2 with rectourethral and 1 with rectovestibular fistulas), poorly developed muscular structures in 3 patients (with rectovaginal, cloacal and prostatic fistulas), and absence of muscular structures in 1 case (with rectovesical fistula). CT findings were always confirmed at surgery, except for the case with rectovesical malformation where CT scans did not allow identification of the thin sphincteric musculature. The authors believe preoperative CT of the pelvis, together with other clinical and radiological examinations, to be a valid means in the prognostic evaluation of continence. Answer: Preoperative MR imaging of anal fistulas does indeed help the surgeon. High-spatial-resolution MR imaging has been shown to be accurate for detecting anal fistulas, providing important additional information in patients with Crohn disease-related and recurrent anal fistulas. It is recommended in their preoperative work-up due to its high sensitivity and specificity for detecting fistula tracks, abscesses, horseshoe fistulas, and internal openings (PUBMED:11152782). MR imaging is superior to both clinical examination and endosonography in the preoperative classification of fistula in ano, with a very good agreement with the outcome-derived reference standard for classification of the primary track and abscesses/horseshoe extensions (PUBMED:15498901).
Instruction: Does active dissemination of evidence result in faster knowledge transfer than passive diffusion? Abstracts: abstract_id: PUBMED:24694945 Does active dissemination of evidence result in faster knowledge transfer than passive diffusion?: An analysis of trends of the management of pediatric asthma and croup in US emergency departments from 1995 to 2009. Objective: This study aimed to compare knowledge transfer (KT) in the emergency department (ED) management of pediatric asthma and croup by measuring trends in corticosteroid use for both conditions in EDs. Methods: A retrospective, cross-sectional study of the National Hospital Ambulatory Medical Care Survey data between 1995 and 2009 of corticosteroid use at ED visits for asthma or croup was conducted. Odds ratios (OR) were calculated using logistic regression. Trends over time were compared using an interaction term between disease and year and were adjusted for all other covariates in the model. We included children aged 2 to 18 years with asthma who received albuterol and were triaged emergent/urgent. Children aged between 3 months to 6 years with croup were included. The main outcome measure was the administration of corticosteroids in the ED or as a prescription at the ED visit. Results: The corticosteroid use in asthma visits increased from 44% to 67% and from 32% to 56% for croup. After adjusting for patient and hospital factors, this trend was significant both for asthma (OR, 1.07; 95% confidence interval [CI], 1.04-1.10) and croup (OR, 1.07; 95% CI, 1.03-1.12). There was no statistical difference between the 2 trends (P = 0.69). Hospital location in a metropolitan statistical area was associated with increased corticosteroid use in asthma (OR, 1.76; 95% CI, 1.10-2.82). Factors including sex, ethnicity, insurance, or region of the country were not significantly associated with corticosteroid use. Conclusions: During a 15-year period, knowledge transfer by passive diffusion or active guideline dissemination resulted in similar trends of corticosteroid use for the management of pediatric asthma and croup. abstract_id: PUBMED:19705558 Diffusion theory and knowledge dissemination, utilization, and integration in public health. Legislators and their scientific beneficiaries express growing concerns that the fruits of their investment in health research are not reaching the public, policy makers, and practitioners with evidence-based practices. Practitioners and the public lament the lack of relevance and fit of evidence that reaches them and barriers to their implementation of it. Much has been written about this gap in medicine, much less in public health. We review the concepts that have guided or misguided public health in their attempts to bridge science and practice through dissemination and implementation. Beginning with diffusion theory, which inspired much of public health's work on dissemination, we compare diffusion, dissemination, and implementation with related notions that have served other fields in bridging science and practice. Finally, we suggest ways to blend diffusion with other theory and evidence in guiding a more decentralized approach to dissemination and implementation in public health, including changes in the ways we produce the science itself. abstract_id: PUBMED:17850495 Illuminating the processes of knowledge transfer in nursing. 
Rationale: Over the past 10 years, there has been a propensity to translate research findings and evidence into clinical practice, and concepts such as knowledge transfer, research dissemination, research utilization, and evidence-based practice have been described in the nursing literature. Aim: This manuscript shows a selective review of the definitions and utilization of these concepts and offers a perspective on their interrelationships by indicating how knowledge transfer processes are the basis of all the concepts under review. Findings: Definitions and utilization of knowledge transfer in the literature have been influenced by educational and social perspectives and indicate two important processes that are rooted in the mechanisms of research dissemination, research utilization, and evidence-based practice. These processes refer to a cognitive and an interpersonal dimension. Knowledge transfer underlies a process involving cognitive resources as well as an interpersonal process where the knowledge is transferred between individuals or groups of individuals. Conclusion And Implications: This manuscript can contribute to our understanding of the theoretical foundations linking these concepts and these processes by comparing and contrasting them. It also shows the value and empirical importance of the cognitive and interpersonal processes of knowledge transfer by which research findings and evidence can be successfully translated and implemented into the nursing clinical practice. abstract_id: PUBMED:37326457 Active and passive exploration for spatial knowledge acquisition: A meta-analysis. Literature reported mixed evidence on whether active exploration benefits spatial knowledge acquisition over passive exploration. Active spatial learning typically involves at least physical control of one's movement or navigation decision-making, while passive participants merely observe during exploration. To quantify the effects of active exploration in learning large-scale, unfamiliar environments, we analysed previous findings with the multi-level meta-analytical model. Potential moderators were identified and examined for their contributions to the variability in effect sizes. Of the 128 effect sizes retrieved from 33 experiments, we observed a small to moderate advantage of active exploration over passive observation. Important moderators include gender composition, decision-making, types of spatial knowledge, and matched visual information. We discussed the implications of the results along with the limitations. abstract_id: PUBMED:16979468 Evidence-based approaches to dissemination and diffusion of physical activity interventions. With the increasing availability of effective, evidence-based physical activity interventions, widespread diffusion is needed. We examine conceptual foundations for research on dissemination and diffusion of physical activity interventions; describe two school-based program examples; review examples of dissemination and diffusion research on other health behaviors; and examine policies that may accelerate the diffusion process. Lack of dissemination and diffusion evaluation research and policy advocacy is one of the factors limiting the impact of evidence-based physical activity interventions on public health. There is the need to collaborate with policy experts from other fields to improve the interdisciplinary science base for dissemination and diffusion. 
The promise of widespread adoption of evidence-based physical activity interventions to improve public health is sufficient to justify devotion of substantial resources to the relevant research on dissemination and diffusion. abstract_id: PUBMED:20004552 Moving knowledge to action through dissemination and exchange. Objective: The objective of this article is to discuss the knowledge dissemination and exchange components of the knowledge translation process that includes synthesis, dissemination, exchange, and ethically sound application of knowledge. This article presents and discusses approaches to knowledge dissemination and exchange and provides a summary of factors that appear to influence the effectiveness of these processes. It aims to provide practical information for researchers and knowledge users as they consider what to include in dissemination and exchange plans developed as part of grant applications. Study Design And Setting: Not relevant. Results And Conclusions: Dissemination is targeting research findings to specific audiences. Dissemination activities should be carefully and appropriately considered and outlined in a dissemination plan focused on the needs of the audience who will use the knowledge. Researchers should engage knowledge users to craft messages and help disseminate research findings. Knowledge brokers, networks, and communities of practice hold promise as innovative ways to disseminate and facilitate the application of knowledge. Knowledge exchange or integrated knowledge translation involves active collaboration and exchange between researchers and knowledge users throughout the research process. abstract_id: PUBMED:37438787 IKT Guiding Principles: demonstration of diffusion and dissemination in partnership. Introduction: Integrated knowledge translation (IKT) is a partnered approach to research that aims to ensure research findings are applied in practice and policy. IKT can be used during diffusion and dissemination of research findings. However, there is a lack of understanding how an IKT approach can support the diffusion and dissemination of research findings. In this study, we documented and described the processes and outcomes of an IKT approach to diffusing and disseminating the findings of consensus recommendations for conducting spinal cord injury research. Methods: Communication of the IKT Guiding Principles in two phases: a diffusion phase during the first 102 days from the manuscript's publication, followed by a 1147 day active dissemination phase. A record of all inputs was kept and all activities were tracked by monitoring partnership communication, a partnership tracking survey, a project curriculum vitae, and team emails. Awareness outcomes were tracked through Google Analytics and a citation-forward search. Awareness includes the website accesses, the number of downloads, and the number of citations in the 29 month period following publication. Results: In the diffusion period, the recommendations were viewed 60 times from 4 different countries, and 4 new downloads. In the dissemination period, the recommendations were viewed 1109 times from 39 different countries, 386 new downloads, and 54 citations. Overall, during dissemination there was a 17.5% increase in new visitors to the website a month and a 95.5% increase in downloads compared to diffusion. Conclusion: This project provides an overview of an IKT approach to diffusion and dissemination. 
Overall, IKT may be helpful for increasing awareness of research findings faster; however, more research is needed to understand best practices and the the impact of an IKT approach on the diffusion and dissemination versus a non-partnered approach. abstract_id: PUBMED:26294458 Comparing three knowledge communication strategies - Diffusion, Dissemination and Translation - through randomized controlled studies. This paper describes a series of three randomized controlled case studies comparing the effectiveness of three strategies for communicating new research-based knowledge (Diffusion, Dissemination, Translation), to different Assistive Technology (AT) stakeholder groups. Pre and post intervention measures for level of knowledge use (unaware, aware, interested, using) via the LOKUS instrument, assessed the relative effectiveness of the three strategies. The latter two approaches were both more effective than diffusion but also equally effective. The results question the value added by tailoring research findings to specific audiences, and instead supports the critical yet neglected role for relevance in determining knowledge use by stakeholders. abstract_id: PUBMED:26001482 Active migration and passive transport of malaria parasites. Malaria parasites undergo a complex life cycle between their hosts and vectors. During this cycle the parasites invade different types of cells, migrate across barriers, and transfer from one host to another. Recent literature hints at a misunderstanding of the difference between active, parasite-driven migration and passive, circulation-driven movement of the parasite or parasite-infected cells in the various bodily fluids of mosquito and mammalian hosts. Because both active migration and passive transport could be targeted in different ways to interfere with the parasite, a distinction between the two ways the parasite uses to get from one location to another is essential. We discuss the two types of motion needed for parasite dissemination and elaborate on how they could be targeted by future vaccines or drugs. abstract_id: PUBMED:18034664 Evaluation of two evidence-based knowledge transfer interventions for physicians. A cluster randomized controlled factorial design trial: the CardioDAS Study. To investigate the potential benefits of two modes of evidence-based knowledge transfer ('active' and 'passive' modes) in terms of improvement of intention of prescription, knowledge, and real prescription in practice, we performed an open randomized controlled trial (CardioDAS) using a factorial design (two tested interventions: 'active' and 'passive' knowledge transfer) and a hierarchical structure (cluster of physicians for each department level). The participants were cardiologists working in French public hospitals. In the 'passive' transfer group, cardiologists received evidence-based knowledge material (available on Internet) every week for a duration of 1 year. In the 'active' transfer group, two knowledge brokers (EA, PN) visited the participating departments (every 2 months for 1 year, 2 h per visit). The primary outcome consisted in the adjusted absolute mean variation of score (difference between post- and pre-study session) of answers to simulated cases assessing the intention to prescribe. Secondary outcomes were the variation of answers to a multiple-choice questionnaire (MCQ) assessing knowledge and of the conformity of real prescriptions to evidence-based reference assessing the behavioral change. 
Twenty-two French units (departments) of cardiology were randomized (72 participating cardiologists). In the 'active' transfer group, the primary outcome was more improved than that in the control (P = 0.031 at the department level, absolute mean improvement of 5 points/100). The change in knowledge transfer (MCQ) was also significant (P = 0.039 at the department level, absolute mean improvement of 6 points/100). However, no benefit was shown in terms of prescription conformity to evidence. For the 'passive' mode of knowledge transfer and for the three outcomes considered, no improvement was identified. CardioDAS findings confirm that 'active' knowledge transfer has some impact on participants' intent to prescribe and knowledge, but no effect on behavioral outcome. 'Passive' transfer seems far less efficient. In addition, the size of the benefit remains small and its consequences limited in practice. Answer: The question of whether active dissemination of evidence results in faster knowledge transfer than passive diffusion has been explored in various studies, with mixed results. One study analyzing trends in the management of pediatric asthma and croup in US emergency departments from 1995 to 2009 found that knowledge transfer by passive diffusion or active guideline dissemination resulted in similar trends of corticosteroid use for both conditions (PUBMED:24694945). This suggests that in this particular context, active dissemination did not significantly outpace passive diffusion in terms of knowledge transfer. However, another study, the CardioDAS Study, which evaluated two evidence-based knowledge transfer interventions for physicians, found that 'active' knowledge transfer had some impact on participants' intent to prescribe and knowledge, but no effect on behavioral outcome. 'Passive' transfer was found to be far less efficient (PUBMED:18034664). This indicates that active dissemination can be more effective than passive diffusion in certain scenarios, particularly in improving knowledge and intent to prescribe among physicians. A meta-analysis comparing active and passive exploration for spatial knowledge acquisition found a small to moderate advantage of active exploration over passive observation (PUBMED:37326457). While this study is not directly related to the dissemination of evidence in a healthcare or research setting, it does suggest that active engagement can lead to better knowledge acquisition. In the context of integrated knowledge translation (IKT), an IKT approach to diffusing and disseminating the findings of consensus recommendations for conducting spinal cord injury research showed that active dissemination increased awareness of research findings faster than diffusion (PUBMED:37438787). This supports the idea that active strategies can accelerate the spread of knowledge. Overall, while some studies suggest that active dissemination can be more effective than passive diffusion in certain contexts, the evidence is not uniformly conclusive. Factors such as the type of knowledge being transferred, the audience, and the context of the dissemination may influence the effectiveness of active versus passive strategies (PUBMED:19705558, PUBMED:17850495, PUBMED:20004552, PUBMED:26294458, PUBMED:26001482, PUBMED:16979468). Therefore, it may not be possible to make a definitive statement that active dissemination always results in faster knowledge transfer than passive diffusion without considering these variables.
Instruction: Does comorbidity explain the ethnic inequalities in cervical cancer survival in New Zealand? Abstracts: abstract_id: PUBMED:21486460 Does comorbidity explain the ethnic inequalities in cervical cancer survival in New Zealand? A retrospective cohort study. Background: There are large ethnic differences in cervical cancer survival in New Zealand that are only partly explained by stage at diagnosis. We investigated the association of comorbidity with cervical cancer survival, and whether comorbidity accounted for the previously observed ethnic differences in survival. Methods: The study involved 1,594 cervical cancer cases registered during 1994-2005. Comorbidity was measured using hospital events data and was classified using the Elixhauser instrument; effects on survival of individual comorbid conditions from the Elixhauser instrument were also assessed. Cox regression was used to estimate adjusted cervical cancer mortality hazard ratios (HRs). Results: Comorbidity during the year before diagnosis was associated with cervical cancer-specific survival: those with an Elixhauser count of ≥3 (compared with a count of zero) had a HR of 2.17 (1.32-3.56). The HR per unit of Elixhauser count was 1.25 (1.11-1.40). However, adjustment for the Elixhauser instrument made no difference to the mortality HRs for Māori and Asian women (compared to 'Other' women), and made only a trivial difference to that for Pacific women. In contrast, concurrent adjustment for 12 individual comorbid conditions from the Elixhauser instrument reduced the Māori HR from 1.56 (1.19-2.05) to 1.44 (1.09-1.89), i.e. a reduction in the excess risk of 21%; and reduced the Pacific HR from 1.95 (1.21-3.13) to 1.62 (0.98-2.68), i.e. a reduction in the excess risk of 35%. Conclusions: Comorbidity is associated with cervical cancer-specific survival in New Zealand, but accounts for only a moderate proportion of the ethnic differences in survival. abstract_id: PUBMED:19223561 Socioeconomic inequalities in cancer survival in New Zealand: the role of extent of disease at diagnosis. We examined socioeconomic inequalities in cancer survival in New Zealand among 132,006 people ages 15 to 99 years who had a cancer registered (1994-2003) and were followed up to 2004. Relative survival rates (RSR) were calculated using deprivation-specific life tables. A census-based measure of socioeconomic position (New Zealand deprivation based on the 1996 census) based on residence at the time of cancer registration was used. All RSRs were age-standardized, and further standardization was used to investigate the effect of extent of disease at diagnosis on survival. Weighted linear regression was used to estimate the deprivation gap (slope index of inequality) between the most and least deprived cases. Socioeconomic inequalities in cancer survival were evident for all of the major cancer sites, with the deprivation gap being particularly high for prostate (-0.15), kidney and uterus (both -0.14), bladder (-0.12), colorectum (-0.10), and brain (+0.10). Accounting for extent of disease explained some of the inequalities in survival from breast and colorectal cancer and melanoma and all of the deprivation gaps in survival of cervical cancer; however, it did not affect RSRs for cancers of the kidney, uterus, and brain. No substantial differences between the total compared with the non-Māori population were found, indicating that the findings were not due to confounding by ethnicity. 
In summary, socioeconomic disparities in survival were consistent for nearly all cancer sites, persisted in ethnic-specific analyses, and were only partially explained by differential extent of disease at diagnosis. Further investigation of reasons for persisting inequalities is required. abstract_id: PUBMED:19847659 Determinants of inequalities in cervical cancer stage at diagnosis and survival in New Zealand. Objective: The aim of this study is to assess whether ethnic inequalities in cervical cancer mortality are due to differences in survival independent of stage and age at diagnosis, and to assess the contribution of screening to stage at diagnosis. Methods: Demographic data and cervical screening history were collected for 402 women with histologically proven primary invasive cervical cancer, diagnosed in New Zealand between 1 January 2000 and 30 September 2002. Date of death was available for women who died up to 30 September 2004. Results: A Cox proportional hazard model showed that, after adjusting for age, the Māori mortality rate was 1.80 times (95% CI 1.07-3.04) that of non-Māori. This reduced to 1.25 (95% CI 0.74-2.11) when stage at diagnosis was also adjusted for. Among determinants of late stage at diagnosis, older age and being Māori significantly increased the risk, while screening was protective. Conclusions: These results indicate that later stage at diagnosis is the main determinant of Māori women's higher mortality from cervical cancer. Improving cervical screening among Māori women would reduce stage at diagnosis and therefore ethnic inequalities in mortality. abstract_id: PUBMED:22504054 Which factors account for the ethnic inequalities in stage at diagnosis and cervical cancer survival in New Zealand? Objective: There are substantial ethnic inequalities in stage at diagnosis and cervical cancer survival in New Zealand. We assessed what proportions of these differences were due to screening history (for the analyses of late stage diagnosis), stage at diagnosis (for the analyses of survival), comorbid conditions (for the analyses of survival), and travel time to the nearest General Practitioner and cancer centre. Methods: The study involved 1594 cervical cancer cases registered during 1994-2005. We used G-computation to assess the validity of the estimates obtained by standard logistic regression methods. Results: Māori women had a higher risk of late stage diagnosis compared with 'Other' (mainly European) women (odds ratio (OR) = 2.71; 95% confidence interval 1.98, 3.72); this decreased only slightly (OR 2.39; 1.72, 3.30) after adjustment for screening history, and travel time to the nearest General Practitioner and cancer centre. In contrast, the (non-significantly) elevated risk in Pacific women (1.39; 0.76, 2.54) disappeared almost completely when adjusted for the same factors (1.06; 0.57, 1.96). The hazard ratio of mortality for cervical cancer for Māori women was 2.10 (1.61, 2.73) and decreased to 1.45 (1.10, 1.92) after adjustment for stage at diagnosis, comorbid conditions, and travel time to the nearest General Practitioner and cancer centre; the corresponding estimates for Pacific women were 1.96 (1.23, 3.13) and 1.55 (0.93, 2.57). The G-computation analyses gave similar findings. Conclusions: The excess relative risk of late stage diagnosis in Māori women remains largely unexplained, while more than half of the excess relative risk of mortality in Māori and Pacific women is explained by differences in stage at diagnosis and comorbid conditions. 
abstract_id: PUBMED:19808843 Does screening history explain the ethnic differences in stage at diagnosis of cervical cancer in New Zealand? Background: There are ethnic disparities in cervical cancer survival in New Zealand. The objectives of this study were to assess the associations of screening history, ethnicity, socio-economic status (SES) and rural residence with stage at diagnosis in women diagnosed with cervical cancer in New Zealand during 1994-2005. Methods: The 2323 cases were categorized as 'ever screened' if they had had at least one smear prior to 6 months before diagnosis, and as 'regular screening' if they had had no more than 36 months between any two smears in the period 6-114 months before diagnosis. Logistic regression was used to estimate the associations of screening history, ethnicity, SES and urban/rural residence with stage at diagnosis. Results: The percentages 'ever screened' were 43.3% overall, 24.8% in Pacific, 30.5% in Asian, 40.6% in Māori and 46.1% in 'Other' women. The corresponding estimates for 'regular screening' were 14.0, 5.7, 7.8, 12.5 and 15.3%. Women with 'regular screening' had a lower risk of late stage diagnosis [odds ratio (OR) 0.16, 95% confidence interval (CI) 0.10-0.26], and the effect was greater for squamous cell carcinoma (OR 0.12, 95% CI 0.07-0.23) than for adenocarcinoma (OR 0.32, 95% CI 0.13-0.82). The increased risk of late-stage diagnosis (OR 2.72, 95% CI 1.99-3.72) in Māori (compared with 'Other') women decreased only slightly when adjusted for screening history (OR 2.45, 95% CI 1.77-3.39). Conclusions: Over half of cases had not been 'ever screened'. Regular screening substantially lowered the risk of being diagnosed at a late stage. However, screening history does not appear to explain the ethnic differences in stage at diagnosis. abstract_id: PUBMED:25530328 Comparison of cancer survival in New Zealand and Australia, 2006-2010. Background And Aims: Previous studies have shown substantially higher mortality rates from cancer in New Zealand compared to Australia, but these studies have not included data on patient survival. This study compares the survival of cancer patients diagnosed in 2006-10 in the whole populations of New Zealand and Australia. Method: Identical period survival methods were used to calculate relative survival ratios for all cancers combined, and for 18 cancers each accounting for more than 50 deaths per year in New Zealand, from 1 to 10 years from diagnosis. Results: Cancer survival was lower in New Zealand, with 5-year relative survival being 4.2% lower in women, and 3.8% lower in men for all cancers combined. Of 18 cancers, 14 showed lower survival in New Zealand; the exceptions, with similar survival in each country, being melanoma, myeloma, mesothelioma, and cervical cancer. For most cancers, the differences in survival were maximum at 1 year after diagnosis, becoming smaller later; however, for breast cancer, the survival difference increased with time after diagnosis. Conclusion: The lower survival in New Zealand, and the higher mortality rates shown earlier, suggest that further improvements in recognition, diagnosis, and treatment of cancer in New Zealand should be possible. As the survival differences are seen soon after diagnosis, issues of early management in primary care and time intervals to diagnosis and treatment may be particularly important. 
abstract_id: PUBMED:27669745 Ethnic inequalities in cancer incidence and mortality: census-linked cohort studies with 87 million years of person-time follow-up. Background: Cancer makes up a large and increasing proportion of excess mortality for indigenous, marginalised and socioeconomically deprived populations, and much of this inequality is preventable. This study aimed to determine which cancers give rise to changing ethnic inequalities over time. Methods: New Zealand census data from 1981, 1986, 1991, 1996, 2001, and 2006 were all probabilistically linked to three to five subsequent years of mortality (68 million person-years) and cancer registrations (87 million person-years) and weighted for linkage bias. Age-standardised rate differences (SRDs) for Māori (indigenous) and Pacific peoples, each compared to European/Other, were decomposed by cancer type. Results: The absolute size and percentage of the cancer contribution to excess mortality increased from 1981-86 to 2006-11 in Māori males (SRD 72.5 to 102.0 per 100,000) and females (SRD 72.2 to 109.4), and Pacific females (SRD -9.8 to 42.2), each compared to European/Other. Specifically, excess mortality (SRDs) increased for breast cancer in Māori females (linear trend p < 0.01) and for prostate (p < 0.01) and colorectal cancers (p < 0.01) in Māori males. The incidence gap (SRDs) increased for breast (Māori and Pacific females p < 0.01), endometrial (Pacific females p < 0.01) and liver cancers (Māori males p = 0.04), while for cervical cancer it decreased (Māori females p = 0.03). The colorectal cancer incidence gap, which formerly favoured Māori, decreased for Māori males and females (p < 0.01). The greatest contributors to absolute inequalities (SRDs) in mortality in 2006-11 were lung cancer (Māori males 50%, Māori females 44%, Pacific males 81%), breast cancer (Māori females 18%, Pacific females 23%) and stomach cancers (Māori males 9%, Pacific males 16%, Pacific females 20%). The top contributors to the ethnic gap in cancer incidence were lung, breast, stomach, endometrial and liver cancer. Conclusions: A transition is occurring in what diseases contribute to inequalities. The increasing excess incidence and mortality rates in several obesity- and health care access-related cancers provide a sentinel warning of the emerging drivers of ethnic inequalities. Action to further address inequalities in cancer burden needs to be multi-pronged, with attention to enhanced control of tobacco, obesity, and carcinogenic infectious agents, and a focus on addressing access to effective screening and quality health care. abstract_id: PUBMED:23331365 Improving survival disparities in cervical cancer between Māori and non-Māori women in New Zealand: a national retrospective cohort study. Objective: Māori women in New Zealand have higher incidence of and mortality from cervical cancer than non-Māori women; however, limited research has examined differences in treatment and survival between these groups. This study aims to determine if ethnic disparities in treatment and survival exist among a cohort of Māori and non-Māori women with cervical cancer. Methods: A retrospective cohort study of 1911 women (344 Māori and 1567 non-Māori) identified from the New Zealand Cancer Register with cervical cancer (adenocarcinoma, adenosquamous or squamous cell carcinoma) between 1 January 1996 and 31 December 2006.
Results: Māori women with cervical cancer had a higher receipt of total hysterectomies, and similar receipt of radical hysterectomies and brachytherapy as primary treatment, compared to non-Māori women (age and stage adjusted). Over the cohort period, Māori women had poorer cancer-specific survival than non-Māori women (mortality hazard ratio (HR) 2.07, 95% confidence interval (CI): 1.63-2.62). From 1996 to 2005, the survival for Māori improved significantly relative to non-Māori. Conclusion: Māori continue to have higher incidence and mortality than non-Māori from cervical cancer, although disparities are improving. Survival disparities are also improving. Treatment (as measured) by ethnicity is similar. Implications: Primary prevention and early detection remain key interventions for addressing Māori needs and reducing inequalities in cervical cancer in New Zealand. abstract_id: PUBMED:35129595 Ethnic Differences in Cancer Rates Among Adults With Type 2 Diabetes in New Zealand From 1994 to 2018. Importance: People with type 2 diabetes have greater risk for some site-specific cancers, and risks of cancers differ among racial and ethnic groups in the general population of Aotearoa New Zealand. The extent of ethnic disparities in cancer risks among people with type 2 diabetes in New Zealand is unclear. Objective: To compare the risks of 21 common adult cancers among Māori, Pasifika, and New Zealand European individuals with type 2 diabetes in New Zealand from 1994 to 2018. Design, Setting, And Participants: This population-based, matched cohort study used data from the primary care audit program in Auckland, New Zealand, linked with national cancer, death, and hospitalization registration databases, collected from January 1, 1994, to July 31, 2018, with follow-up data obtained through December 31, 2019. Using a tapered matching method to balance potential confounders (sociodemographic characteristics, lifestyle, anthropometric and clinical measurements, treatments [antidiabetes, antihypertensive, lipid-lowering, and anticoagulant], period effects, and recorded duration of diabetes), comparative cohorts were formed between New Zealand European and Māori and New Zealand European and Pasifika individuals aged 18 years or older with type 2 diabetes. Sex-specific matched cohorts were formed for sex-specific cancers. Exposures: Māori, Pasifika, and New Zealand European (reference group) ethnicity. Main Outcomes And Measures: The incidence rates of 21 common cancers recorded in nationally linked databases between 1994 and 2018 were the main outcomes. Weighted Cox proportional hazards regression was used to assess ethnic differences in risk of each cancer. Results: A total of 33 524 adults were included: 15 469 New Zealand European (mean [SD] age, 61.6 [13.2] years; 8522 [55.1%] male), 6656 Māori (mean [SD] age, 51.2 [12.4] years; 3345 [50.3%] female), and 11 399 Pasifika (mean [SD] age, 52.8 [12.7] years; 5994 [52.6%] female) individuals. In the matched New Zealand European and Māori cohort (New Zealand European: 8361 individuals; mean [SD] age, 58.9 [12.9] years; 4595 [55.0%] male; Māori: 5039 individuals; mean [SD] age, 51.4 [12.3] years; 2542 [50.5%] male), significant differences between New Zealand European and Māori individuals were identified in the risk for 7 cancers.
Compared with New Zealand European individuals, the hazard ratios (HRs) among Māori individuals were 15.36 (95% CI, 4.50-52.34) for thyroid cancer, 7.94 (95% CI, 1.57-40.24) for gallbladder cancer, 4.81 (95% CI, 1.08-21.42) for cervical cancer (females only), 1.97 (95% CI, 1.30-2.99) for lung cancer, 1.81 (95% CI, 1.08-3.03) for liver cancer, 0.56 (95% CI, 0.35-0.90) for colon cancer, and 0.11 (95% CI, 0.04-0.27) for malignant melanoma. In the matched New Zealand European and Pasifika cohort (New Zealand European: 9340 individuals; mean [SD] age, 60.6 [13.1] years; 4885 [52.3%] male; Pasifika: 8828 individuals; mean [SD] age, 53.1 [12.6] years; 4612 [52.2%] female), significant differences between New Zealand European and Pasifika individuals were identified for 6 cancers. Compared with New Zealand European individuals, HRs among Pasifika individuals were 25.10 (95% CI, 3.14-200.63) for gallbladder cancer, 4.47 (95% CI, 1.25-16.03) for thyroid cancer, 0.48 (95% CI, 0.30-0.78) for colon cancer, 0.21 (95% CI, 0.09-0.48) for rectal cancer, 0.21 (95% CI, 0.07-0.65) for malignant melanoma, and 0.01 (95% CI, 0.01-0.10) for bladder cancer. Conclusions And Relevance: In this cohort study, differences in the risk of 21 common cancers were found between New Zealand European, Māori, and Pasifika groups of adults with type 2 diabetes in New Zealand from 1994 to 2018. Research into the mechanisms underlying these differences, as well as additional screening strategies (eg, for thyroid and gallbladder cancers), appears to be warranted. abstract_id: PUBMED:19580365 Demographic differences in stage at diagnosis and cervical cancer survival in New Zealand, 1994-2005. Objective: To investigate ethnic, socioeconomic, and urban/rural differences in stage at diagnosis and cervical cancer survival in New Zealand. Methods: The study involved 1594 cervical cancer cases registered during 1994-2005. Cox regression was used to estimate adjusted cervical cancer mortality hazard ratios (HRs). Results: Māori and Pacific women had higher death rates than Other (predominantly European) women, with age and year of diagnosis adjusted HRs of 2.15 (95% CI 1.68-2.75) and 1.98 (95% CI 1.25-3.13), respectively, whereas Asian women had a lower (not statistically significant) risk (0.81, 95% CI 0.47-1.42). Adjustment for stage reduced the HR in Māori to 1.62 (95% CI 1.25-2.09), but there was little change for Pacific or Asian women. These patterns varied over time: for cases diagnosed during 1994-1997, the HR for Māori women was 2.34 (95% CI 1.68-3.27), which reduced to 1.83 (95% CI 1.29-2.60) when adjusted for stage; for cases diagnosed during 2002-2005, the corresponding estimates were 1.54 (95% CI 0.75-3.13) and 0.90 (95% CI 0.43-1.89). Socioeconomic status and urban/rural residence had only marginal effects. Conclusions: There were major ethnic differences in cervical cancer survival in New Zealand that were only partly explained by stage at diagnosis. These patterns varied over time, with postdiagnostic factors playing an important role in the high Māori mortality rates in the 1990s, but in more recent years, the excess mortality in Māori women appeared to be almost entirely due to stage at diagnosis, indicating that ethnic differences in access to and uptake of screening and treatment of premalignant lesions may have been playing a major role.
Answer: Comorbidity is associated with cervical cancer-specific survival in New Zealand, but it accounts for only a moderate proportion of the ethnic differences in survival (PUBMED:21486460). The study found that while comorbidity during the year before diagnosis was associated with cervical cancer-specific survival, adjustment for comorbidity using the Elixhauser instrument made no significant difference to the mortality hazard ratios (HRs) for Māori and Asian women compared to 'Other' women, and only a trivial difference for Pacific women. However, when adjusting for 12 individual comorbid conditions from the Elixhauser instrument, there was a reduction in the excess risk of mortality for Māori and Pacific women, indicating that comorbidity does contribute to the ethnic disparities in survival, but does not fully explain them. Other factors, such as socioeconomic status, extent of disease at diagnosis, and screening history, also play roles in the observed ethnic inequalities in cervical cancer survival. Socioeconomic inequalities in cancer survival were evident across major cancer sites, and accounting for the extent of disease at diagnosis explained some of the inequalities in survival from breast and colorectal cancer and melanoma, and all of the deprivation gaps in survival of cervical cancer (PUBMED:19223561). Ethnic disparities in treatment and survival between Māori and non-Māori women with cervical cancer are improving, but primary prevention and early detection remain key interventions for addressing Māori needs and reducing inequalities (PUBMED:23331365). In summary, while comorbidity is a factor in cervical cancer survival, it does not fully account for the ethnic inequalities observed in New Zealand. Other determinants, including screening history, stage at diagnosis, socioeconomic factors, and access to quality healthcare, contribute to these disparities and need to be addressed to improve survival outcomes for all ethnic groups (PUBMED:19847659; PUBMED:22504054; PUBMED:19808843; PUBMED:19580365).
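The "reduction in excess risk" percentages quoted above from PUBMED:21486460 follow from simple arithmetic on the hazard ratios. As a worked check, assuming the standard definition of excess relative risk as HR - 1 (which is consistent with the numbers the authors report), the proportion of excess risk explained by adjustment is

\[ \text{reduction} = \frac{HR_{\text{unadjusted}} - HR_{\text{adjusted}}}{HR_{\text{unadjusted}} - 1} \]

\[ \text{M\=aori: } \frac{1.56 - 1.44}{1.56 - 1} = \frac{0.12}{0.56} \approx 21\%, \qquad \text{Pacific: } \frac{1.95 - 1.62}{1.95 - 1} = \frac{0.33}{0.95} \approx 35\% \]

matching the 21% and 35% reductions stated in the abstract.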
Instruction: Occlusion rate, venous symptoms and patient satisfaction after radiofrequency-induced thermotherapy (RFITT®): are there differences between the great and the small saphenous veins? Abstracts: abstract_id: PUBMED:26098324 Occlusion rate, venous symptoms and patient satisfaction after radiofrequency-induced thermotherapy (RFITT®): are there differences between the great and the small saphenous veins? Background: Previous studies on the therapy of insufficient saphenous veins mainly compare different treatment methods. Only a few investigate differences of a specific treatment option between the great (GSV) and the small saphenous vein (SSV). The aim of this study was to evaluate the efficacy, clinical improvement and patient satisfaction after radiofrequency-induced thermotherapy (RFITT®) with regard to the treated vein. Patients And Methods: We included 65 patients (40 women, 25 men; mean age 54.75 years) who were treated with RFITT® for incompetent saphenous veins (n = 83: 62 GSV, 21 SSV). Occlusion rates were determined by duplex sonography. Additionally, we performed a prospective analysis of venous symptoms and signs by means of a standardized questionnaire and of patient satisfaction using a semi-quantitative rating (1 = very good, 6 = insufficient). Results: The GSV group showed a significantly greater reduction of venous symptoms in comparison to the SSV group (p = 0.005) despite no significant differences in long-term occlusion rates (mean time after operation: 22 months) of 90% in the GSV group and 81.8% in the SSV group (p = 0.598). Following the procedure, detailed analysis revealed significantly more swelling (p = 0.022), feeling of heavy legs (p = 0.002) and nightly calf cramps (p = 0.001) in the SSV group. Additionally, RFITT® led to a significant improvement in patient satisfaction in the GSV group (from 1.93 on days 1-3 to 1.41 after 6-12 months, p = 0.009) but not in the SSV group (from 2.29 to 2.07, p = 0.43). Conclusions: With regard to the improvement of venous symptoms and patient satisfaction, the benefit of RFITT® is greater for patients with incompetent GSV compared to those with incompetent SSV. abstract_id: PUBMED:28956506 One-year results of the use of endovenous radiofrequency ablation utilising an optimised radiofrequency-induced thermotherapy protocol for the treatment of truncal superficial venous reflux. Background In previous in vitro and ex vivo studies, we have shown increased thermal spread can be achieved with radiofrequency-induced thermotherapy when using a low power and slower, discontinuous pullback. We aimed to determine the clinical success rate of radiofrequency-induced thermotherapy using this optimised protocol for the treatment of superficial venous reflux in truncal veins. Methods Sixty-three patients were treated with radiofrequency-induced thermotherapy using the optimised protocol and were followed up after one year (mean 16.3 months). Thirty-five patients returned for audit, giving a response rate of 56%. Duplex ultrasonography was employed to check for truncal reflux and compared to initial scans. Results In the 35 patients studied, there were 48 legs, with 64 truncal veins treated by radiofrequency-induced thermotherapy (34 great saphenous, 15 small saphenous and 15 anterior accessory saphenous veins). One year post-treatment, complete closure of all previously refluxing truncal veins was demonstrated on ultrasound, giving a success rate of 100%.
Conclusions Using a previously reported optimised, low power/slow pullback radiofrequency-induced thermotherapy protocol, we have shown it is possible to achieve 100% ablation at one year. This compares favourably with results reported at one year post-procedure using the high power/fast pullback protocols that are currently recommended for this device. abstract_id: PUBMED:35815780 Relationship between great saphenous vein recanalization, venous symptoms reappearance, and varicose veins recurrence rates after endovenous radiofrequency ablation. The term "recurrence" in chronic venous disease remains poorly defined, despite numerous reports describing patterns and causes of the presence of recurrent varicose veins (RVVs). Moreover, saphenous trunk recanalization (STR) has also been documented as one of the major sources of RVVs, and it is widely used to indicate the "failure" of endovenous ablation. Finally, reappearance of venous symptoms (VSym) should be considered to reach a complete "recurrence" evaluation. RVVs, STR, and VSym rates and their mutual co-presence after endovenous treatments are still unclear. The aim of this report is to describe and analyze these three recurrence components after 6 years in patients who underwent radiofrequency ablation of the great saphenous vein. abstract_id: PUBMED:36581000 Midterm results of radiofrequency ablation with multiple heat cycles for incompetent saphenous veins. Objective: Recent reports suggest that the number of radiofrequency ablation (RFA) cycles impacts the depth of vein wall damage. This study evaluates the midterm occlusion rate after delivering increased energy during RFA of incompetent saphenous veins. Methods: Between 2016 and 2019, consecutive patients who underwent RFA with multiple heat cycles were enrolled in the study. The exclusion criterion was a previous treatment history for chronic venous disease. Duplex ultrasound data and medical records were reviewed retrospectively. Results: This study enrolled 217 patients (345 veins). Follow-up examinations were performed for 65% of treated veins after 6 months, 31% after 12 months, and 26% after more than 24 months, with a mean follow-up period of 23 ± 18.9 months. The numbers of great saphenous and small saphenous veins were 178 and 62, respectively. According to the Kaplan-Meier method, the occlusion rate of saphenous veins was 100% at 3 years and 95.4% at 5 years. Except for one case (0.3%) of endovenous heat-induced thrombosis class 2, no significant side effects were noted. Conclusions: Routine use of RFA with multiple heat cycles for incompetent saphenous veins exhibits good clinical outcomes in terms of midterm occlusion rate, without an increase in side effects. abstract_id: PUBMED:31661439 Comparison of ultrasound results following endovenous laser ablation and radiofrequency ablation in the treatment of varicose veins. Purpose: Superficial venous insufficiency is a common problem associated with varicose veins. In addition to classical symptoms, it may result in skin changes and venous ulcers, and it has a great impact on patients' health-related quality of life. In the last decade, minimally invasive techniques such as endovenous laser ablation (EVLA) and radiofrequency ablation (RFA) have been developed as alternatives to surgery in an attempt to reduce morbidity and improve efficiency. The aim of this study is to evaluate the efficacy of EVLA and RF therapies in superficial venous insufficiency.
Material And Methods: Fifty legs belonging to 50 patients with symptomatic primary venous insufficiency were treated. Twenty-five saphenous veins were treated with a 1470 nm diode laser, while 25 were treated with bipolar radiofrequency-induced thermotherapy (RF). All patients underwent postoperative duplex scanning within 6 months after the procedure and were followed clinically to determine the severity of the venous disease. Complications and occlusion rates were recorded. Results: Total occlusion rates in the RF and EVLA groups were 100% and 100%, respectively. There was no significant difference between groups (p = 0.140). Major complications such as skin burns and deep venous thrombosis were not detected in either group. Two patients treated with EVLA had erythema (8%) and one patient had a pain sensation (4%). One patient in the RF group had erythema (4%), one had pain (4%) and one had a burning sensation (4%). Conclusion: EVLA and RF therapies in saphenous vein insufficiency are effective, minimally invasive, safe, easy-to-use treatment modalities with good patient satisfaction and high occlusion rates. Key Words: EVLA, Radiofrequency, Venous insufficiency. abstract_id: PUBMED:23162247 Bipolar radiofrequency-induced thermotherapy of great saphenous vein: Our initial experience. The incidence of varicose veins in lower limbs is increasing in the Indian subcontinent. With the advent of radiofrequency ablation (RFA), an effective minimally invasive technique is now available to treat varicose veins. RFA can be performed with either unipolar or bipolar probes. We present a simple technique for bipolar radiofrequency-induced thermotherapy of the great saphenous vein. This can be a safe and effective alternative to surgical procedures. abstract_id: PUBMED:36592349 Safety and effectiveness of indirect radiofrequency ablation (closure FAST) of incompetent great saphenous veins with Type I aneurysms: Long-term results radiofrequency ablation for saphenous aneurysms. Background: Assess the safety and effectiveness of indirect radiofrequency ablation (RFA, Closure FAST) for the treatment of incompetent great saphenous veins (GSVs) with type 1 aneurysms. Methods: This was a retrospective analysis performed in three centers (2007-2021). All patients presenting with saphenous aneurysms close to the junction (within 2 cm) were included. They were treated with RFA. Phlebectomies and/or sclerotherapy were performed during the same treatment session. Duplex ultrasound (DUS) was performed early after the procedure and then more than a year later. Results: Eight patients (11 limbs) were included between June 2007 and May 2021, with a median GSV aneurysm diameter of 21 mm (IQR 17.2-23.4). No severe adverse events occurred apart from one endovenous heat-induced thrombosis (EHIT) class III (9.1%). After more than a year (mean 7.2 ± 4.2, median 8 years), none of the aneurysms was present on DUS and the truncal obliteration rate was 100%. Conclusion: RFA appears to be a safe and effective treatment for patients presenting with incompetent saphenous veins with type 1 aneurysms. abstract_id: PUBMED:37644641 Comparison of cyanoacrylate closure and radiofrequency ablation for the treatment of small saphenous veins. Background: The objective of this study was to compare the early and mid-term results of radiofrequency ablation and cyanoacrylate ablation used in the treatment of small saphenous insufficiency.
Methods: A total of 84 patients with isolated small saphenous vein insufficiency who underwent either cyanoacrylate ablation (CA) (Group 1, n = 40) or radiofrequency ablation (RFA) (Group 2, n = 44) were analyzed retrospectively. Results: The occlusion rate of the target vessel was 95% in Group 1 and 93.1% in Group 2 at 1-year follow-up, without any significant difference. Sural nerve injury was observed in 3 (6.8%) patients in Group 2 due to thermal damage from the RFA device. Conclusions: While both techniques can be used with satisfactory and safe results over a 1-year follow-up period, cyanoacrylate ablation may have a better safety profile than radiofrequency ablation due to lower complication rates in terms of paresthesia and sural nerve damage, with similar occlusion rates. abstract_id: PUBMED:25926429 Successful segmental thermal ablation of varicose saphenous veins in a patient with confirmed vascular Ehlers-Danlos syndrome. We describe here the successful scheduled treatment of varicose veins by radiofrequency segmental thermal ablation in a 43-year-old patient with vascular Ehlers-Danlos syndrome. Her venous disease started at the age of 16 years, 1 year prior to her first major Ehlers-Danlos syndrome-related event, which led to the diagnosis of her genetic condition. Surgical stripping was contraindicated because of Ehlers-Danlos syndrome at the age of 18 years. More than 20 years later, her venous disease had become highly symptomatic despite daily compression and pain medication. Venous reassessment evidenced incompetent right and left great saphenous and left small saphenous veins, with increased diameters of both sapheno-femoral and sapheno-popliteal junctions. Radiofrequency endovenous ablation rather than surgery was considered because of its minimally invasive nature and because of standardized energy delivery. All intended-to-be-treated incompetent saphenous vein segments were occluded successfully, followed by a marked improvement of clinical disease severity at day 30, persistent at 1 year post-treatment. Duplex ultrasound confirmed closure and fibrotic retraction of all treated venous segments at 1 year. This report shows that radiofrequency endovenous ablation may be a safe and effective therapy for varicose veins in patients with diagnosed vascular Ehlers-Danlos syndrome. abstract_id: PUBMED:37155634 Radiofrequency ablation of the great saphenous vein; does the choice of monopolar or bipolar catheters affect outcomes? Objectives: Radiofrequency-based procedures are among the leading methods of endovenous thermal ablation. The most fundamental difference among currently available radiofrequency ablation systems is the way the electric current is delivered to the vein wall: bipolar segmental versus monopolar ablation. This study aimed to compare the monopolar ablation method with the conventional bipolar segmental endovenous radiofrequency ablation method for the management of incompetent saphenous veins. Methods: Between November 2019 and November 2021, 121 patients with incompetent varicose veins who were treated either with the F-Care/monopolar (N = 49) or ClosureFast/bipolar (N = 72) system were included in the study. A single extremity of each patient with isolated great saphenous vein insufficiency was enrolled. The differences between the two groups in demographic parameters, disease severity, treated veins, peri- and postoperative complications, and treatment efficacy indicators were retrospectively evaluated.
Results: There was no statistically significant difference between the groups regarding demographic parameters, disease severity, or treated veins in the preoperative period (p > 0.05). The average procedural time was 21.4 ± 4 minutes in the monopolar group, while it was 17.1 ± 3 minutes in the bipolar group. In both groups, the venous clinical severity scores declined significantly compared with the preoperative period; however, there was no difference between groups (p > 0.05). The occlusion rate of the saphenofemoral junction and proximal saphenous vein after 1 year was 94.1% in the bipolar group and 91.8% in the monopolar group (p = 0.4), while there was a significant difference in the occlusion rate of the shaft and distal part of the saphenous vein (93.2% in the bipolar group and 80.4% in the monopolar group, p = 0.04). Postoperative complications (bruising and skin pigmentation) were slightly more frequent in the bipolar group (p = 0.02 and p = 0.01, respectively). Conclusions: Both systems are effective in treating venous insufficiency of the lower extremity. The monopolar system showed a better early postoperative course with a similar occlusion rate of the proximal part of the saphenous vein compared with the bipolar system; however, occlusion of the lower half of the saphenous vein was significantly lower, which may negatively affect long-term occlusion rates and recurrence of the disease. Answer: The occlusion rate, venous symptoms, and patient satisfaction after radiofrequency-induced thermotherapy (RFITT®) do show differences between the great saphenous vein (GSV) and the small saphenous vein (SSV). A study that included 65 patients treated with RFITT® for incompetent saphenous veins found that the GSV group exhibited a significantly greater reduction of venous symptoms compared to the SSV group, despite no significant differences in long-term occlusion rates, which were 90% for the GSV group and 81.8% for the SSV group. The SSV group experienced significantly more swelling, a feeling of heavy legs, and nightly calf cramps following the procedure. Additionally, there was a significant improvement in patient satisfaction in the GSV group but not in the SSV group (PUBMED:26098324). Another study with a one-year follow-up of 63 patients treated with an optimized RFITT protocol for superficial venous reflux in truncal veins reported a 100% success rate in complete closure of all previously refluxing truncal veins, which included both GSV and SSV (PUBMED:28956506). However, this study did not differentiate between the outcomes for GSV and SSV. In terms of safety and effectiveness, a study on the treatment of incompetent GSVs with type 1 aneurysms using indirect RFA (Closure FAST) reported a 100% truncal obliteration rate after more than a year, with no severe adverse events apart from one case of endovenous heat-induced thrombosis (EHIT) class III (PUBMED:36592349). Another study comparing cyanoacrylate closure and RFA for the treatment of SSVs found similar occlusion rates for both techniques at a 1-year follow-up, with RFA having a slightly higher incidence of sural nerve injury due to thermal damage (PUBMED:37644641). Overall, while RFITT® appears to be effective for both GSV and SSV, there may be a greater benefit in terms of symptom improvement and patient satisfaction for patients with incompetent GSV compared to those with incompetent SSV (PUBMED:26098324).
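The long-term occlusion figures cited above, such as the 100% at 3 years and 95.4% at 5 years reported with the Kaplan-Meier method in PUBMED:36581000, are survival-style estimates that account for veins censored before any recanalization is observed. A minimal sketch of how such an estimate is produced, using the lifelines Python library and entirely synthetic follow-up data (the patient-level study data are not available):

# Kaplan-Meier estimate of freedom from recanalization after ablation.
# The durations and events below are synthetic illustrations, not study data.
from lifelines import KaplanMeierFitter

durations = [6, 12, 12, 24, 36, 40, 60, 60, 62, 70]  # months of follow-up per vein
events = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]  # 1 = recanalization observed, 0 = censored (still occluded)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)

# Probability that a vein remains occluded at 3 and 5 years
print(kmf.survival_function_at_times([36, 60]))

Censoring is why the estimate can remain high even though, as in the study, only a minority of veins were followed beyond 24 months.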
Instruction: MR imaging of anterior cruciate ligament tears: is there a gender gap? Abstracts: abstract_id: PUBMED:25442023 MR imaging of cruciate ligaments. Cruciate ligament injuries, and in particular injuries of the anterior cruciate ligament (ACL), are the most commonly reconstructed ligamentous injuries of the knee. As such, accurate preoperative diagnosis is essential in optimal management of patients with cruciate ligament injuries. This article reviews the anatomy and biomechanics of the ACL and posterior cruciate ligament (PCL) and describes the magnetic resonance (MR) imaging appearances of complete and partial tears. Normal postoperative appearances of ACL and PCL reconstructions as well as MR imaging features of postoperative complications will also be reviewed. abstract_id: PUBMED:7489293 MR imaging of anterior cruciate ligament injuries. MR imaging has become the imaging modality of choice to evaluate the integrity of the anterior cruciate ligament (ACL). This article discusses the normal anatomy of the ACL and the clinical diagnosis of ACL disruption. The MR imaging appearance of chronic and acute ACL injuries, and the relative value of primary and secondary signs in injury diagnosis are reviewed. The clinical value of MR imaging in the evaluation of the ACL-deficient knee is also discussed. abstract_id: PUBMED:11000171 MR imaging of anterior cruciate ligament reconstruction graft. Objective: The objective was to determine the MR imaging findings that differentiate intact anterior cruciate ligament reconstruction graft, partial-thickness tear, and full-thickness tear, using arthroscopy as the gold standard. Materials And Methods: Sixteen consecutive MR imaging examinations were retrospectively and independently evaluated by two musculoskeletal radiologists for primary signs (graft signal, orientation, fiber continuity, complete discontinuity, and thickness) and secondary signs (anterior tibial translation, uncovered posterior horn lateral meniscus, posterior cruciate ligament hyperbuckling, and abnormal posterior cruciate ligament line) of anterior cruciate ligament reconstruction graft tear in 15 patients with follow-up arthroscopy. Results were compared with arthroscopy, and both receiver operating characteristic curves and kappa values for interobserver variability were calculated. Results: Arthroscopy revealed four full-thickness graft tears, seven partial-thickness tears, and five intact grafts. Of the primary signs, graft fiber continuity in the coronal plane and 100% graft thickness in the sagittal or coronal plane were most valuable in excluding full-thickness tear. Complete discontinuous graft in the coronal plane also was valuable in diagnosis of full-thickness tear. Of the secondary signs, anterior tibial translation and uncovered posterior horn lateral meniscus assisted in differentiating graft tear (partial or full thickness) from intact graft. The other primary and secondary signs were less valuable. Kappa values were highest for graft fiber continuity and graft discontinuity in the coronal plane. Conclusion: Full-thickness anterior cruciate ligament graft tear can be differentiated from partial-thickness tear or intact graft by evaluating for graft fiber continuity (coronal plane), complete graft discontinuity (coronal plane), and graft thickness (coronal or sagittal plane). abstract_id: PUBMED:35512889 Preoperative and Postoperative Magnetic Resonance Imaging of the Cruciate Ligaments. 
The anterior cruciate ligament and posterior cruciate ligament are key stabilizers of the knee. Magnetic resonance (MR) imaging excels at depiction of injury in both the native and reconstructed cruciate ligaments as well as associated injuries. This article reviews the anatomy, injury patterns, and relevant surgical techniques crucial to accurate interpretation of MR imaging of the cruciate ligaments. abstract_id: PUBMED:14504836 MR imaging of anterior cruciate ligament tears: is there a gender gap? Objective: Clinically, females sustain anterior cruciate ligament (ACL) tears more commonly than males. We explored whether gender differences exist in MR imaging patterns of ACL tears. Design And Patients: At 1.5 T, two observers evaluated MR examinations of 84 consecutive age-matched patients (42 males, 42 females, aged 16-39) with ACL tears, for mechanism of injury, extent and type of tear, the presence of secondary signs and associated osseous, meniscal and ligamentous injuries. Results: The most common mechanism of injury for both females and males was the pivot shift mechanism (67 and 60%, respectively). Females were more commonly imaged in the acute stage of tear than males (98 and 67%, respectively, p = 0.001) and more commonly possessed the typical posterolateral tibial bone contusion pattern (88 and 62%, respectively, p = 0.0131). Males exhibited a deeper femoral notch sign (2.7 and 2.0 mm, p = 0.007) and medial meniscal, lateral collateral ligament and posterior cruciate ligament injuries more commonly than females (48 and 24%, p = 0.009; 30 and 7%, p = 0.035; 17 and 0%, p = 0.035). There was no significant difference between genders for the presence of other secondary signs and contusion patterns, associated lateral meniscal tears, presence of O'Donoghue's triad or associated medial collateral ligament injuries. Conclusion: Gender differences in MR imaging patterns of ACL tears exist: females are more commonly imaged in the acute stage and more commonly possess posterolateral tibial bone contusions; males have a more severe presentation than females, associated with more severe lateral femoral condyle and soft tissue injuries. abstract_id: PUBMED:1987623 Anterior cruciate ligament reconstruction: evaluation with MR imaging. Fifty magnetic resonance (MR) imaging examinations were performed in 37 patients after arthroscopic anterior cruciate ligament (ACL) reconstruction with patellar bone-tendon-tibial bone autografts. T1-weighted sagittal and axial images were obtained. In 34 patients with clinically stable ACL autografts, 43 of 47 MR examinations demonstrated a well-defined, intact ACL autograft. All three patients with ACL laxity failed to demonstrate a well-defined autograft, for an overall correlation between MR imaging and clinical examination results of 92%. Of the 12 patients who underwent second-look arthroscopy, 100% correlation was present between MR imaging and arthroscopic results. As in the nonreconstructed knee, buckling of the posterior cruciate ligament was suggestive of ACL laxity. MR imaging also documented optimum placement of bone tunnels in the femur and tibia. MR imaging has proved to be an excellent noninvasive imaging modality for evaluating ACL reconstruction, while also providing ancillary information about the postoperative knee. abstract_id: PUBMED:30843077 Review of magnetic resonance imaging features of complications after anterior cruciate ligament reconstruction.
The anterior cruciate ligament (ACL) is an important stabiliser of the knee and is commonly torn in sports injuries. Common indications for imaging after ACL reconstruction include persistent symptoms, limitation of motion and re-injury. Important postoperative complications include graft failure, impingement, arthrofibrosis and graft degeneration. This article aimed to familiarise the radiologist with magnetic resonance (MR) imaging appearances of properly positioned intact ACL grafts and to provide a comprehensive review of MR imaging features of complications following ACL reconstruction. abstract_id: PUBMED:10580941 Anterior cruciate ligament tears: MR imaging-based diagnosis in a pediatric population. Purpose: To evaluate the diagnostic accuracy of primary and secondary magnetic resonance (MR) imaging findings of anterior cruciate ligament (ACL) tears in young patients with immature skeletal systems. Materials And Methods: MR images obtained in 43 patients aged 5-16 years who underwent arthroscopy were retrospectively reviewed. Two reviewers evaluated primary findings (abnormal signal intensity, abnormal course as defined by Blumensaat angle, and discontinuity), secondary findings (bone bruise in lateral compartment, anterior tibial displacement, uncovering of posterior horn of lateral meniscus, posterior cruciate ligament line, and posterior cruciate angle), and meniscal and other ligamentous injuries. Results: There were 19 ACL tears and 24 intact ACLs. Overall sensitivity and specificity of MR imaging in detecting ACL tears were 95% and 88%, respectively. Sensitivities of the primary findings were 94% for abnormal Blumensaat angle; 79%, abnormal signal intensity; and 21% discontinuity. The specificity of all primary findings was 88% or greater. The sensitivity and specificity of the secondary findings, respectively, were 68% and 88% for bone bruise; 63% and 92%, anterior tibial displacement; 42% and 96%, uncovered posterior horn of lateral meniscus; 68% and 92%, positive posterior cruciate line; and 74% and 71%, abnormal posterior cruciate angle. Fifteen (79%) patients had meniscal tears, and five (26%) had collateral ligament injuries. Conclusion: Primary and secondary findings of ACL tears in young patients have high specificity and are useful for diagnosis. abstract_id: PUBMED:1887042 Acute and chronic tears of the anterior cruciate ligament: differential features at MR imaging. To evaluate the differential features of acute and chronic tears of the anterior cruciate ligament at magnetic resonance (MR) imaging, the authors performed a retrospective evaluation of findings in 81 MR examinations correlated with results at arthroscopy. Intact anterior cruciate ligaments (ACLs) were present in 29 patients; acute complete ACL tears, in 22; and chronic complete ACL tears, in 30. Acute tears were accurately distinguished from intact ligaments and were characterized by the presence of edema. Chronic tears had a more variable appearance: Nine (30%) were depicted at MR as intact bands with low signal intensity that bridged the expected origin and insertion of the ACL. This appearance is likely due to the presence of bridging fibrous scars within the intercondylar notch. Five of these nine cases were correctly characterized as chronically torn because of the presence of focal angulation. In four of these nine cases the scarred fragments produced a relatively straight band that mimicked an intact ligament. 
Although chronic and acute ACL tears usually have distinct findings at MR, a chronic tear will occasionally be difficult to distinguish from an intact ligament. abstract_id: PUBMED:17499182 Advanced MR imaging of the cruciate ligaments. The anterior and posterior cruciate ligaments are crucial stabilizers of the knee. These ligaments are named for the location of their tibial attachments. Each ligament is composed of separate functional bundles that differ in size but are equally important in function. MR imaging is accurate and sensitive, making it the imaging technique of choice for evaluating these ligaments. Acute and chronic injuries involving the cruciate ligaments have typical appearances and associated findings. MR imaging interpretation must take into account atypical injuries and imaging pitfalls. Knowledge of normal ligament reconstruction techniques allows differentiation of the normal postoperative appearance from reconstruction failure and complications. Ligament reconstruction techniques, complications, and appearances are reviewed in this article. Answer: Yes, there appears to be a gender gap in MR imaging patterns of anterior cruciate ligament (ACL) tears. A study explored whether gender differences exist in MR imaging patterns of ACL tears and found that females are more commonly imaged in the acute stage of tear than males (98% and 67%, respectively) and more commonly possess the typical posterolateral tibial bone contusion pattern (88% and 62%, respectively). In contrast, males exhibited a deeper femoral notch sign and more commonly had medial meniscal, lateral collateral ligament, and posterior cruciate ligament injuries than females. There was no significant difference between genders for the presence of other secondary signs and contusion patterns, associated lateral meniscal tears, presence of O'Donoghue's triad, or associated medial collateral ligament injuries. These findings suggest that there are indeed gender differences in MR imaging patterns of ACL tears (PUBMED:14504836).
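The group comparisons in the gender-gap study (PUBMED:14504836) are tests of proportions between 42 females and 42 males. A minimal sketch of how such a p-value can be reproduced follows; the counts are reconstructed from the reported percentages (88% of 42 females is approximately 37, 62% of 42 males is approximately 26), and the choice of Fisher's exact test is an assumption, since the abstract does not name the test used.

# Comparing the posterolateral tibial bone contusion rate between genders.
# Counts are reconstructed from reported percentages; the authors' exact
# statistical test is not stated, so Fisher's exact test is an assumption.
from scipy.stats import fisher_exact

table = [[37, 42 - 37],  # females: contusion present / absent
         [26, 42 - 26]]  # males: contusion present / absent

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

This yields a p-value of the same order as the reported 0.0131; a small discrepancy would be expected if the authors used a chi-square test instead.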
Instruction: Does light pressure effleurage reduce pain and anxiety associated with genetic amniocentesis? Abstracts: abstract_id: PUBMED:11132586 Does light pressure effleurage reduce pain and anxiety associated with genetic amniocentesis? A randomized clinical trial. Objective: To determine if light pressure effleurage (leg rubbing) during genetic amniocentesis reduces procedure-related pain and anxiety. Methods: Two hundred women with singleton gestations undergoing genetic amniocentesis between 15 and 22 weeks recorded their level of anticipated pain and anxiety on a 10-cm linear visual analog scale prior to the amniocentesis. Subjects were then randomized to receive effleurage or no effleurage by the assisting nurse during the procedure. Subjects were blinded to the effleurage nature of the study. Following the amniocentesis, subjects repeated the pain and anxiety scoring. Results: The two groups were similar with respect to subject and procedure characteristics, as well as anticipated pain or anxiety prior to amniocentesis. Postamniocentesis pain and anxiety scores were similar in the two groups. The mean effleurage acceptance score was 8.3 ± 1.8 (out of 10), and 90.2% of subjects reported that they would want effleurage with future amniocenteses. Conclusions: Although well accepted by women, light pressure effleurage during genetic amniocentesis does not reduce procedure-related pain or anxiety. abstract_id: PUBMED:23963662 Comparison of the effectiveness of different counseling methods before second trimester genetic amniocentesis in Thailand. Objective: The objective of this study is to compare the effectiveness of counseling methods before second trimester genetic amniocentesis. Study Design: The design of this study is a randomized controlled study comparing the improvement in patients' knowledge, satisfaction, anxiety, and perceived pain between computer-assisted instruction (CAI) and leaflet self-reading (LSR), each with subsequent individual counseling, among pregnant women scheduled for second trimester genetic amniocentesis in a developing country. Results: There were 164 and 157 participants in the LSR and CAI groups, respectively. In both groups, knowledge improved significantly after LSR/CAI (p < 0.001) and increased further after individual counseling (p < 0.001). After combined counseling, knowledge was significantly higher in the LSR than in the CAI group (p = 0.032). Knowledge was associated with a higher level of education and previous exposure to genetic counseling. Pain decreased more in the CAI than in the LSR group after completion of counseling (p = 0.021). Reduction in anxiety and increase in satisfaction did not differ between the groups. Counseling method did not affect the final decision of patients to accept amniocentesis. Conclusion: Both counseling methods improved patients' knowledge and satisfaction and reduced pain and anxiety. In combination with individual counseling, LSR was more effective than CAI in improving patients' knowledge before second trimester genetic amniocentesis. abstract_id: PUBMED:11851963 Maternal pain and anxiety in genetic amniocentesis: expectation versus reality. Objective: To investigate maternal perceptions of both pain and anxiety before and after genetic amniocentesis. Study Design: This prospective study of midtrimester, singleton pregnancies was conducted between March 2000 and July 2000.
Study variables included patient demographics, medical and obstetric histories, indication for amniocentesis and a description of the source of information used by the patient regarding the procedure and technical degree of difficulty. Maternal pain and anxiety associated with performing amniocentesis were subjectively quantified with the use of the visual analog scale (VAS). Statistical analysis included the Wilcoxon signed rank test, ANOVA, and simple and stepwise regression analyses. Results: One hundred and eighty-three women participated in the study. Perception of pain before amniocentesis was significantly higher compared to that expressed immediately after the procedure, with a mean VAS score of 3.7 ± 2.5 vs. 2.1 ± 2.0 (P < 0.0001). Similarly, perception of anxiety was significantly greater prior to the procedure, with a mean VAS score of 4.6 ± 2.8 vs. 2.8 ± 2.4 after the amniocentesis (P < 0.0001). Perceptions of pain and anxiety were significantly and positively correlated to each other both before and after the procedure (P < 0.0001). History of a prior amniocentesis was the only variable associated with reducing expected pain and anxiety (negative correlation, P < 0.001), whereas the technical degree of difficulty was the only significant variable impacting on the actual pain and anxiety (positive correlation, P < 0.005). Conclusions: Preamniocentesis counseling should emphasize the fact that, for most women, the actual pain and anxiety experienced during the procedure are significantly lower than expected. In fact, on a scale of 0-10, the mean level of pain was only 2.1, with a slightly higher mean level of anxiety. abstract_id: PUBMED:20043555 Clinical correlates of pain with second-trimester genetic amniocentesis. Objective: To determine the correlation of clinical factors and maternal perceptions of pain with genetic amniocentesis. Material And Method: This prospective study of midtrimester, singleton pregnancies was conducted between February 2007 and March 2008. Study variables included patient demographics, previous amniocentesis, previous abdominal surgery, maternal anxiety score, abdominal wall thickness, needle insertion through the placenta and the depth of needle insertion. Maternal pain with performing amniocentesis was subjectively quantified with the Thai short-form McGill Pain Questionnaire. The independent T-test, one-way ANOVA and linear regression were used for analysis; a probability value of < 0.05 was considered significant. Results: One hundred and twenty-five pregnant women participated in the present study: 18.4% reported no pain, 69.6% described the pain as mild, 11.2% described the pain as discomforting and 0.8% described the pain as horrible. Mean intensity of pain was 2.1 ± 1.9 (on a scale of 0-10). Pain was most often described as fearful, shooting, throbbing and sharp. Parity, gestational age, maternal BMI, anxiety score, previous surgery, needle insertion through the placenta, abdominal wall thickness and the depth of needle insertion were not correlated with perceived pain. Conclusion: Most of the women reported no pain, mild pain, or discomfort with genetic amniocentesis. Clinical factors were not associated with maternal perceptions of pain. abstract_id: PUBMED:15343234 Clinical correlates of pain with amniocentesis. Objective: The purpose of this study was to determine whether sensory or affective dimensions of pain with genetic amniocentesis are associated with identifiable clinical correlates.
Study Design: Women completed the short-form McGill Pain Questionnaire after second-trimester genetic amniocentesis. The effect of maternal weight, parity, previous amniocentesis, previous surgery, history of menstrual cramps, maternal anxiety, presence of fibroid tumors, and depth and location of needle insertion on pain intensity was determined. The t-test, correlation matrix, Kruskal-Wallis test, and multiple logistic regression were used for analysis; a probability value of < .05 was considered significant. Results: One hundred twenty-one women were enrolled: 19.3% reported no pain, 42.9% described the pain as mild, 31.1% described the pain as discomforting, and 6.7% described the pain as distressing or horrible. Mean intensity of pain was 1.6 ± 1.3 (on a scale of 0-7). Pain was most often described as sharp, cramping, fearful, and stabbing. Anxiety and pain were increased in women with an indication of abnormal serum screen as compared with women with advanced maternal age. Anxiety and a history of menstrual cramps were associated with increased affective dimensions of pain and had moderate correlation with quantified pain intensity. A history of previous amniocentesis and needle insertion in the lower one third of the uterus were associated with increased pain. Maternal weight, parity, previous surgery, fibroid tumors, and depth of needle insertion were not correlated with perceived pain. Presence or absence of an accompanying person was not associated with pain intensity. Conclusion: Women report mild pain or discomfort with genetic amniocentesis. Increased pain is associated with increased maternal anxiety, a history of menstrual cramps, a previous amniocentesis, and insertion of the needle in the lower uterus. abstract_id: PUBMED:29952499 Music Listening to Decrease Pain during Second Trimester Genetic Amniocentesis: A Randomized Trial. Objective: To evaluate whether music listening decreased pain perception during second trimester genetic amniocentesis. Material And Method: We conducted a prospective randomized study to compare the pain perception using a visual analogue scale (VAS), pain rating, future decision to repeat the procedure, and pain perception compared to a venipuncture before and after the second trimester genetic amniocentesis between groups of pregnant women who underwent amniocentesis with and without music listening. Results: Three hundred thirty-two pregnant women were enrolled; 161 listened and 171 did not listen to the music. The pre-procedure anxiety, anticipated pain, and post-procedure pain/anxiety median VAS scores, pain rating, future decision, and level of pain compared to a venipuncture in the music-listening and non-music-listening groups did not show a statistically significant difference. The pre-procedure anxiety median VAS scores were 1.3 and 0.5 in the music-listening and non-music-listening groups, respectively, and the anticipated pain median VAS scores were 4.8 and 4.5 in the music-listening and non-music-listening groups, respectively. The post-procedure median VAS pain/anxiety scores were 2.7 and 2.5 in the music-listening and non-music-listening groups, respectively. Conclusion: Music listening was not significantly effective in reducing pain during second trimester genetic amniocentesis. abstract_id: PUBMED:22526448 Efficacy of cryoanalgesia in decreasing pain during second trimester genetic amniocentesis: a randomized trial.
Objective: To evaluate the effectiveness of cryoanalgesia in decreasing the degree of pain sensation during second trimester genetic amniocentesis. Materials And Methods: We performed a prospective randomized study comparing the anticipated and actual pain before and after second trimester genetic amniocentesis between pregnant women who received and did not receive cryoanalgesia. The pain was measured using the visual analog score (VAS), ranging from 0 to 10. Results: Three hundred and seventy-two pregnant women participated in our study. One hundred and eighty-four and 188 pregnant women were randomized to the cryoanalgesia and non-cryoanalgesia groups, respectively. The pre-procedure anxiety mean VAS scores and the anticipated pain mean VAS scores between the groups were not significantly different (P = 0.25 and 0.18, respectively). The pre-procedure anxiety and the anticipated pain mean ± SD VAS scores in the cryoanalgesia and non-cryoanalgesia groups were 5.7 ± 0.37 vs. 8.0 ± 0.82 and 5.4 ± 1.34 vs. 5.6 ± 1.42, respectively. The post-procedure pain and anxiety mean VAS scores in the cryoanalgesia group were significantly lower than those from the non-cryoanalgesia group (mean ± SD = 3.2 ± 1.60 and 3.8 ± 1.58, respectively, P = 0.004). Most pregnant women claimed to have experienced moderate pain and accepted to undergo a second trimester genetic amniocentesis again if indicated. Conclusion: Cryoanalgesia is effective in decreasing the pain sensation and could be routinely applied to all pregnant women before the second trimester genetic amniocentesis. abstract_id: PUBMED:19860366 Perceived pain and anxiety before and after amniocentesis among pregnant Turkish women. Purpose Of Investigation: To examine maternal perception of pain and anxiety before and soon after midtrimester genetic amniocentesis. Methods: Two hundred and ninety-two consecutive women were prospectively included in the study between March and December 2002. Study variables included age, gestational age, gravidity, parity, educational history, history of previous invasive prenatal procedures, indication for amniocentesis, and source of information regarding amniocentesis. Maternal pain and anxiety associated with performing amniocentesis were subjectively quantified with the use of the visual analog scale (VAS). Results: Actual pain after amniocentesis was significantly lower compared with perceived pain before the procedure (3 [0-10] vs. 5 [0-10], p < 0.001). Perceived anxiety before amniocentesis was significantly higher than perceived anxiety immediately after amniocentesis (7 [0-10] vs. 5 [0-10], p < 0.001). Women who were informed about the procedure beforehand perceived the procedure to be less painful and expressed less anxiety before and after amniocentesis. Conclusions: Pre-amniocentesis counseling should emphasize that the actual pain and anxiety experienced during the procedure are low in intensity and significantly lower than expected. abstract_id: PUBMED:9816637 Psychological aspects of pain experience in amniocentesis. Unlabelled: This study describes the psychological aspects and the experience of pain in 2 groups of women undergoing amniocentesis in relation to the different indication for the procedure and duration of gestation in the respective groups.
Methods And Subjects: Data were collected after amniocentesis from 100 pregnant women, 50 of whom underwent age-related amniocentesis (14 weeks of gestation) and 50 for measurement of amniotic fluid insulin levels (30 weeks of gestation). A semi-structured psychological interview was used to collect information concerning experience of pain, cognition before amniocentesis, knowledge about the procedure, and the most distressing aspects of amniocentesis. Visual analog scales (VAS, 0 to 10) were used to assess the degree of pain and emotional mood state. Results: The results showed that emotional mood state before amniocentesis was fair (mean = 4.6) and increased after amniocentesis (mean = 8.1). The degree of pain during amniocentesis was assessed at a mean value of 2.9 measured on the 10-point visual analog scale. Fifty per cent of the sample considered the physical aspects, and the other 50% the psychological (anxiety and uncertainty) aspects, as the most distressing factor of amniocentesis. Although women were given prior detailed information about amniocentesis, they overestimated the degree of pain, related risk factors, and duration of the medical procedure. There were no differences in the investigated variables between the two different indications for amniocentesis or the weeks of gestation. Conclusion: The findings of the study indicate the psychological distress associated with amniocentesis, which is limited to a short period of time, and is independent of the indication for amniocentesis. abstract_id: PUBMED:18279876 Anticipated and perceived pain from midtrimester amniocentesis. Objective: To evaluate the anticipated and perceived pain and determine the factors contributing to pain perception in women undergoing midtrimester amniocentesis. Methods: A visual analog scale was used to quantify pain in this prospective study of 64 women undergoing amniocentesis in the same clinical setting. The analysis was done using the paired-samples t test, analysis of variance, the Kruskal-Wallis test, the Mann-Whitney U test, and Pearson correlation analysis. Results: The postprocedural pain scores were significantly less than the preprocedural pain scores (P < 0.01). Parity, a previous amniocentesis, the indication for amniocentesis, and the previous loss of a child were factors contributing to pain scores. No correlations were found between pain score and maternal age, week of pregnancy at the time of the procedure, parity, or having lost a child. Conclusion: The perceived pain was less than the anticipated pain, and the factors contributing to pain should be kept in mind when counseling patients undergoing midtrimester amniocentesis. Answer: No, light pressure effleurage does not reduce pain and anxiety associated with genetic amniocentesis. A randomized clinical trial conducted to determine the effect of light pressure effleurage (leg rubbing) during genetic amniocentesis found that although the technique was well accepted by women, it did not reduce procedure-related pain or anxiety (PUBMED:11132586).
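Note: several of the abstracts above compare anticipated with actual VAS scores using paired tests (e.g., the Wilcoxon signed rank test in PUBMED:11851963). The Python sketch below only illustrates how such a paired comparison is typically run; it is not code from any cited study, and the scores are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical paired VAS scores (0-10) for eight women: anticipated
# (pre-procedure) versus actual (post-procedure) pain.
anticipated = np.array([5.0, 4.5, 6.0, 3.5, 7.0, 5.5, 4.0, 6.5])
actual = np.array([2.0, 3.0, 2.5, 1.5, 4.0, 2.0, 3.5, 3.0])

# Paired, non-parametric comparison of the two score sets.
statistic, p_value = stats.wilcoxon(anticipated, actual)
print(f"median anticipated = {np.median(anticipated):.1f}, "
      f"median actual = {np.median(actual):.1f}, p = {p_value:.3f}")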
Instruction: A Radiographic Assessment of Failed Anterior Cruciate Ligament Reconstruction: Can Magnetic Resonance Imaging Predict Graft Integrity? Abstracts: abstract_id: PUBMED:24821755 A Radiographic Assessment of Failed Anterior Cruciate Ligament Reconstruction: Can Magnetic Resonance Imaging Predict Graft Integrity? Background: Magnetic resonance imaging (MRI) showing an "intact" anterior cruciate ligament (ACL) graft may not correlate well with examination findings. Reasons for an ACL graft dysfunction may be from malpositioned tunnels, deficiency of secondary stabilizers, repeat injuries, or a combination of factors. Purpose: To evaluate the concordance/discordance of an ACL graft assessment between an arthroscopic evaluation, physical examination, and MRI and secondarily to evaluate the contributing variables to discordance. Study Design: Case series; Level of evidence, 4. Methods: A total of 50 ACL revisions in 48 patients were retrospectively reviewed. The ACL graft status was recorded separately based on Lachman and pivot-shift test data, arthroscopic findings from operative reports, and MRI evaluation and was categorized into 3 groups: intact, partial tear, or complete tear. Two independent evaluators reviewed all of the preoperative radiographs and MRI scans, and interrater and intrarater reliability were evaluated. Concordance and discordance between a physical examination, arthroscopic evaluation, and MRI evaluation of the ACL graft were calculated. Graft position and type, mechanical axis, collateral ligament injuries, chondral and meniscal injuries, and mechanism of injury were evaluated as possible contributing factors using univariate and multivariate analyses. Sensitivity and specificity of MRI to detect a torn ACL graft and meniscal and chondral injuries on arthroscopic evaluation were calculated. Results: The interobserver and intraobserver reliability for the MRI evaluation of the ACL graft were moderate, with combined κ values of .41 and .49, respectively. The femoral tunnel position was vertical in 88% and anterior in 46%. On MRI, the ACL graft was read as intact in 24%; however, no graft was intact on arthroscopic evaluation or physical examination. The greatest discordance was between the physical examination and MRI, with a rate of 52%. An insidious-onset mechanism of injury was significantly associated with discordance between MRI and arthroscopic evaluation of the ACL (P = .0003) and specifically with an intact ACL graft on MRI (P = .0014). The sensitivity and specificity of MRI to detect an ACL graft tear were 60% and 87%, respectively. Conclusion: Caution should be used when evaluating a failed ACL graft with MRI, especially in the absence of an acute mechanism of injury, as it may be unreliable and inconsistent. abstract_id: PUBMED:11764353 Magnetic resonance imaging of reconstructed anterior cruciate ligament. After anterior cruciate ligament reconstruction with autologous patellar tendon, 23 patients who had clinically stable knees were studied prospectively with sequential magnetic resonance imaging 1, 2, 3, 6, and 12 months after surgery. The images of the anterior cruciate ligament were obtained with a 1.5 tesla magnetic resonance scanner in the oblique sagittal, coronal, and oblique axial planes. The cross-sectional area and signal intensity on the reconstructed anterior cruciate ligament were measured in an oblique axial image. 
The oblique axial image proved useful in evaluating the integrity of the reconstructed anterior cruciate ligament. The results showed that the diameter of the graft increased by 70% of its initial size, and the signal intensity of the reconstructed graft also showed a tendency to increase. In three patients, there was discontinuity in the graft direction on the oblique sagittal image, but on the oblique axial image there was no evidence of reconstructed anterior cruciate ligament rupture in the sequential images. This shows the value of the oblique axial image in evaluating the integrity of the reconstructed anterior cruciate ligament. Also, sufficient notchplasty in anterior cruciate ligament reconstruction may be needed to prevent graft impingement. abstract_id: PUBMED:37742146 Prediction of Hamstring Autograft Sizes for Anterior Cruciate Ligament Reconstruction using Preoperative Magnetic Resonance Imaging. Background: The purpose of this study is to determine whether preoperative magnetic resonance image measurements can predict the hamstring tendon autograft diameter during anterior cruciate ligament reconstruction. Methods: We prospectively evaluated forty-two patients with anterior cruciate ligament injury who underwent reconstruction using hamstring tendon autograft. Preoperative diameters and cross-sectional areas of the hamstring tendons were estimated using magnetic resonance imaging of the knee. Intraoperative diameters of the hamstring tendon graft were measured using a cylindrical graft sizer. We used Pearson's correlation test to compare the preoperative and intraoperative graft size measurements. A possible cutoff value for the hamstring graft size was determined using receiver operating characteristic analysis. Results: The mean age of the patients in the study was 27.5 ± 8.5 years. There were statistically significant correlations between preoperative and intraoperative hamstring tendon graft measurements (P < 0.001). Our study found a cross-sectional area of 13.3 mm² as the cutoff for predicting a 7 mm quadrupled hamstring graft size, with sensitivity and specificity both of 85.7%. Conclusions: We can conclude that preoperative magnetic resonance imaging measurements can predict the intraoperative graft size. This study can help in preoperative planning for the graft choice. abstract_id: PUBMED:19036719 Preoperative magnetic resonance assessment of patellar tendon dimensions for graft selection in anterior cruciate ligament reconstruction. Background: A bone patellar tendon bone autograft is one of the standard graft choices for anterior cruciate ligament reconstruction. However, its use can be limited when the patellar tendon is too narrow or too long. Hypothesis: A preoperative assessment of patellar tendon dimensions using magnetic resonance imaging would be accurate and reliable. Patients undergoing anterior cruciate ligament reconstruction would have wide ranges of patellar tendon dimensions, and a significant proportion of patients would have a too narrow and/or too long patellar tendon as the graft choice. There would be a demographic predictor to identify the patients with inappropriate patellar tendon dimensions. Study Design: Cohort study (diagnosis); Level of evidence, 3. Methods: The accuracy and reliability of magnetic resonance assessments of patellar tendon dimensions were assessed by comparing the intraoperative measurements using a ruler in 55 knees and 10 knees, respectively.
Data from the magnetic resonance assessments in 147 knees undergoing anterior cruciate ligament reconstruction were used for the normative documentation of the patellar tendon dimensions (width, thickness, and length) and identification of demographic predictors for the dimensions. Results: Preoperative magnetic resonance assessments of the patellar tendon dimensions were accurate and reliable. Korean patients undergoing anterior cruciate ligament reconstruction had wide variations in patellar dimensions, and a significant portion of the patients had an inappropriate patellar tendon (longer than 5 cm in 4.1% and narrower than 27 mm at middle portion in 15.6%) for the graft source. Patient height was the predictor used for patellar tendon width. The mathematical equation used to estimate the width based on patient height was: tendon width at middle portion (mm) = 0.202 x patient height (cm) - 5.07. Conclusion: Preoperative magnetic resonance assessment of patellar tendon dimensions can be a valuable tool with satisfactory accuracy and reliability when the autologous patellar tendon is considered as the graft source for anterior cruciate ligament reconstruction. abstract_id: PUBMED:8368412 Evaluation of arthroscopic anterior cruciate ligament reconstruction using magnetic resonance imaging. Thirty-two patients who had arthroscopic anterior cruciate ligament reconstruction using a bone-patellar tendon-bone autograft underwent subsequent magnetic resonance imaging of the knee. A total of 32 magnetic resonance imaging examinations were performed from 10 days to 39 months postoperatively. The anatomic plane of the autograft was determined by obtaining a coronal pilot scan of the graft fixation screws or screw and staple. T1-weighted, T2-weighted, proton density, and gradient-echo imaging sequences were then obtained in the anatomic plane, as well as T1-weighted coronal images. The autograft was defined on the basis of visualization of fiber continuity on T2-weighted images as follows: 1) intact; 2) having a partial tear; or 3) having a complete tear. These results were then correlated with clinical examination and, in 10 cases, subsequent arthroscopy. Magnetic resonance imaging correlated with clinical findings in 31 of 32 patients. In addition, of the 10 patients who underwent subsequent arthroscopy, magnetic resonance scanning correlated in all cases with arthroscopic findings. T2-weighted and, in some cases, proton density images were most useful in visualizing the autograft. T2-weighted magnetic resonance imaging in the anatomic plane of the anterior cruciate ligament autograft can be a useful diagnostic tool in the evaluation of patients with patellar tendon anterior cruciate ligament reconstructions when graft integrity is in question. abstract_id: PUBMED:20410802 Anterior cruciate ligament graft reconstruction: clinical, technical, and imaging overview. The anterior cruciate ligament (ACL) is one of the most frequently torn ligaments of the knee. With more than 100,000 ACL reconstructions performed yearly in the United States, evaluation of ACL grafts with magnetic resonance imaging is a common occurrence in daily clinical practice. Anterior cruciate ligament reconstructions vary from single bundle, double bundle, selective bundle, and physeal-sparing techniques. Complications of ACL graft reconstructions include graft tears, graft laxity, arthrofibrosis, and hardware failure or migration. This article offers a comprehensive review of ACL reconstruction for the consulting radiologist. 
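Note: the regression reported above in PUBMED:19036719 (tendon width at middle portion (mm) = 0.202 x patient height (cm) - 5.07) can be applied directly as a screening estimate. The Python sketch below is only a worked example of that published equation; the patient height used is hypothetical.

def estimated_tendon_width_mm(height_cm: float) -> float:
    # Published regression from PUBMED:19036719.
    return 0.202 * height_cm - 5.07

height_cm = 170.0
width_mm = estimated_tendon_width_mm(height_cm)
# 0.202 * 170 - 5.07 = 29.27 mm, above the 27 mm middle-portion width the
# study used to flag a patellar tendon as too narrow for graft harvest.
print(f"height {height_cm:.0f} cm -> estimated mid-tendon width {width_mm:.2f} mm")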
abstract_id: PUBMED:21704471 Using magnetic resonance imaging to predict adequate graft diameters for autologous hamstring double-bundle anterior cruciate ligament reconstruction. Purpose: To determine whether the preoperative magnetic resonance imaging (MRI) cross-sectional area (CSA) of the hamstring tendons can predict intraoperative bundle diameters during double-bundle anterior cruciate ligament reconstruction. Methods: A prospective study of 34 patients undergoing anterior cruciate ligament reconstruction with hamstring autografts was performed. CSAs of independent and combined hamstring tendon diameters were correlated to preoperative magnetic resonance images. Results: Intraoperative tendon diameter measurement positively correlated with preoperative MRI tendon CSA measurement for gracilis (P = .0006), semitendinosus (P = .001), and final graft size (P = .001). Double-stranded gracilis grafts greater than or equal to 5 mm in diameter had a mean preoperative MRI gracilis CSA of 9.98 mm² compared with a mean of 7.76 mm² for grafts less than 5 mm (P = .002). Double-stranded semitendinosus grafts greater than or equal to 6 mm had a mean preoperative MRI tendon CSA of 17.33 mm² compared with 14.80 mm² for grafts less than 6 mm (P = .02). Final grafts of diameter greater than or equal to 7 mm had a mean preoperative MRI total tendon CSA of 26.54 mm² compared with 22.22 mm² for grafts under 7 mm (P = .06). Conclusions: Preoperative MRI is a clinically useful tool to assess hamstring tendon graft diameter. We recommend preoperative CSA threshold values of 10 mm² and 17 mm² for the gracilis and semitendinosus tendons, respectively, to reliably predict the potential for a double-bundle anterior cruciate ligament reconstruction. Level Of Evidence: Level IV, therapeutic case series. abstract_id: PUBMED:17379920 Assessment of anterolateral rotatory instability in the anterior cruciate ligament-deficient knee using an open magnetic resonance imaging system. Background: In the clinical evaluation of the anterior cruciate ligament-deficient knee, anterolateral rotatory instability is assessed by manual tests such as the pivot-shift test, which is subjective and not quantitative. Hypothesis: The anterolateral rotatory instability in an anterior cruciate ligament-deficient knee can be quantified by our newly developed method using open magnetic resonance imaging. Study Design: Controlled laboratory study. Methods: Eighteen subjects with anterior cruciate ligament-deficient knees and 18 with normal knees were recruited. We administered the Slocum anterolateral rotatory instability test in the open magnetic resonance imaging scanner and scanned the sagittal view of the knee. The anterior displacements of the tibia at the medial and lateral compartments were measured. Furthermore, we examined 14 anterior cruciate ligament-deficient knees twice to assess intraobserver and interobserver reproducibility and evaluated the difference and interclass correlation coefficient of 2 measures. Results: In the anterior cruciate ligament-deficient knee, displacement was 14.4 ± 5.5 mm at the lateral compartment and 1.6 ± 2.3 mm at the medial compartment; in the normal knee, displacement was 0.7 ± 1.9 mm and -1.1 ± 1.2 mm, respectively. The difference and interclass correlation coefficient between 2 repeated measures at the lateral compartment were 1.0 ± 0.7 mm and .98 for intraobserver reproducibility and 1.1 ± 0.7 mm and .91 for interobserver reproducibility.
Conclusion: This method is useful to assess the anterolateral rotatory instability of the anterior cruciate ligament-deficient knee. Clinical Relevance: This method can be used in the clinical assessment of anterior cruciate ligament stability, such as in comparing studies of graft positions or 2-bundle anatomic reconstruction and the conventional 1-bundle technique. abstract_id: PUBMED:32197954 Can we predict the graft diameter for autologous hamstring in anterior cruciate ligament reconstruction? In anterior cruciate ligament reconstruction, it is fundamental to achieve a graft with strength, tension, and low comorbidity. An emerging concept is that a graft diameter of less than 7 mm carries a greater risk of re-rupture and instability. Consequently, different methods are being sought to predict intra-surgical size. The objective is to predict the size of the hamstring graft by measuring the area of the semitendinosus and gracilis tendon with magnetic resonance imaging (MRI). Methodology: We carried out an observational retrospective study of 56 patients. They underwent anterior cruciate ligament reconstruction with a 4-GST hamstring graft. The parameters evaluated were anthropometric data, hamstring graft diameter, and the area of the gracilis and semitendinosus tendon on MRI. The measurements were made by three independent evaluators. Results: The mean diameter of the intrasurgical graft was 8.46 mm; on MRI, the area of the gracilis was 8.875 mm² and the semitendinosus area was 13.068 mm². Their mean was 22.12 for the automatic measurement and 21.53 for the manual measurement. The interobserver correlation was fair for the automatic measurement (ICC = 0.595) and low for the manual measurement (ICC = 0.446). The result of the intraobserver correlation was excellent (ICC = 0.917). We did not obtain a statistically significant correlation between the measurement of areas and the increase of the graft diameter (R = 0.069, P = .63). Conclusion: We determined with our results that the intrasurgical graft size is not predictable with the measurement of the area of the gracilis and semitendinosus tendon on MRI. abstract_id: PUBMED:11000171 MR imaging of anterior cruciate ligament reconstruction graft. Objective: The objective was to determine the MR imaging findings that differentiate intact anterior cruciate ligament reconstruction graft, partial-thickness tear, and full-thickness tear, using arthroscopy as the gold standard. Materials And Methods: Sixteen consecutive MR imaging examinations were retrospectively and independently evaluated by two musculoskeletal radiologists for primary signs (graft signal, orientation, fiber continuity, complete discontinuity, and thickness) and secondary signs (anterior tibial translation, uncovered posterior horn lateral meniscus, posterior cruciate ligament hyperbuckling, and abnormal posterior cruciate ligament line) of anterior cruciate ligament reconstruction graft tear in 15 patients with follow-up arthroscopy. Results were compared with arthroscopy, and both receiver operating characteristic curves and kappa values for interobserver variability were calculated. Results: Arthroscopy revealed four full-thickness graft tears, seven partial-thickness tears, and five intact grafts. Of the primary signs, graft fiber continuity in the coronal plane and 100% graft thickness in the sagittal or coronal plane were most valuable in excluding full-thickness tear. A completely discontinuous graft in the coronal plane also was valuable in the diagnosis of full-thickness tear.
Of the secondary signs, anterior tibial translation and uncovered posterior horn lateral meniscus assisted in differentiating graft tear (partial or full thickness) from intact graft. The other primary and secondary signs were less valuable. Kappa values were highest for graft fiber continuity and graft discontinuity in the coronal plane. Conclusion: Full-thickness anterior cruciate ligament graft tear can be differentiated from partial-thickness tear or intact graft by evaluating for graft fiber continuity (coronal plane), complete graft discontinuity (coronal plane), and graft thickness (coronal or sagittal plane). Answer: The study titled "A Radiographic Assessment of Failed Anterior Cruciate Ligament Reconstruction: Can Magnetic Resonance Imaging Predict Graft Integrity?" (PUBMED:24821755) aimed to evaluate the concordance/discordance between arthroscopic evaluation, physical examination, and MRI in assessing ACL graft status. The study involved 50 ACL revisions in 48 patients, and the ACL graft status was categorized into intact, partial tear, or complete tear based on different assessments. The results showed moderate interobserver and intraobserver reliability for MRI evaluation of the ACL graft. Notably, the MRI read the ACL graft as intact in 24% of cases, but no graft was found intact upon arthroscopic evaluation or physical examination. The greatest discordance was between physical examination and MRI, with a rate of 52%. The study found that an insidious-onset mechanism of injury was significantly associated with discordance between MRI and arthroscopic evaluation of the ACL, particularly with an intact ACL graft on MRI. The sensitivity and specificity of MRI to detect an ACL graft tear were 60% and 87%, respectively. The conclusion was that caution should be used when evaluating a failed ACL graft with MRI, especially in the absence of an acute mechanism of injury, as it may be unreliable and inconsistent.
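Note: the 60% sensitivity and 87% specificity quoted for MRI detection of a torn ACL graft (PUBMED:24821755) follow from a 2x2 comparison against the arthroscopic reference standard. The Python sketch below shows the arithmetic; the counts are hypothetical and chosen only to reproduce similar values, not taken from the study.

# Hypothetical 2x2 counts, with arthroscopy as the reference standard.
tp, fn = 9, 6   # torn grafts MRI called torn / torn grafts MRI missed
tn, fp = 13, 2  # not-torn grafts MRI called not torn / grafts MRI over-called

sensitivity = tp / (tp + fn)  # 9 / 15 = 0.60
specificity = tn / (tn + fp)  # 13 / 15 ≈ 0.87
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")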
Instruction: Can surface imaging improve the patient setup for proton postmastectomy chest wall irradiation? Abstracts: abstract_id: PUBMED:27025165 Can surface imaging improve the patient setup for proton postmastectomy chest wall irradiation? Purposes/objectives: For postmastectomy radiation therapy by proton beams, the usual bony-landmark-based radiograph setup technique is indirect because the target volumes are generally superficial and far away from major bony structures. The surface imaging setup technique of matching the chest wall surface directly to the treatment planning computed tomography was evaluated and compared to the traditional radiograph-based technique. Methods And Materials: Fifteen postmastectomy radiation therapy patients were included, with the first 5 patients positioned by the standard radiograph-based technique; radiopaque markers, however, were added on the patient's skin surface to improve the relevance of the setup. AlignRT was used to capture patient surface images at different time points along the process, with the calculated position corrections recorded but not applied. For the remaining 10 patients, the orthogonal x-ray imaging was replaced by the AlignRT setup procedure followed by a beamline radiograph at the treatment gantry angle only as confirmation. The position corrections recorded during all fractions for all patients (28-31 each) were analyzed to evaluate the setup accuracy. The time spent on patient setup and treatment delivery was also analyzed. Results: The average position discrepancy over the treatment course relative to the planning computed tomography was significantly larger in the radiograph only group, particularly in translations (3.2 ± 2.0 mm in vertical, 3.1 ± 3.0 mm in longitudinal, 2.6 ± 2.5 mm in lateral), than in the AlignRT assisted group (1.3 ± 1.3 mm in vertical, 0.8 ± 1.2 mm in longitudinal, 1.5 ± 1.4 mm in lateral). The latter was well within the robustness limits (±3 mm) of the pencil beam scanning treatment established in our previous studies. The setup time decreased from an average of 11 minutes using orthogonal x-rays to an average of 6 minutes using AlignRT surface imaging. Conclusions: The use of surface imaging allows postmastectomy chest wall patients to be positioned more accurately and substantially more efficiently than radiograph only-based techniques. abstract_id: PUBMED:29032865 Electron postmastectomy chest wall plus comprehensive nodal irradiation technique using Electron Monte Carlo dose algorithm. For left-sided postmastectomy patients requiring chest wall plus comprehensive nodal irradiation, traditional techniques such as partial wide tangents or an electron/tangents combination are sometimes not feasible due to abnormal chest wall contour, heart position, or an unusually wide excision scar. We developed an electron chest wall irradiation technique using the Electron Monte Carlo (EMC) dose algorithm that achieves heart sparing with acceptable ipsilateral lung dose and minimal contralateral lung and breast dose. Ten left-sided postmastectomy patients with very challenging anatomy were selected for this dosimetry study. The en face electron fields were designed from a single isocenter and gantry angle with different energy beams using different cutouts that matched on the skin. A lower energy was used in the central thin chest wall part and a higher energy in the medial internal mammary nodes (IMN) area, the superior part of the thick chest wall, and/or the axillary nodal area.
The electron fields were matched to the photon supraclavicular field in the superior region. Daily field junctions were used to feather the match lines between all the fields. Electron field dose calculations were done with Monte Carlo. Five patients' chest wall fields were planned with a 6/9 MeV combination, 1 with 6/12 MeV, 2 with 9/12 MeV, 1 with 9/16 MeV, and 1 with 6/9/12 MeV. Institutional criteria for the prescription dose of 50 Gy to target volumes and for normal tissue dose were met with this technique in spite of the challenging anatomy. Mean heart dose averaged 3.0 Gy ± 0.8 Gy. For the ipsilateral lung, V20Gy and V5Gy averaged 33.2% ± 4.5% and 64.6% ± 9.6%, respectively. For the contralateral lung, V5Gy averaged 5.1% ± 5.0%. For the contralateral breast, V5Gy averaged 3.3% ± 3.1%. The electron chest wall irradiation technique using the EMC dose algorithm can provide adequate dose coverage to the chest wall, IMNs, and/or axillary nodes while achieving heart sparing with acceptable ipsilateral lung dose and minimal contralateral lung and breast dose. abstract_id: PUBMED:15519791 Postmastectomy electron-beam chest-wall irradiation in women with breast cancer. Purpose: This retrospective study evaluates the results of postmastectomy electron-beam chest-wall irradiation in patients with breast cancer. Methods And Materials: From 1980 to 1994, 144 women with localized breast cancer received postmastectomy radiotherapy. The chest wall was irradiated by electron beam, 6 to 12 MeV energy depending on wall thickness, 2.0 Gy daily, 5 times/week, for a total dose of 50 Gy. Forty-one patients received 16-Gy boosts to the mastectomy scar. In addition, the supraclavicular and axilla areas were irradiated by an anterior field with a 6-MV photon beam. Results: Median follow-up was 84 months. Fifteen patients (10%) had local-regional recurrence (LRR) and 57 patients (40%) had systemic relapse (SR). Median time from mastectomy to LRR was 20 months and median time to SR was 33 months. Axillary lymph node status influenced both LRR and SR. The LRR rate was 0% in N0 and 12% in N1 disease; the SR rate was 14% in N0 and 45% in N1 disease. Disease-free and overall survival were 58% and 67% at 10 years and 50% and 55% at 20 years, respectively. No cardiac toxicity was related to left chest-wall irradiation. Conclusion: Postmastectomy electron-beam chest-wall irradiation is as effective as photon-beam irradiation in breast cancer. abstract_id: PUBMED:35261503 Surface Dose Measurements in Chest Wall Postmastectomy Radiotherapy to Achieve Optimal Dose Delivery with 6 MV Photon Beam. Aim: A tissue-equivalent bolus of sufficient thickness is used to overcome the build-up effect in the chest wall region of postmastectomy radiotherapy (PMRT) patients treated with a tangential technique, until a Radiation Therapy Oncology Group (RTOG) Grade 2 (dry desquamation) skin reaction is observed. The aim of this study is to optimize the surface dose delivered to the chest wall in three-dimensional radiotherapy using EBT3 film. Materials And Methods: Measurements were conducted with calibrated EBT3 films in a thorax phantom under open beam, Superflab gel (0.5 cm), and brass bolus conditions to check correlation against TPS planned doses. Eighty-two patients who received 50 Gy in 25# were randomly assigned to Group A (Superflab 0.5 cm gel bolus for the first 15 fractions followed by no bolus in the remaining 10 fractions), Group B, or Group C (Superflab 0.5 cm gel or single layer brass bolus, respectively, until reaching RTOG Grade 2 skin toxicity).
Results: Phantom-measured and TPS-calculated surface doses were within −5.5%, 4.7%, and 8.6% under open beam, 0.5 cm gel, and single-layer brass bolus applications, respectively. The overall surface doses (OSD) were 80.1% ± 2.9% (n = 28), 92.6% ± 4.6% (n = 28), and 87.4% ± 4.7% (n = 26) in Groups A, B, and C, respectively. At the end of treatment, 7 out of 28, 13 out of 28, and 9 out of 26 patients developed Grade 2 skin toxicity, with OSD values of 83.0% ± 1.6% (n = 7), 93.7% ± 3.2% (n = 13), and 89.9% ± 5.6% (n = 9) in Groups A, B, and C, respectively. At the 20th-23rd fraction, 2 out of 7, 6 out of 13, and 4 out of 9 patients in Groups A, B, and C developed a Grade 2 skin toxicity, while the remaining patients in each group developed it at the end of treatment. Conclusions: Our objective of estimating the optimal dose limit for bolus applications in PMRT could be achieved using clinical EBT3 film dosimetry. This study ensured a correct dose to the scar area while preserving cosmetic outcomes. This may also serve as quality assurance on optimal dose delivery for expected local control in these patients. abstract_id: PUBMED:31603108 Clinical implementation of brass mesh bolus for chest wall postmastectomy radiotherapy and film dosimetry for surface dose estimates. Objective: This study presents the dosimetric data taken with radiochromic EBT3 film with brass mesh bolus using solid water and semi-breast phantoms, and its clinical implementation to analyze the surface dose estimates to the chest wall in postmastectomy radiotherapy (PMRT) patients. Materials And Methods: The water-equivalent thickness of the brass bolus was estimated with a solid water phantom under a 6 megavoltage photon beam. The following measurements with film were taken with no bolus and with 1, 2, and 3 layers of brass bolus: (a) surface doses on a solid water phantom with normal incidence and on the curved surface of a locally fabricated cylindrical semi-breast phantom for tangential field irradiation, (b) depth doses (in solid phantom), and (c) surface dose measurements around the scar area in six patients undergoing PMRT with a prescribed dose of 50 Gy in 25 fractions. Results: A water-equivalent thickness (per layer) of brass bolus of 2.09 ± 0.13 mm was calculated. Surface dose measured by film under the bolus with the solid water phantom increased from 25.2% ± 0.9% without bolus to 62.5% ± 3.1%, 80.1% ± 1.5%, and 104.4% ± 1.7% with 1, 2, and 3 layers of bolus, respectively. Corresponding observations with the semi-breast phantom were 32.6% ± 5.3% without bolus to 96.7% ± 9.1%, 107.3% ± 9.0%, and 110.2% ± 8.7%, respectively. Film measurements show that the dose at depths of 3, 5, and 10 cm is nearly the same with or without brass bolus, and the percentage difference is <1.5% at these depths. Mean surface doses from the 6 patients treated with brass bolus ranged from 79.5% to 84.9%. The bolus application was discontinued between the 18th and 23rd fractions on the development of Grade 2 skin toxicity for different patients. The total skin dose to the chest wall for a patient was 3699 cGy from the overall treatment with and without bolus. Conclusions: Brass mesh bolus does not significantly change dose at depth, and the surface dose is increased. This may be used as a substitute for tissue-equivalent bolus to improve surface conformity in PMRT. abstract_id: PUBMED:35093111 Dosimetric evaluation of photons versus protons in postmastectomy planning for ultrahypofractionated breast radiotherapy. Background: Ultrahypofractionation can shorten the irradiation period.
This study is the first dosimetric investigation comparing ultrahypofractionation using volumetric arc radiation therapy (VMAT) and intensity-modulated proton radiation therapy (IMPT) techniques in postmastectomy treatment planning. Materials And Methods: Twenty postmastectomy patients (10 left-sided and 10 right-sided) were replanned with both VMAT and IMPT techniques. There were four scenarios: left chest wall, left chest wall including regional nodes, right chest wall, and right chest wall including regional nodes. The prescribed dose was 26 Gy(RBE) in 5 fractions. For VMAT, a 1-cm bolus was added for 2 in 5 fractions. For IMPT, robust optimization was performed on the CTV structure with a 3-mm setup uncertainty and a 3.5% range uncertainty. This study aimed to compare the dosimetric parameters of the PTV, ipsilateral lung, contralateral lung, heart, skin, esophageal, and thyroid doses. Results: The PTV-D95 was kept above 24.7 Gy(RBE) in both VMAT and IMPT plans. The ipsilateral lung mean dose of the IMPT plans was comparable to that of the VMAT plans. In three of four scenarios, the V5 of the ipsilateral lung in IMPT plans was lower than in VMAT plans. The Dmean and V5 of heart dose were reduced by a factor of 4 in the IMPT plans of the left side. For the right side, the Dmean of the heart was less than 1 Gy(RBE) for IMPT, while the VMAT delivered approximately 3 Gy(RBE). The IMPT plans showed a significantly higher skin dose owing to the lack of a skin-sparing effect in the proton beam. The IMPT plans provided lower esophageal and thyroid mean dose. Conclusion: Despite the higher skin dose with the proton plan, IMPT significantly reduced the dose to adjacent organs at risk, which might translate into the reduction of late toxicities when compared with the photon plan. abstract_id: PUBMED:29907510 Reducing X-ray imaging for proton postmastectomy chest wall patients. Purpose: Proton postmastectomy radiation therapy (PMRT) patients are positioned daily using surface imaging with additional x-ray imaging for confirmation. This study aims to investigate whether weekly x-ray imaging with daily surface imaging, as performed for photon treatment, is sufficient to maintain PMRT patient positioning fidelity. Methods And Materials: Calculated radiographic corrections and surface imaging residual setup errors were analyzed at the treatment angle for 28 PMRT patients (with and without breast implant, left and right sided). The temporal repartition of radiographic translations >3 mm occurring after surface imaging positioning was studied as well as their impact on the final patient position, defined as the comparison between the treatment angle surface image and the planning computed tomography scan. To compare both sets of images, the traditional bony anatomy landmarks on the digitally reconstructed radiographs were replaced by 3 radiopaque markers placed over the patient's skin tattoos. The temporal variation of the distances between these skin markers was analyzed, as were the surface imaging statistics. Results: Discrepancies between surface imaging and x-ray imaging were more frequent for patients without breast implants and among reconstructed patients with large implants. One-quarter of studied patients exhibited calculated radiographic translations >3 mm during the last week of treatment. In most circumstances, applying radiographic corrections did not affect patient position, which remained within 3 mm/2° robustness tolerances.
One patient's implant shifted following computed tomography planning; this shift would not have been detected without x-ray imaging. Conclusion: Initial and weekly x-ray acquisition combined with daily surface imaging seems adequate both for routine PMRT positioning and to monitor potential changes in the treatment area. The limits of the surface imaging system, however, need to be better specified among patients without breast implants or in those with large implant reconstructions. abstract_id: PUBMED:32020969 Dosimetric and isocentric variations due to patient setup errors in CT-based treatment planning for breast cancer by electronic portal imaging. Background: Inaccuracies in treatment setup during radiation therapy for breast cancers may increase the risk of toxicity to surrounding normal tissues, i.e., organs at risk (OARs), and compromise disease control. This study was planned to evaluate the dosimetric and isocentric variations and determine setup reproducibility and errors using an online electronic portal imaging (EPI) protocol. Methods: A total of 360 EPIs in 60 patients receiving breast/chest wall irradiation were evaluated. Cumulative dose-volume histograms (DVHs) were analyzed for mean doses to lung (V20) and heart (V30), setup source to surface distance (SSD) and central lung distance (CLD), and shifts in the anterior-posterior (AP), superior-inferior (SI), and medial-lateral (ML) directions. Results: Random errors ranged from 2 to 3 mm for the breast/chest wall (medial and lateral) tangential treatments and 2-2.5 mm for the anterior supraclavicular nodal field. Systematic errors ranged from 3 to 5 mm in the AP direction for the tangential fields and from 2.5 to 5 mm in the SI and ML directions for the anterior supraclavicular nodal field. For right-sided patients, V20 was 0.69-3.96 Gy, maximum lung dose was 40.5 Gy, V30 was 1.4-3 Gy, and maximum heart dose was 50.5 Gy. Similarly, for left-sided patients, the CLD (treatment planning system) was 25 mm-30 mm, CLD (EPIs) was 30-40 mm, V20 was 0.9-5.9 Gy, maximum lung dose was 45 Gy, V30 was 2.4-4.1 Gy, and maximum heart dose was 55 Gy. Conclusion: Online assessment of patient position by matching EPIs with digitally reconstructed radiographs (DRRs) is a useful method for evaluating interfraction reproducibility in breast irradiation. abstract_id: PUBMED:38018758 A robust treatment planning approach for chest motion in postmastectomy chest wall intensity modulated radiation therapy. Purpose: Chest wall postmastectomy radiation therapy (PMRT) should consider the effects of chest wall respiratory motion. The purpose of this study is to evaluate the effectiveness of robustness planning intensity modulated radiation therapy (IMRT) for respiratory movement, considering respiratory motion as a setup error. Material And Methods: This study analyzed 20 patients who underwent PMRT (10 left and 10 right chest walls). The following three treatment plans were created for each case and compared. The treatment plans are a planning target volume (PTV) plan (PP) that covers the PTV within the body contour with the prescribed dose, a virtual bolus plan (VP) that sets a virtual bolus in contact with the body surface and prescribes a dose that includes the PTV outside the body contour, and a robust plan (RP) that considers respiratory movement as a setup uncertainty and performs robust optimization. The isocenter was shifted to reproduce the chest wall motion pattern and the doses were recalculated for comparison for each treatment plan.
Result: No significant difference was found between the PP and the RP in terms of the tumor dose in the treatment plan. In contrast, the VP had a 3.5% higher PTV Dmax and a 5.5% lower PTV V95% than the RP (p < 0.001). The RP demonstrated significantly higher lung V20Gy and Dmean, by 1.4% and 0.4 Gy, respectively, than the PP. The RP showed smaller changes in dose distribution affected by chest wall motion and significantly higher tumor dose coverage than the PP and VP. Conclusion: We revealed that the RP demonstrated tumor doses comparable to the PP in treatment planning and was robust to respiratory motion compared with both the PP and the VP. However, the organ at risk dose in the RP was slightly higher; therefore, its clinical use should be carefully considered. abstract_id: PUBMED:29615067 Does setup on rectal wall improve rectal cancer boost radiotherapy? Background: Rectal cancer patients that show a pathological complete response (pCR) after neo-adjuvant chemo-radiotherapy have better prognosis. To increase pCR rates, several studies escalate the tumor irradiation dose. However, due to lacking tumor contrast on online imaging techniques, no direct tumor setup can be performed and large boost margins are needed to ensure tumor coverage. The purpose of this study was to evaluate the feasibility of performing a setup on the rectal wall for rectal cancer boost radiotherapy, thereby using the rectal wall near the tumor as a tumor position surrogate. Methods: For sixteen patients, daily MRIs were performed during 1 week of radiotherapy. On each of these images, tumor and rectum were delineated. Residual displacements were determined per surface voxel after setup on bony anatomy or the nearby rectal wall, and setup errors for both setups were compared. Furthermore, for every rectal wall voxel near the tumor, displacement was compared with the closest tumor point and the correlation was determined. Results: The mean (SD) setup error was 2.7 mm (3.3 mm) and 2.2 mm (3.2 mm) after setup on bony anatomy and rectal wall, respectively. Nevertheless, similar PTV-margin estimates (i.e., 95th percentile distances) were found: 8.0 mm. Also, a merely moderate correlation (ρ = 0.66) was found between rectal wall and tumor displacement. Further investigation into tumor and rectal mobility differences showed that the rectal wall lacks appropriate anatomical landmarks to find true displacements, especially to capture motion along the rectal wall. Conclusions: Setup on the rectal wall slightly reduces mean setup errors but requires a similar PTV-margin compared to setup on bony anatomy. Rectal mobility might be similar to tumor mobility, but due to the absence of anatomical landmarks in the rectum, displacements along the rectal wall are not detected on current online imaging. Therefore, to further reduce tumor position uncertainties, direct or indirect online tumor visualization is needed. Answer: Yes, surface imaging can improve the patient setup for proton postmastectomy chest wall irradiation. A study that included 15 postmastectomy radiation therapy patients found that the average position discrepancy over the treatment course relative to the planning computed tomography was significantly larger in the group set up with radiographs only, particularly in translations, than in the group assisted by AlignRT surface imaging. The AlignRT assisted group had discrepancies well within the robustness limits (±3 mm) of the pencil beam scanning treatment established in previous studies.
Additionally, the setup time decreased from an average of 11 minutes using orthogonal x-rays to an average of 6 minutes using AlignRT surface imaging, indicating that surface imaging not only improves accuracy but also efficiency in patient positioning (PUBMED:27025165).
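Note: the surface imaging study above (PUBMED:27025165) judges setup accuracy by whether recorded translational corrections stay within the ±3 mm robustness limits of the pencil beam scanning plan. A minimal Python sketch of that per-axis check follows; the per-fraction corrections are hypothetical, not study data.

import numpy as np

# Hypothetical per-fraction translational corrections (mm) on each axis.
corrections_mm = {
    "vertical": np.array([1.0, -0.5, 2.1, 0.8, -1.2, 1.5]),
    "longitudinal": np.array([0.3, 1.1, -0.7, 0.9, 0.2, -0.4]),
    "lateral": np.array([1.8, -1.0, 0.6, 2.2, -0.3, 1.1]),
}
tolerance_mm = 3.0  # robustness limit cited in the study

for axis, values in corrections_mm.items():
    mean_abs = np.abs(values).mean()      # mean absolute shift on this axis
    sd = values.std(ddof=1)               # sample standard deviation
    within = bool(np.all(np.abs(values) <= tolerance_mm))
    print(f"{axis:13s} mean |shift| = {mean_abs:.1f} mm, SD = {sd:.1f} mm, "
          f"within ±3 mm: {within}")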
Instruction: Is the "red flag" referral pathway effective in diagnosing colorectal carcinoma? Abstracts: abstract_id: PUBMED:23620610 Is the "red flag" referral pathway effective in diagnosing colorectal carcinoma? Introduction: In 2000-2004 there were, on average, 938 new cases of colorectal cancer (CRC) diagnosed per annum in Northern Ireland, accounting for 13.9% of all cancers. The two week "red flag" referral system aims to detect 90% of patients with CRC for prompt treatment. The aim of this study is to examine the impact of the "red flag" referral system on identification of patients with CRC, time to treatment, and stage of disease. Methods: A random sample of 200 patients referred via the "red flag" system was identified from the local cancer patient tracker database. Data pertaining to demographics, time to hospital appointment, appropriateness of referral, and diagnosis were collected. For patients identified with CRC, the stage of disease and time to first definitive treatment were also documented. Results: Of the 200 patients, 56% were female. The age range was 27-93 years. Eighty-three percent were seen within 14 days of referral. Referrals adhered to the guidelines in 45% of cases. There were 4 pancreatic cancers, 1 endometrial cancer, 1 ovarian cancer, and 1 myelodysplasia diagnosed. Three patients were diagnosed with CRC (1.5%). Of these, 1 was treated palliatively and the remaining 2 commenced definitive management within 6 days of the decision to treat. Conclusion: The "red flag" referral system does not appear to be effective in identifying patients with CRC but did identify patients with other types of cancer. Less than half of the referrals adhered to the guidelines. A review of this system should be undertaken. abstract_id: PUBMED:21976660 Assessment of a rapid referral pathway for suspected colorectal cancer in Madrid. Objective: To assess the results achieved with a rapid referral pathway for suspected colorectal cancer (CRC), comparing with the standard referral pathway. Methods: Three-year audit of patients suspected of having CRC routed via a rapid referral pathway, and patients with CRC routed via the standard referral pathway, of a health care district serving a population of 498,000 in Madrid (Spain). Outcomes included referral criteria met, waiting times, cancer diagnosed, and stage of disease. Results: Two hundred and seventy-two patients (mean age 68.8 years, SD 14.0; 51% male) were routed via the rapid referral pathway for colonoscopy. Seventy-nine per cent of referrals fulfilled the criteria for high risk of CRC. Fifty-two cancers were diagnosed: 26% Stage A (Astler-Coller), 36% Stage B, 24% Stage C, and 14% Stage D. Average waiting time to colonoscopy for the rapid referral patients was 18.5 days (SD 19.1) and average waiting time to surgery was 28.6 days (SD 23.9). Colonoscopy was performed within 15 days in 65% of CRC rapid referral patients compared to 43% of standard pathway patients (P = 0.004). Overall waiting time for patients with CRC in the rapid referral pathway was 52.7 days (SD 32.9), while for those in the standard pathway it was 71.5 days (SD 57.4) (P = 0.002). Twenty-six per cent Stage A CRC was diagnosed in the rapid referral pathway compared to 12% in the standard pathway (P < 0.001). Conclusion: The rapid referral pathway reduced waiting time to colonoscopy and overall waiting time to final treatment and appears to be an effective strategy for diagnosing CRC in its early stages.
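Note: comparisons such as "colonoscopy within 15 days in 65% of rapid referral patients vs. 43% of standard pathway patients (P = 0.004)" in the Madrid audit above (PUBMED:21976660) are two-proportion tests on a 2x2 table. The Python sketch below illustrates the test only; the group sizes are hypothetical, since the abstract reports percentages rather than full counts.

from scipy.stats import chi2_contingency

# Hypothetical counts: [scoped within 15 days, scoped later] per pathway.
rapid = [34, 18]     # 34/52 ≈ 65% within 15 days
standard = [43, 57]  # 43/100 = 43% within 15 days
chi2, p_value, dof, _expected = chi2_contingency([rapid, standard])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")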
abstract_id: PUBMED:32540874 Delays in referral from primary care worsen survival for patients with colorectal cancer: a retrospective cohort study. Background: Delays in referral for patients with colorectal cancer may occur if the presenting symptom is falsely attributed to a benign condition. Aim: To investigate whether delays in referral from primary care are associated with a later stage of cancer at diagnosis and worse prognosis. Design And Setting: A national retrospective cohort study in England including adult patients with colorectal cancer identified from the cancer registry with linkage to the Clinical Practice Research Datalink, who had been referred following presentation to their GP with a 'red flag' or 'non-specific' symptom. Method: The hazard ratios (HR) of death were calculated for delays in referral of between 2 weeks and 3 months, and >3 months, compared with referrals within 2 weeks. Results: A total of 4527 (63.5%) patients with colon cancer and 2603 (36.5%) patients with rectal cancer were included in the study. The percentage of patients presenting with red-flag symptoms who experienced a delay of >3 months before referral was 16.9% of those with colon cancer and 13.5% of those with rectal cancer, compared with 35.7% of patients with colon cancer and 42.9% of patients with rectal cancer who presented with non-specific symptoms. Patients referred after 3 months with red-flag symptoms demonstrated a significantly worse prognosis than patients who were referred within 2 weeks (colon cancer: HR 1.53; 95% confidence interval [CI] = 1.29 to 1.81; rectal cancer: HR 1.30; 95% CI = 1.06 to 1.60). This association was not seen for patients presenting with non-specific symptoms. Delays in referral were associated with a significantly higher proportion of late-stage cancers. Conclusion: The first presentation to the GP provides a referral opportunity to identify the underlying cancer, which, if missed, is associated with a later stage at diagnosis and worse survival. abstract_id: PUBMED:30564682 Cancer suspicion in general practice, urgent referral, and time to diagnosis: a population-based GP survey nested within a feasibility study using information technology to flag-up patients with symptoms of colorectal cancer. Background: Patients with symptoms of possible colorectal cancer are not always referred for investigation. Aim: To ascertain barriers and facilitators to GP referral of patients meeting the National Institute for Health and Care Excellence (NICE) guidelines for urgent referral for suspected colorectal cancer. Design & Setting: Qualitative study in the context of a feasibility study using information technology in GP practices to flag-up patients meeting urgent referral criteria for colorectal cancer. Method: Semi-structured interviews with 18 GPs and 12 practice managers, focusing on early detection of colorectal cancer, issues in the use of information technology to identify patients, and GP referral of these patients for further investigation, were audiotaped, transcribed verbatim, and analysed according to emergent themes. Results: There were two main themes: wide variation in willingness to refer and uncertainty about whether to refer; and barriers to referral. Three key messages emerged: there was a desire to avoid over-referral, lack of knowledge of guidelines, and the use of individually-derived decision rules for further investigation or referral of symptoms.
Some GPs were unaware that iron deficiency anaemia or persistent diarrhoea are urgent referral criteria. Alternatives to urgent referral included undertaking no investigations, trials of iron therapy, use of faecal occult blood tests (FOBt) and non-urgent referral. In minority ethnic groups (South Asians) anaemia was often accepted as normal. Concerns about over-referral were linked to financial pressures and perceived criticism by healthcare commissioners, and a reluctance to scare patients by discussing suspected cancer. Conclusion: GPs' lack of awareness of referral guidelines and concerns about over-referral are barriers to early diagnosis of colorectal cancer. abstract_id: PUBMED:28600231 The "two-week wait" referral pathway is not associated with improved survival for patients with colorectal cancer. Aim: To improve survival rates in patients diagnosed with cancer in the UK, a two-week wait (2ww) referral to first appointment target and a 62-day referral to treatment target were introduced in 2004. This study analyses survival rates for patients diagnosed with colorectal cancer (CRC) by mode of referral and referral to treatment time. Method: A prospectively maintained database of CRC outcomes at the University Hospitals of Leicester NHS Trust was analysed. Data for patients diagnosed with CRC were analysed for survival. Comparisons were made by mode of referral (2ww, urgent, routine, emergency, national bowel cancer screening programme (NBCSP) and other screening pathways). In addition, this study assessed referral to initial treatment times for patients undergoing cancer resection (<62 days group vs. >62 days group). Inter-group comparisons were made using the Mann-Whitney U-test. Kaplan-Meier survival probability estimates were calculated for overall survival and the log-rank test was used to compare the survival distributions in different groups. Results: Overall survival (median time) was significantly lower for patients referred by the '2ww' pathway (3.5 years, 95% CI: 2.7-4.30), in comparison to the 'routine' (5.4 years, 95% CI: 4.5-6.6) pathway (p < 0.001). Patients referred on the '2ww' pathway were 1.34 times more likely to have stage IV disease at presentation in comparison to patients referred by the 'routine' pathway. Comparison of referral to initial treatment times showed there was no significant difference in survival between the <62 days group and the >62 days group (7.1 vs. 6.54, p = 0.620). Conclusion: Patients diagnosed with CRC by the 2ww pathway had shorter survival times than those referred by a routine pathway. abstract_id: PUBMED:30042499 Therapeutic potential of adenovirus-mediated TFF2-CTP-Flag peptide for treatment of colorectal cancer. TFF2 is a small, secreted protein with anti-inflammatory properties. We have previously shown that TFF2 gene delivery via adenovirus (Ad-Tff2) suppresses colon tumor growth in colitis-associated cancer. Therefore, systemic administration of TFF2 peptide could potentially provide a similar benefit. Because TFF2 shows a poor pharmacokinetic profile, we sought to modify the TFF2 peptide in a manner that would lower its clearance rate but retain bioactivity. Given the absence of a sequence-based prediction of TFF2 functionality, we chose to genetically fuse the C-terminus of TFF2 with the carboxyl-terminal peptide of the human chorionic gonadotropin β subunit, and inserted this into an adenoviral vector that expresses Flag. The resulting Ad-Tff2-CTP-Flag construct translates into a TFF2 fused with two CTP and three Flag motifs.
Administered Ad-Tff2-CTP-Flag decreased tumorigenesis and suppressed the expansion of myeloid cells in vivo. The fusion peptide TFF2-CTP-Flag delivered by adenovirus Ad-Tff2-CTP-Flag, as well as purified recombinant fusion TFF2-CTP-Flag, was retained in the blood longer compared with wild-type TFF2 delivered by Ad-Tff2 or recombinant TFF2. Consistently, purified recombinant fusion TFF2-CTP-Flag suppressed expansion of myeloid cells by down-regulating cyclin D1 mRNA in vitro. Here, we demonstrate for the first time the retained bioactivity and possible pharmacokinetic advantages of TFF2 with a modified C-terminus. abstract_id: PUBMED:37646879 Diagnosing Provider, Referral Patterns, Facility Type, and Patient Satisfaction Among Iowa Rectal Cancer Patients. Purpose: Rectal cancer treatment at high-volume centers is associated with higher likelihood of guideline-concordant care and improved outcomes. Whether rectal cancer patients are referred for treatment at high-volume hospitals may depend on diagnosing provider specialty. We aimed to determine associations of diagnosing provider specialty with treating provider specialty and characteristics of the treating facility for rectal cancer patients in Iowa. Methods: Rectal cancer patients identified using the Iowa Cancer Registry completed a mailed survey on their treatment experience and decision-making process. Provider type was defined by provider specialty and whether the provider referred patients elsewhere for surgery. Multivariable-adjusted logistic regression models were used to examine predictors of being diagnosed by a general surgeon who also performed the subsequent surgery. Results: Of 417 patients contacted, 381 (76%) completed the survey; our final analytical sample size was 267. Half of respondents were diagnosed by a gastroenterologist who referred them elsewhere; 30% were diagnosed by a general surgeon who referred them elsewhere, and 20% were diagnosed by a general surgeon who performed the surgery. Respondents who were ≥ 65 years old, had less than a college education, and who made < $50,000 per year were more likely to be diagnosed by a general surgeon who performed surgery. In multivariable-adjusted models, respondents diagnosed and treated by the same general surgeon were more likely to have surgery at hospitals with low annual colorectal cancer surgery volume and less likely to be satisfied with their care. Conclusions: Among rectal cancer patients in Iowa, respondents who were diagnosed and treated by the same provider were less likely to get treatment at a high-volume facility. This study underscores the importance of provider referral in centralization of rectal cancer care. abstract_id: PUBMED:36169748 COVID-19 Pandemic Had Minimal Impact on Colonoscopy Completion After Colorectal Cancer Red Flag Sign or Symptoms in US Veterans. Background: Delays in colonoscopy work-up for red flag signs or symptoms of colorectal cancer (CRC) during the COVID-19 pandemic are not well characterized. Aims: To examine colonoscopy uptake and time to colonoscopy after red flag diagnosis, before and during the COVID-19 pandemic. Methods: Cohort study of adults ages 50-75 with iron deficiency anemia (IDA), hematochezia, or abnormal stool blood test receiving Veterans Health Administration (VHA) care from April 2019 to December 2020.
Index date was first red flag diagnosis date, categorized into "pre" (April-December 2019) and "intra" (April-December 2020) policy implementation prioritizing diagnostic procedures, allowing for a 3-month "washout" (January-March 2020) period. Outcomes were colonoscopy completion and time to colonoscopy pre- vs. intra-COVID-19, examined using multivariable Cox models with hazard ratios (aHRs) and 95% confidence intervals (CIs). Results: There were 52,539 adults with red flag signs or symptoms (pre-COVID: 25,154; washout: 7527; intra-COVID: 19,858). Proportion completing colonoscopy was similar pre- vs. intra-COVID-19 (27.0% vs. 26.5%; p = 0.24). Median time to colonoscopy among colonoscopy completers was similar for pre- vs. intra-COVID-19 (46 vs. 42 days), but longer for individuals with IDA (60 vs. 49 days). There was no association between time period and colonoscopy completion (aHR: 0.99, 95% CI 0.95-1.03). Conclusions: Colonoscopy work-up of CRC red flag signs and symptoms was not delayed within VHA during the COVID-19 pandemic, possibly due to VHA policies supporting prioritization and completion. Further work is needed to understand how COVID-19 policies on screening and surveillance impact CRC-related outcomes, and how to optimize colonoscopy completion after a red flag diagnosis. abstract_id: PUBMED:33749502 The effectiveness of the colorectal cancer referral pathway - identification of colorectal cancer in a Swedish region. Introduction: To shorten the time for diagnosis of suspected colorectal cancer (CRC), a standardized colorectal cancer referral pathway (CCRP) was introduced in Sweden in September 2016. However, the effects of the CCRP are still uncertain, and CRC is also found in patients undergoing a routine colonoscopy. Objective: To identify all CRC cases in the Region Örebro County and to investigate via which diagnostic pathway they were diagnosed. Furthermore, to investigate the reasons for and possible effect of not being included in the CCRP for cases found via colonoscopy. Methods: Review of medical records of patients with CRC referred to the department of surgery in the Region Örebro County in 2016-2018 (n = 459). Results: In CRC cases found through colonoscopy (n = 347), 37.5% were diagnosed via a routine waiting list and 62.5% within the CCRP. No difference in tumor stage or tumor grade was found between the two groups. The non-CCRP group showed a longer time to diagnosis than the CCRP group (21.5 days, IQR 7-43 vs. 13 days, IQR 8-17 (p < .001), respectively). Non-rectal cancer was more common in the non-CCRP group (81.5% vs. 57.6%, p < .001). The non-CCRP group had a lower median Hb value (106, IQR 87-129 vs. 117, IQR 101-136, p = .001). 85% of the non-CCRP group were found to meet one or more CCRP referral criteria, with bleeding anemia being the dominant criterion to meet.
Methods: We ascertained CRC patients' experienced symptoms and screening through medical abstractions from an ongoing population-based study of CRC patients identified through a SEER cancer registry (N = 626). We used logistic regression to estimate odds ratios and 95% confidence intervals for the association between symptoms and CRC anatomic site. Additional analyses were stratified by age at diagnosis. Early-onset was defined as less than 50 years of age at CRC diagnosis. Results: Participants who experienced blood in stool were more likely (odds ratio (95% confidence interval)) to have rectal (vs. colon) cancer (4.37 (3.02, 6.33)), as were patients who experienced changes to stool (1.78 (1.21, 2.60)). Patients diagnosed with colon cancer were more likely to present with abdominal pain (0.30 (0.19, 0.47)), anemia (0.40 (0.21, 0.75)), other symptoms (0.33 (0.19, 0.55)) and no symptoms (0.68 (0.44, 1.04)). When stratifying by age at diagnosis, we found that the association between blood in stool and rectal tumor location was particularly pronounced for patients with early-onset CRC (6.48 (2.73, 15.41)). Conclusions: Common pre-diagnostic red flag symptoms are associated with CRC anatomic site. These findings can inform best practices for gastroenterologist triage of care and early evaluation of CRC and are of key importance given the rise of early-onset (pre-screening age) CRC. Trial Registration: Not applicable to this study and analysis. Answer: The effectiveness of the "red flag" referral pathway in diagnosing colorectal carcinoma (CRC) appears to be variable based on the evidence from different studies. One study from Northern Ireland found that the "red flag" referral system was not particularly effective in identifying patients with CRC. Out of 200 patients referred through this system, only three were diagnosed with CRC, and less than half of the referrals adhered to the guidelines. The authors suggested that a review of the system should be undertaken (PUBMED:23620610). In contrast, a study from Madrid reported that a rapid referral pathway for suspected CRC, which is similar to the "red flag" system, was effective in reducing waiting times for colonoscopy and overall waiting time to final treatment. It also appeared to be an effective strategy for diagnosing CRC in its early stages, with a higher percentage of Stage A CRC diagnosed compared to the standard pathway (PUBMED:21976660). A retrospective cohort study in England found that delays in referral from primary care were associated with a later stage of cancer at diagnosis and worse prognosis. This suggests that timely referral, which is a key component of the "red flag" pathway, is crucial for better outcomes (PUBMED:32540874). Another study highlighted that lack of awareness of referral guidelines among GPs and concerns about over-referral are barriers to early diagnosis of CRC. This indicates that the effectiveness of the "red flag" pathway may be compromised by these factors (PUBMED:30564682). Furthermore, a study analyzing survival rates for patients diagnosed with CRC by mode of referral found that patients referred through the "two-week wait" pathway, which is a type of "red flag" referral, had shorter survival times than those referred by a routine pathway. This suggests that the pathway may not be associated with improved survival for CRC patients (PUBMED:28600231). 
In summary, while some studies suggest that rapid referral pathways can be effective in diagnosing CRC early and reducing waiting times, other studies indicate that the "red flag" referral pathway may not be as effective as intended, with issues such as adherence to guidelines, GP awareness, and concerns about over-referral affecting its success.
Instruction: Stochastic Predictions of Cell Kill During Stereotactic Ablative Radiation Therapy: Do Hypoxia and Reoxygenation Really Matter? Abstracts: abstract_id: PUBMED:27212197 Stochastic Predictions of Cell Kill During Stereotactic Ablative Radiation Therapy: Do Hypoxia and Reoxygenation Really Matter? Purpose: To simulate stereotactic ablative radiation therapy on hypoxic and well-oxygenated in silico tumors, incorporating probabilistic parameter distributions and linear-quadratic versus linear-quadratic-cubic methodology and the evaluation of optimal fractionation schemes using biological effective dose (BED with α/β = 10 or 3) comparisons. Methods And Materials: A temporal tumor growth and radiation therapy algorithm simulated high-dose external beam radiation therapy using stochastic methods. Realistic biological proliferative cellular hierarchy and pO2 histograms were incorporated into the 10^8-cell tumor model, with randomized radiation therapy applied during continual cell proliferation and volume-based gradual tumor reoxygenation. Dose fractions ranged from 6 to 35 Gy, with predictive outcomes presented in terms of the total doses (converted to BED) required to eliminate all cells that could potentially regenerate the tumor. Results: Well-oxygenated tumor control BED10 outcomes were not significantly different for high-dose versus conventional radiation therapy (BED10: 79-84 Gy; Equivalent Dose in 2 Gy fractions with α/β of 10: 66-70 Gy); however, total treatment times decreased from 7 down to 1-3 weeks. For hypoxic tumors, an additional 28 Gy (51 Gy BED10) was required, with BED10 increasing with dose per fraction due to wasted dose in the final fraction. Fractions of 9 Gy offered a good compromise between total treatment time and BED, with BED10:BED3 of 84:176 Gy for oxic and 132:278 Gy for non-reoxygenating hypoxic tumors. Initial doses of 12 Gy followed by 6 Gy further increased the therapeutic ratio. When delivering ≥9 Gy per fraction, applying reoxygenation and/or linear-quadratic-cubic cell survival both affected tumor control doses by a significant 1-2 fractions. Conclusions: The complex temporal dynamics of tumor oxygenation combined with probabilistic cell kinetics in the modeling of radiation therapy requires sophisticated stochastic modeling to predict tumor cell kill. For stereotactic ablative radiation therapy, high doses in the first week followed by doses that are more moderate may be beneficial because a high percentage of hypoxic cells could be eradicated early while keeping the required BED10 relatively low and BED3 toxicity to tolerable levels. abstract_id: PUBMED:24860988 Radiobiology of ablative dose in stereotactic irradiation: data update. Stereotactic radiotherapy is a radiation technique that is becoming increasingly available and applicable for physicians. Good efficacy and safety are observed in clinical practice. However, the radiobiology of ablative radiation is still under question. The radiobiological principles of the 5 Rs have to be discussed. The roles of hypoxia and vascularization (more specifically, angiogenesis and vasculogenesis) seem to be dominant. abstract_id: PUBMED:32742453 Radiobiology of stereotactic ablative radiotherapy (SABR): perspectives of clinical oncologists. Stereotactic ablative radiotherapy (SABR) is a novel radiation treatment method that delivers an intense dose of radiation to the treatment targets with high accuracy. The excellent local control and tolerance profile of SABR have made it an important modality in cancer treatment.
The radiobiology of SABR is a key factor in understanding and further optimizing the benefits of SABR. In this review, we have addressed several issues in the radiobiology of SABR from the perspective of clinical oncologists. The appropriateness of the linear-quadratic (LQ) model for SABR is controversial based on preclinical data, but it is a reliable tool from the perspective of clinical application because the biological effective dose (BED) calculated with it can represent the tumor control probability (TCP). Hypoxia is a common phenomenon in SABR in spite of the relatively small tumor size and has a negative effect on the efficacy of SABR. Preliminary studies indicate that a hypoxic radiosensitizer combined with SABR may be a feasible strategy, but so far there is not adequate evidence to support its application in routine practice. The vascular changes of endothelial apoptosis and blood perfusion reduction in SABR may enhance the response of tumor cells to radiation. Combination of SABR with anti-angiogenesis therapy has shown promising efficacy and good tolerance in advanced cancers. SABR is more powerful in enhancing antitumor immunity and works better with immune checkpoint inhibitors (ICIs) than conventional fractionation radiotherapy. Combination of SABR with ICIs has become a practical option for cancer patients with metastases. abstract_id: PUBMED:26756026 Radiobiological mechanisms of stereotactic body radiation therapy and stereotactic radiation surgery. Despite the increasing use of stereotactic body radiation therapy (SBRT) and stereotactic radiation surgery (SRS) in recent years, the biological basis of these high-dose hypo-fractionated radiotherapy modalities has been elusive. Given that most human tumors contain radioresistant hypoxic tumor cells, the radiobiological principles for conventional multiple-fractionated radiotherapy cannot account for the high efficacy of SBRT and SRS. Recent emerging evidence strongly indicates that SBRT and SRS not only directly kill tumor cells, but also destroy the tumor vascular beds, thereby deteriorating the intratumor microenvironment and leading to indirect tumor cell death. Furthermore, indications are that the massive release of tumor antigens from the tumor cells directly and indirectly killed by SBRT and SRS stimulates anti-tumor immunity, thereby suppressing recurrence and metastatic tumor growth. The reoxygenation, repair, repopulation, and redistribution, which are important components in the response of tumors to conventional fractionated radiotherapy, play relatively little role in SBRT and SRS. The linear-quadratic model, which accounts for only direct cell death, has been suggested to overestimate the cell death caused by high-dose-per-fraction irradiation. However, the model may in some clinical cases incidentally not overestimate total cell death, because high-dose irradiation causes additional cell death through indirect mechanisms. For the improvement of the efficacy of SBRT and SRS, further investigation is warranted to gain detailed insights into the mechanisms underlying SBRT and SRS. abstract_id: PUBMED:25468743 Image-guided stereotactic ablative radiotherapy for the liver: a safe and effective treatment. Aims: Stereotactic ablative body radiotherapy (SABR) is a non-invasive treatment option for inoperable patients or patients with irresectable liver tumors. Outcome and toxicity were evaluated retrospectively in this single-institution patient cohort.
Patients And Methods: Between 2010 and 2014, 39 lesions were irradiated in 33 consecutive patients (18 male, 15 female, median age of 68 years). All the lesions were liver metastases (n = 34) or primary hepatocellular carcinomas (n = 5). The patients had undergone four-dimensional respiration-correlated PET-CT for treatment simulation to capture tumor motion. We analyzed local control with a focus on CT-based response at three months, one year and two years after treatment, looking at overall survival and the progression pattern. Results: All patients were treated with hypofractionated image-guided stereotactic radiotherapy. The equivalent dose in 2 Gy fractions varied from 62.5 Gy to 150 Gy, delivered in 3-10 fractions (median dose 93.8 Gy, alpha/beta = 10). The CT-based regression pattern three months after radiotherapy revealed partial regression in 72.7% of patients with a complete remission in 27.3% of the cases. The site of first progression was predominantly distant. One- and two-year overall survival rates were 85.4% and 68.8%, respectively. No toxicity of grade 2 or higher according to the NCI Common Terminology Criteria for Adverse Events v4.0 was observed. Conclusion: SABR is a safe and efficient treatment for selected inoperable patients or irresectable tumors of the liver. Future studies should combine SABR with systemic treatment acting in synergy with radiation, such as immunological interventions or hypoxic cell radiosensitizers to prevent distant relapse. abstract_id: PUBMED:20832663 Stereotactic ablative radiotherapy should be combined with a hypoxic cell radiosensitizer. Purpose: To evaluate the effect of tumor hypoxia on the expected level of cell killing by regimens of stereotactic ablative radiotherapy (SABR) and to determine the extent to which the negative effect of hypoxia could be prevented using a clinically available hypoxic cell radiosensitizer. Results And Discussion: We have calculated the expected level of tumor cell killing from regimens of SABR, both with and without the assumption that 20% of the tumor cells are hypoxic, using the standard linear quadratic model and the universal survival curve modification. We compare the results obtained with our own clinical data for lung tumors of different sizes and with published data from other studies. We also have calculated the expected effect on cell survival of adding the hypoxic cell sensitizer etanidazole at clinically achievable drug concentrations. Modeling tumor cell killing with any of the currently used regimens of SABR produces results that are inconsistent with the majority of clinical findings if tumor hypoxia is not considered. However, with the assumption of tumor hypoxia, the expected level of cell killing is consistent with clinical data. For only some of the smallest tumors are the clinical data consistent with no tumor hypoxia, but there could be other reasons for the sensitivity of these tumors. The addition of etanidazole at clinically achievable tumor concentrations produces a large increase in the expected level of tumor cell killing from the large radiation doses used in SABR. Conclusions: The presence of tumor hypoxia is a major negative factor in limiting the curability of tumors by SABR at radiation doses that are tolerable to surrounding normal tissues. However, this negative effect of hypoxia could be overcome by the addition of clinically tolerable doses of the hypoxic cell radiosensitizer etanidazole. 
abstract_id: PUBMED:7919667 Influence of time-dependent stochastic heterogeneity on the radiation response of a cell population. A solid tumor is a cell population with extensive cellular heterogeneity, which severely complicates tumor treatment by therapeutic agents such as ionizing radiation. We model the response to ionizing radiation of a multicellular population whose cells have time-dependent stochastic radiosensitivity. A reaction-diffusion equation, obtained by assuming a random process with the radiation response of a cell partly determined by competition between repair and binary misrepair of DNA double-strand breaks, is used. By a suitable transformation, the equation is reduced to that of an Ornstein-Uhlenbeck process, so explicit analytic solutions are available. Three consequences of the model's assumptions are that (1) response diversity within a population increases resistance to radiation, that is, the population surviving is greater than that anticipated from considering an average cell; (2) resistant cell subpopulations preferentially spared by the first part of a prolonged radiation protocol are driven biologically into more radiosensitive states as time increases, that is, resensitization occurs; (3) an inverse dose-rate effect, that is, an increase in cell killing as overall irradiation time is increased, occurs in those situations where resensitization dominates effects due to binary misrepair of repairable damage. The results are consistent with the classic results of Elkind and coworkers on extra cell killing attributed to cell-cycle redistribution and are in agreement with some recent results on in vitro and in vivo population radiosensitivity. They also generalize the therapeutic paradigm that low dose rate or fractionated radiation can help overcome hypoxic radioresistance in tumors. abstract_id: PUBMED:24688774 Exploiting sensitization windows of opportunity in hyper and hypo-fractionated radiation therapy. In contrast to the conventional radiotherapy/chemoradiotherapy paradigms used in the treatment of the majority of cancer types, this review will describe two areas of radiobiology, hyperfractionated and hypofractionated radiation therapy, for cancer treatment focusing on the application of novel concepts underlying these treatment modalities. The initial part of the review discusses the phenomenon of hyper-radiation sensitivity (HRS) at lower doses (0.1 to 0.6 Gy), describing the underlying mechanisms and how this could enhance the effects of chemotherapy, particularly, in hyperfractionated settings. The second part examines the radiobiological/physiological mechanisms underlying the effects of high-dose hypofractionated radiation therapy that can be exploited for tumor cure. These include abscopal/bystander effects, activation of immune system, endothelial cell death and effect of hypoxia with re-oxygenation. These biological properties along with targeted dose delivery and distribution to reduce normal tissue toxicity may make high-dose hypofractionation more effective than conventional radiation therapy for treatment of advanced cancers. The novel radiation physics based methods that take into consideration the tumor volume to be irradiated and normal tissue avoidance/tolerance can further improve treatment outcome and post-treatment quality of life.
In conclusion, there is enough evidence to further explore novel avenues that exploit the biological mechanisms of hyper-fractionation to enhance the efficacy of chemotherapy and of hypo-fractionated radiation therapy to enhance tumor control, while using imaging and technological advances to reduce toxicity. abstract_id: PUBMED:30836165 Biological Principles of Stereotactic Body Radiation Therapy (SBRT) and Stereotactic Radiation Surgery (SRS): Indirect Cell Death. Purpose: To review the radiobiological mechanisms of stereotactic body radiation therapy (SBRT) and stereotactic radiation surgery (SRS). Methods And Materials: We reviewed previous reports and recent observations on the effects of high-dose irradiation on tumor cell survival, tumor vasculature, and antitumor immunity. We then assessed the potential implications of these biological changes associated with SBRT and SRS. Results: Irradiation with doses higher than approximately 10 Gy/fraction causes significant vascular injury in tumors, leading to secondary tumor cell death. Irradiation of tumors with high doses has also been reported to increase antitumor immunity, and various approaches are being investigated to further elevate antitumor immunity. The mechanism of normal tissue damage by high-dose irradiation needs to be further investigated. Conclusions: In addition to directly killing tumor cells, high-dose irradiation used in SBRT and SRS induces indirect tumor cell death via vascular damage and antitumor immunity. Further studies are warranted to better understand the biological mechanisms underlying the high efficacy of clinical SBRT and SRS and to further improve the efficacy of SBRT and SRS.
Using Hoechst 33342 vascular perfusion dye, CD31, and cleaved caspase 3 co-immunostaining, we found a rapid and transient vascular collapse, which might have resulted in poor intratumor perfusion of F-MISO PET tracer or pimonidazole delivered at day 0, leading to decreased hypoxic signals at day 0 by PET or pimonidazole analyses. Conclusions: We found tumor hypoxia levels decreased immediately after delivery of a single dose of 15 Gy and had returned to the pretreatment levels 2 days after irradiation and had decreased slightly by day 6. Our results indicate that single high-dose irradiation can produce a rapid, but reversible, vascular collapse in tumors. Answer: Yes, hypoxia and reoxygenation do matter in the context of stereotactic ablative radiation therapy (SABR). The simulation study by PUBMED:27212197 indicates that hypoxic tumors require an additional 28 Gy (51 Gy BED10) compared to well-oxygenated tumors, and the total biological effective dose (BED) increases with dose per fraction due to wasted dose in the final fraction. The study also suggests that high doses in the first week followed by more moderate doses may be beneficial, as this approach could eradicate a high percentage of hypoxic cells while keeping the required BED10 relatively low and BED3 toxicity to tolerable levels. The importance of hypoxia in the radiobiology of ablative radiation is further supported by PUBMED:24860988, which states that the roles of hypoxia and vascularization, specifically angiogenesis and vasculogenesis, appear to be dominant factors in the efficacy of SABR. PUBMED:32742453 discusses the negative effect of hypoxia on the efficacy of SABR and suggests that combining SABR with a hypoxic radiosensitizer may be a feasible strategy, although there is not yet adequate evidence to support its routine application. PUBMED:26756026 highlights that SBRT and SRS, which are high-dose hypo-fractionated radiotherapy modalities, are effective not only in directly killing tumor cells but also in destroying tumor vascular beds, leading to an indirect tumor cell death. This suggests that the effects of reoxygenation, repair, repopulation, and redistribution, which are significant in conventional fractionated radiotherapy, play a relatively minor role in SBRT and SRS. PUBMED:20832663 specifically evaluates the effect of tumor hypoxia on the level of cell killing by SABR and concludes that tumor hypoxia is a major negative factor in limiting the curability of tumors by SABR. The study also suggests that the negative effect of hypoxia could be overcome by the addition of the hypoxic cell radiosensitizer etanidazole. In summary, the stochastic predictions of cell kill during SABR indicate that hypoxia and reoxygenation are critical factors that influence the efficacy of the treatment.
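As a quick sanity check on the BED figures quoted above, the conversions follow from the standard linear-quadratic biological effective dose formula (a minimal worked example; the five-fraction count used below is an illustrative assumption on our part, not a number stated in the abstracts):

$$\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),$$

where $n$ is the number of fractions and $d$ is the dose per fraction. For a single 9 Gy fraction this gives $\mathrm{BED}_{10} = 9\,(1 + 9/10) = 17.1$ Gy and $\mathrm{BED}_{3} = 9\,(1 + 9/3) = 36$ Gy, so roughly five such fractions yield $\mathrm{BED}_{10} \approx 85$ Gy and $\mathrm{BED}_{3} \approx 180$ Gy, close to the 84:176 Gy pair reported for oxic tumors in PUBMED:27212197; the small discrepancy is expected because those values come from a stochastic cell-kill simulation rather than closed-form LQ arithmetic.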
Instruction: Does stewardship make a difference in the quality of care? Abstracts: abstract_id: PUBMED:36801625 Quality improvement: Antimicrobial stewardship in pediatric primary care. Background: Antimicrobial resistance is the resistance of microorganisms to antibacterial, antiviral, antiparasitic, and antifungal medication, resulting in increased healthcare costs with extended hospital stays in the United States. The goals of this quality improvement project were to increase the understanding and importance of antimicrobial stewardship by nurses and health care staff and increase pediatric parents'/guardians' knowledge of the proper use of antibiotics and differences between viruses and bacterial infections. Methods: A retrospective pre-post study was conducted in a midwestern clinic to determine if an antimicrobial stewardship teaching leaflet increased parent/guardian antimicrobial stewardship knowledge. The two interventions for patient education were a modified United States Centers for Disease Control and Prevention antimicrobial stewardship teaching leaflet and a poster regarding antimicrobial stewardship. Results: Seventy-six parents/guardians participated in the pre-intervention survey, with 56 being included in the post-intervention survey. There was a significant increase in knowledge between the pre-intervention survey and the post-intervention survey with a large effect size, p < .001, d = 0.86. This effect was also seen when comparing parents/guardians with no college education, who had a mean knowledge change score of 0.62, to those parents/guardians with a college education, whose mean knowledge increase was 0.23, p < .001 with a large effect size of 0.81. Health care staff thought the antimicrobial stewardship teaching leaflets and posters were beneficial. Practice Implications: The use of an antimicrobial stewardship teaching leaflet and a patient education poster may be effective interventions for improving healthcare staff's and pediatric parents'/guardians' knowledge of antimicrobial stewardship. abstract_id: PUBMED:24836515 Does stewardship make a difference in the quality of care? Evidence from clinics and pharmacies in Kenya and Ghana. Objective: To measure the level and variation of healthcare quality provided by different types of healthcare facilities in Ghana and Kenya and which factors (including levels of government engagement with small private providers) are associated with improved quality. Design: Provider knowledge was assessed through responses to clinical vignettes. Associations between performance on vignettes and facility characteristics, provider characteristics and self-reported interaction with government were examined using descriptive statistics and multivariate regressions. Setting: Survey of 300 healthcare facilities each in Ghana and Kenya including hospitals, clinics, nursing homes, pharmacies and chemical shops. Private facilities were oversampled. Participants: Person who generally saw the most patients at each facility. Main Outcome Measure(s): Percent of items answered correctly, measured against clinical practice guidelines and the World Health Organization's protocol. Results: Overall, average quality was low. Over 90% of facilities performed less than half of necessary items. Incorrect antibiotic use was frequent.
There was some evidence of a positive association between government stewardship and quality among clinics, with the greatest effect (a 7-percentage-point increase, P = 0.03) for clinics reporting interactions with government across all six stewardship elements. No analogous association was found for pharmacies. No significant effect was found for any of the stewardship elements individually, nor according to type of engagement. Conclusions: Government stewardship appears to have some cumulative association with quality for clinics, suggesting that comprehensive engagement with providers may influence quality. However, our research indicates that continued medical education (CME) by itself is not associated with improved care. abstract_id: PUBMED:38053566 Medicines stewardship. Medicines stewardship refers to coordinated strategies and interventions to optimise medicines use, usually within a specific therapeutic area. Medicines stewardship programs can reduce variations in practice and improve patient outcomes. Therapeutic domains for medicines stewardship are chosen to address frequently used drug classes associated with a high risk of adverse outcomes. Some examples include antimicrobial, opioid analgesic, anticoagulation and psychotropic stewardship. Common elements of successful stewardship programs include multidisciplinary leadership, stakeholder engagement, tailored communication strategies, behavioural changes, implementation science methodologies, and ongoing program monitoring, evaluation and reporting. Medicines stewardship is a continual quality improvement process that requires ongoing support and resources, as well as clinician and consumer engagement, to remain sustainable. It is critical for hospital-based medicines stewardship programs to consider impacts on care in the community when making and communicating changes to patient therapy. This ensures that stewardship efforts are sustained across transitions of care. abstract_id: PUBMED:28806902 Monitoring, documenting and reporting the quality of antibiotic use in the Netherlands: a pilot study to establish a national antimicrobial stewardship registry. Background: The Dutch Working Party on Antibiotic Policy is developing a national antimicrobial stewardship registry. This registry will report both the quality of antibiotic use in hospitals in the Netherlands and the stewardship activities employed. It is currently unclear which aspects of the quality of antibiotic use are monitored by antimicrobial stewardship teams (A-teams) and can be used as indicators for the stewardship registry. In this pilot study we aimed to determine which stewardship objectives are eligible for the envisioned registry. Methods: We performed an observational pilot study among five Dutch hospitals. We assessed which of the 14 validated stewardship objectives (11 process of care recommendations and 3 structure of care recommendations) the A-teams monitored and documented in individual patients. They provided, where possible, data to compute quality indicator (QI) performance scores in line with recently developed QIs to measure appropriate antibiotic use in hospitalized adults for the period of January 2015 through December 2015. Results: All hospitals had a local antibiotic guideline describing recommended antimicrobial use. All A-teams monitored the performance of bedside consultations in Staphylococcus aureus bacteremia and the prescription of restricted antimicrobials.
Documentation and reporting were the best for the use of restricted antimicrobials: 80% of the A-teams could report data. Lack of time and the absence of an electronic medical record system enabling documentation during the daily work flow were the main barriers hindering documentation and reporting. Conclusions: Five out of 11 stewardship objectives were actively monitored by A-teams. Without extra effort, 4 A-teams could report on the quality of use of restricted antibiotics. Therefore, this aspect of antibiotic use should be the starting point of the national antimicrobial stewardship registry. Our registry is expected to become a powerful tool to evaluate progress and impact of antimicrobial stewardship programs in hospitals. abstract_id: PUBMED:33253924 What the COVID-19 Pandemic Can Teach Us About Resource Stewardship and Quality in Health Care. The coronavirus disease 2019 pandemic has forever changed how we view health care service delivery. Although there are undoubtedly some unintended consequences that will result from current health care service reallocation, it provides a unique opportunity to consider how to deliver quality care currently, and after the pandemic. In the context of lessons learned, moving forward some of what was previously routine could remain reserved for more exceptional circumstances. To determine what is "routine," what is "essential," and what is "exceptional," it is necessary to view medical decisions within the paradigm of high-quality care. The Institute for Healthcare Improvement definition of the dimensions of quality is based on whether the care is safe, effective, patient-centered, timely, efficient, and equitable. This type of stewardship has been applied to many interventions already deemed unnecessary by organizations such as the Choosing Wisely initiative, but the coronavirus disease 2019 pandemic provides a lens from which to consider other aspects of care. The following will provide examples from Allergy/Immunology that outline how we can reconsider what quality means in the post-coronavirus disease health care system. abstract_id: PUBMED:34602864 Quality Improvement Interventions and Implementation Strategies for Urine Culture Stewardship in the Acute Care Setting: Advances and Challenges. Purpose Of Review: The goal of this article is to highlight how and why urinalyses and urine cultures are misused, review quality improvement interventions to optimize urine culture utilization, and highlight how to implement successful, sustainable interventions to improve urine culture practices in the acute care setting. Recent Findings: Quality improvement initiatives aimed at reducing inappropriate treatment of asymptomatic bacteriuria often focus on optimizing urine test utilization (i.e., urine culture stewardship). Urine culture stewardship interventions in acute care hospitals span the spectrum of quality improvement initiatives, ranging from strong systems-based interventions like suppression of urine culture results to weaker interventions that focus on clinician education alone. While most urine culture stewardship interventions have met with some success, overall results are mixed, and implementation strategies to improve sustainability are not well understood. Summary: Successful diagnostic stewardship interventions are based on an assessment of underlying key drivers and focus on multifaceted and complementary approaches. Individual intervention components have varying impacts on effectiveness, provider autonomy, and sustainability. 
The best urine culture stewardship strategies ultimately include both technical and socio-adaptive components with long-term, iterative feedback required for sustainability. abstract_id: PUBMED:31988820 Attitudes and Perceptions amongst Critical Care Physicians towards Handshake Antimicrobial Stewardship Rounds. Rationale: In an era of antimicrobial resistance, antimicrobial stewardship programs are tasked with reducing inappropriate use of antimicrobials in community and hospital settings. Intensive care units are unique, high-stakes environments where high usage of broad-spectrum antimicrobials is often seen. Handshake stewardship has emerged as an effective mode of prospective audit and feedback to help optimize antimicrobial usage, emphasizing an in-person approach to providing feedback. Objectives: Six months following the implementation of handshake stewardship rounds in our intensive care unit, we performed a cross-sectional survey of critical care physicians to assess their attitudes and perceptions towards handshake stewardship rounds and preferred mode of delivery of antimicrobial stewardship prospective audit and feedback strategies. Methods: A web-based survey was distributed to 22 critical care physicians working in our hospital and responses were collected over a two-week period. Measurements And Main Results: Most critical care physicians believe that handshake stewardship rounds improve the quality of patient care (85.7%) and few believe that handshake stewardship rounds are an ineffective use of their time (14.3%). The majority of critical care physicians believe formal, scheduled rounds with face-to-face verbal interaction are very useful compared to providing written suggestions in the absence of face-to-face interaction (71.4% vs 0%). Conclusions: Based upon our survey results, handshake stewardship is valued amongst the majority of critical care physicians. Antimicrobial stewardship prospective audit and feedback strategies emphasizing face-to-face interaction are favored amongst critical care physicians. abstract_id: PUBMED:30126585 Metrics of Antimicrobial Stewardship Programs. Appropriate metrics are needed to measure the quality, clinical, and financial impacts of antimicrobial stewardship programs. Metrics are typically categorized into antibiotic use measures, process measures, quality measures, costs, and clinical outcome measures. Traditionally, antimicrobial stewardship metrics have focused on antibiotic use, antibiotic costs, and process measures. With health care reform, practice should shift to focusing on the clinical impact of stewardship programs over financial impact. This article reviews the various antimicrobial stewardship metrics that have been described in the literature, evidence to support these metrics, controversies surrounding metrics, and areas in which future research is necessary. abstract_id: PUBMED:31806238 Antimicrobial stewardship near the end of life in aged care homes. Background: The objective of this study was to understand how aged care home health professionals perceive antimicrobial use near the end of life and how they perceive potential antimicrobial stewardship activities near the end of life in aged care homes. Methods: Qualitative semi-structured interviews were undertaken with general practitioners, nurses, and pharmacists who provide routine care in aged care homes in Victoria, Australia. Interviews were coded using frameworks for understanding behavior change.
Results: Themes were established within 14 interviews, and an additional 6 interviews were undertaken to ensure thematic saturation. Two major themes emerged: (i) Antimicrobial stewardship activities near the end of life in aged care homes need to enable aged care home nurses to make decisions substantiated by evidence-based clinical knowledge. Antimicrobial stewardship should clearly be part of an aged care home nurse's role, and accreditation standards provide powerful motivation for behavior change. (ii) Antimicrobial stewardship activities near the end of life in aged care homes must address family confidence in resident wellbeing. Antimicrobial stewardship activities should be inclusive of family involvement, and messages should highlight the point that antimicrobial stewardship improves care. Conclusions: Antimicrobial stewardship activities that reinforce evidence-based clinical decision-making by aged care home nurses and address family confidence in resident wellbeing are required near the end of life in aged care homes. Accreditation standards are important motivators for behavior change in aged care homes. abstract_id: PUBMED:35172363 Antimicrobial Stewardship in the ICU. Increasing rates of infection and multidrug-resistant pathogens, along with a high use of antimicrobial therapy, make the intensive care unit (ICU) an ideal setting for implementing and supporting antimicrobial stewardship efforts. Overuse of antimicrobial agents is common in the ICU, as practitioners are challenged daily with achieving early, appropriate empiric antimicrobial therapy to improve patient outcomes. While early antimicrobial stewardship programs focused on the financial implications of antimicrobial overuse, current goals of stewardship programs align closely with those of critical care providers: to optimize patient outcomes, reduce development of resistance, and minimize adverse outcomes associated with antibiotic overuse and misuse such as acute kidney injury and Clostridioides difficile-associated disease. Significant opportunities exist in the ICU for critical care clinicians to support stewardship practices at the bedside, including thoughtful and restrained initiation of antimicrobial therapy, use of biomarkers in addition to rapid diagnostics, Staphylococcus aureus screening, and traditional microbiologic culture and susceptibilities to guide antibiotic de-escalation, and use of the shortest duration of therapy that is clinically appropriate. Integration of critical care practitioners into the initiatives of antimicrobial stewardship programs is key to their success. This review summarizes key components of antimicrobial stewardship programs and mechanisms for critical care practitioners to share the responsibility for antimicrobial stewardship. Answer: Yes, stewardship does make a difference in the quality of care. Evidence from various studies and quality improvement projects indicates that stewardship programs, particularly those focused on antimicrobial use, can lead to increased knowledge among healthcare staff and patients, improved patient outcomes, and reduced variations in practice.
For instance, a quality improvement project aimed at increasing understanding and importance of antimicrobial stewardship among nurses, healthcare staff, and pediatric parents/guardians showed a significant increase in knowledge following the intervention, indicating that educational materials such as leaflets and posters can be effective in improving knowledge of antimicrobial stewardship (PUBMED:36801625). In Kenya and Ghana, a study found that there was some evidence of a positive association between government stewardship and quality among clinics, suggesting that comprehensive engagement with providers may influence the quality of care. However, the study also noted that continued medical education alone was not associated with improved care, highlighting the need for multifaceted approaches (PUBMED:24836515). Medicines stewardship programs, which include antimicrobial stewardship, have been shown to reduce variations in practice and improve patient outcomes by implementing coordinated strategies and interventions to optimize medicine use (PUBMED:38053566). Furthermore, the development of a national antimicrobial stewardship registry in the Netherlands aimed to report the quality of antibiotic use in hospitals and stewardship activities employed, which is expected to become a powerful tool to evaluate the progress and impact of antimicrobial stewardship programs (PUBMED:28806902). The COVID-19 pandemic has also highlighted the importance of resource stewardship and quality in healthcare, with a focus on delivering care that is safe, effective, patient-centered, timely, efficient, and equitable (PUBMED:33253924). In the acute care setting, urine culture stewardship interventions have been reviewed, and successful diagnostic stewardship interventions are based on multifaceted and complementary approaches that include both technical and socio-adaptive components (PUBMED:34602864). Critical care physicians have expressed positive attitudes towards handshake antimicrobial stewardship rounds, valuing the improvement in the quality of patient care and preferring face-to-face interaction for antimicrobial stewardship prospective audit and feedback strategies (PUBMED:31988820). Lastly, antimicrobial stewardship in the ICU is crucial due to the high use of antimicrobial therapy and the challenge of achieving early, appropriate empiric antimicrobial therapy.
Instruction: Could the savory taste of snacks be a further risk factor for overweight in children? Abstracts: abstract_id: PUBMED:18367957 Could the savory taste of snacks be a further risk factor for overweight in children? Introduction: The quantity, type and composition of snack foods may play a role in the development and maintenance of obesity in children. A high consumption of energy-dense snacks may promote fat gain. Aims: To assess the type and number of snacks consumed weekly by a large sample of 8- to 10-year-old children, as well as to assess their relationship with body size. Results: The children consumed on average 4 snacks per day. There was no statistical difference in the number of servings per day between obese and nonobese children. However, the mean energy density of the foods consumed was significantly higher for obese and overweight children than for normal weight children [6.8 (0.3) kJ/g, 6.8 (0.16) kJ/g, and 6.3 (0.08) kJ/g, respectively; P < 0.05]. Logistic regression analysis showed that the energy density of the snacks (kJ/g), their savory taste (servings/week), television viewing (hours/day) and sports activity (hours/week) independently contributed to predicting obesity in children. However, when the parents' body mass index was included among the independent variables of the regression, only salty foods and sports activity showed an independent association with childhood obesity. Conclusions: Parents' eating habits and lifestyle influence those of their children, as suggested by the association between parents' obesity and their children's energy-dense food intake at snacktime, the savory taste of snacks and sedentary behavior. However, regardless of parents' body mass index, the preference for savory snacks seems to be associated with overweight in prepubertal children. abstract_id: PUBMED:24607656 Bitter taste phenotype and body weight predict children's selection of sweet and savory foods at a palatable test-meal. Previous studies show that children who are sensitive to the bitter taste of 6-n-propylthiouracil (PROP) report more frequent intake of sweets and less frequent intake of meats (savory fats) relative to children who are PROP insensitive. Laboratory studies are needed to confirm these findings. In this study, seventy-nine 4- to 6-year-olds from diverse ethnicities attended four laboratory sessions, the last of which included a palatable buffet consisting of savory-fats (e.g. pizza), sweet-fats (e.g. cookies, cakes), and sweets (e.g. juices, candies). PROP phenotype was classified by two methods: 1) a common screening procedure to divide children into tasters and nontasters, and 2) a three-concentration method used to approximate PROP thresholds. Height and weight were measured and saliva was collected for genotyping TAS2R38, a bitter taste receptor related to the PROP phenotype. Data were analyzed by General Linear Model ANOVA with intake from savory-fats, sweet-fats, and sweets as dependent variables and PROP status as the independent variable. BMI z-score, sex, age, and ethnicity were included as covariates. Adjusted energy intake from the food group "sweets" at the test-meal was greater for tasters than for nontasters. PROP status did not influence children's adjusted intake of savory-fats, but BMI z-score did. The TAS2R38 genotype did not impact intake at the test-meal. At a palatable buffet, PROP taster children preferentially consumed more sweets than nontaster children, while heavier children consumed more savory fats.
These findings may have implications for understanding differences in susceptibility to hyperphagia. abstract_id: PUBMED:28903111 Type 1 Taste Receptors in Taste and Metabolism. Our sense of taste allows us to evaluate the nutritive value of foods prior to ingesting them. Sweet taste signals the presence of sugars, and savory taste signals the presence of amino acids. The ability to identify these macronutrients in foods was likely crucial for the survival of our species when nourishing food sources were sparse. In modern, industrialized settings, taste perception continues to play an important role in human health as we attempt to prevent and treat conditions stemming from overnutrition. Recent research has revealed that type 1 taste receptors (T1Rs), which are largely responsible for sweet and umami taste, may also influence the absorption and metabolism of the foods we eat. Preliminary research shows that T1Rs contribute to intestinal glucose absorption, blood sugar and insulin regulation, and the body's responses to excessive energy intake. In light of these findings, T1Rs have come to be understood as nutrient sensors, among other roles, that facilitate the selection, digestion, and metabolism of foods. abstract_id: PUBMED:25891040 Taste and food reinforcement in non-overweight youth. Food reinforcement is related to increased energy intake, cross-sectionally related to obesity and prospectively related to weight gain in children, adolescents and adults. There is very limited research on how different characteristics of food are related to food reinforcement, and none on how foods from different taste categories (sweet, savory, salty) are related to food reinforcement. We tested differences in food reinforcement for favorite foods in these categories and used a reinforcing value questionnaire to assess how food reinforcement was related to energy intake in 198 non-overweight 8- to 12-year-old children. Results showed stronger food reinforcement for sweet foods in comparison to savory or salty foods. In multiple regression models, controlling for child sex, minority status and age, average reinforcing value was related to total energy and fat intake, and reinforcing value of savory foods was related to total energy and fat intake. Factor analysis showed one factor, the motivation to eat, rather than separate factors based on different taste categories. Liking ratings were unrelated to total energy intake. These results suggest that while there are differences in the reinforcing value of food by taste groups, there are no strong differences in the relationship between reinforcing value of food by taste groups and energy or macronutrient intake. abstract_id: PUBMED:32548966 Does eating in the absence of hunger extend to healthy snacks in children? Objectives: To assess if eating in the absence of hunger (EAH) extends to healthier snacks and examine the relationship between the home food environment and EAH in children with normal weight (NW) or overweight/obesity (OB) who are at low risk (LR) or high risk (HR) for obesity based on maternal obesity. Methods: EAH was assessed after lunch and dinner when children received either low energy dense fruit snacks or high energy dense sweet/savoury snacks. The availability of obesogenic foods in the home was assessed by the Home Food Inventory. Results: Data showed significant main effects of risk group (P=.0003) and snack type (P < .001).
EAH was significantly greater in HR-OB (284±8 kcal) than LR-NW (249±9 kcal) or HR-NW (251±8 kcal) children. Serving fruit rather than sweet/savoury snacks reduced energy intake, on average, by 60% (223 kcal) across risk groups. For each unit increase in the obesogenic home food environment, EAH of sweet/savoury snacks decreased by 1.83 calories. Conclusions: Offering low energy dense snacks after a meal can moderate EAH and increase children's intake of healthy foods. Increased access to obesogenic foods in the home may reduce the salience of high energy dense snacks when they become available in other settings. abstract_id: PUBMED:35942170 "I Like the One With Minions": The Influence of Marketing on Packages of Ultra-Processed Snacks on Children's Food Choices. Objective: This study aimed to assess the most consumed school snacks using the free listing and understand how marketing strategies on food labels influenced children's perceptions of snacks via focus groups. Design: The study design involved free lists and semi-structured focus group interviews. Setting: São Paulo, Brazil. Participants: A total of 69 children were involved in this study. Phenomenon Of Interest: Children's perceptions of food labels. Analysis: Food groups mentioned on the free lists were analyzed for their frequency and priority of occurrence. The focus groups were analyzed through content analysis. Results: Juices and chips were the most salient snacks, with availability and flavor as reasons for their consumption. Children found images on labels appealing, which created a desire for the food, although they could be deceptive. Snacks perceived as healthy were encouraged by parents, and children could more easily convince them to buy snacks with health claims. Colors and brands were important to catch children's attention and make the snack recognizable. Television commercials and mascots reinforced marketing strategies on labels. Conclusions And Implications: Our results point to the need for public health strategies to deal with the obesity epidemic through creating and implementing specific legislation to regulate food labels to discourage the consumption of unhealthy snacks and prohibit food marketing targeted at children. abstract_id: PUBMED:28465182 From the children's perspective: What are candy, snacks, and meals? Objective: There remains a lack of consensus on what distinguishes candy (i.e. features sugar as a principal ingredient, also called sweets or lollies), snack foods, and foods served at meals; therefore, this study examined characteristics elementary-aged children use to distinguish between these food categories. Methods: Participants were children aged 5-8 years (N = 41). Children were given 39 cards, each containing an image of a common American food (e.g. ice cream, fruit). Children sorted each card into either a "snack" or "candy" pile followed by a semi-structured one-on-one interview to identify children's perceptions of candy, snack foods, and foods served at meals. Verbatim transcripts were coded using a grounded theory approach to derive major themes. Results: All children classified foods such as crackers and dry cereal as snacks; all children classified foods such as skittles and solid chocolate as candy. There was less agreement for "dessert-like foods," such as cookies and ice cream, whereby some children classified these foods as candy and others as snacks.
Specifically, more children categorized ice cream and chocolate chip cookies as candy (61% and 63%, respectively) than children who categorized these as snack foods (39% and 36%, respectively). Qualitative interviews revealed 4 overarching themes that distinguished among candy, snack foods, and food served at meals: (1) taste, texture, and type; (2) portion size; (3) perception of health; and (4) time of day. Conclusion: Children categorized a variety of foods as both a candy and a snack. Accurate measurement of candy and snack consumption is needed through the use of clear, consistent terminology and comprehensive diet assessment tools. Intervention messaging should clearly distinguish between candy, snack foods, and foods served at meals to improve children's eating behavior. abstract_id: PUBMED:37328005 Caregiver perceptions of snacks for young children: A thematic synthesis of qualitative research. Snacks are inconsistently defined in nutrition research and dietary guidelines for young children, challenging efforts to improve diet quality. Although some guidelines suggest that snacks include at least two food groups and fit into an overall health promoting dietary pattern, snacks high in added sugars and sodium are highly marketed and frequently consumed. Understanding how caregivers perceive "snacks" for young children may aid in development of effective nutrition communications and behaviourally-informed dietary interventions for obesity prevention. We aimed to synthesize caregivers' perceptions of snacks for young children across qualitative studies. Four databases were searched for peer-reviewed qualitative articles including caregiver perceptions of "snacks" for children ≤5 years. We conducted thematic synthesis of study findings, concluding with the development of analytical themes. Data synthesis of fifteen articles from ten studies, conducted in the U.S., Europe, and Australia, revealed six analytical themes that captured food type, hedonic value, purpose, location, portion size, and time. Caregivers perceived snacks as both "healthy" and "unhealthy" foods. Less healthy snacks were described as highly liked foods, which required restriction and were consumed outside the home. Caregivers used snacks to manage behavior and curb hunger. Snack portions were described as "small", although caregivers reported various methods to estimate child portion size. Caregivers' perceptions of snacks revealed opportunities for targeted nutrition messaging, especially supporting responsive feeding and nutrient-dense food choices. In high-income countries, expert recommendations should consider caregivers' perceptions of snacks, more clearly defining nutrient-dense snacks that are enjoyable, achieve dietary requirements, reduce hunger, and promote healthy weight. abstract_id: PUBMED:34389378 Occasions, purposes, and contexts for offering snacks to preschool-aged children: Schemas of caregivers with low-income backgrounds. Objective: Snacking among preschool aged children is nearly universal and has been associated with overconsumed nutrients, particularly solid fats and added sugars (SoFAS). This research examined caregivers' schemas, or cognitive frameworks, for offering snacks to preschool-aged children. Methods: A qualitative design utilizing card sort methods was employed. Participants were 59 Black, Hispanic, and White caregivers of children aged 3-5 years with low-income backgrounds.
Caregivers sorted 63 cards with images of commonly consumed foods/beverages by preschool-aged children in three separate card sorts to characterize snacking occasions, purposes, and contexts. The mean SoFAS content (kcal/100 g) of foods/beverages was evaluated by snacking occasions (snacks vs. not-snacks), purposes, and contexts. Results: Just under two-thirds (38/63 food cards) of foods/beverages were classified as snacks with moderate to high agreement. Snacks were offered for non-nutritive (e.g., requests, rewards) and nutritive (e.g., hunger/thirst) purposes in routine (e.g., home, school) and social contexts (e.g., with grandparents). Snacks offered for non-nutritive purposes and in social contexts were higher in SoFAS than those offered for nutritive reasons and in routine contexts. Conclusions: Caregivers of young children offered various types of foods/beverages as snacks, with higher SoFAS snacks given for non-nutritive purposes and in social contexts. Understanding of caregivers' schemas for offering snacks to young children may inform targets for obesity prevention and anticipatory guidance to promote the development of healthful eating behaviors. abstract_id: PUBMED:19571216 Taste and weight: is there a link? Investigations of the relations between taste perception and obesity have concentrated largely on sweet and bitter tastes, with little work on the "savory" tastes-salt and glutamate-and very little work on sour taste. This article briefly reviews current understanding of the relations between the ability to taste different tastes (ie, taste threshold for sweet, bitter, sour, salt, and umami) and body mass. Obese children and adolescents show a disturbance in some tastes, with reported reductions in sweet and salt thresholds. Observations on relations between sweet taste threshold and obesity are contradictory; literature discrepancies may depend on the techniques used to evaluate taste. Obese women, however, report higher intensities of monosodium glutamate perception. Taste thresholds have been reported to be raised (bitter and sour), lowered (salt), or unchanged (sweet) in obese adults. Taste perceptual changes (threshold, intensity) in obesity are complex and may be different in obese men and women and in adults and children. Very little is currently known about the relations between savory tastes-salt and umami-and body weight, and these areas merit further study. Answer: Yes, the savory taste of snacks could be a further risk factor for overweight in children. Research has shown that the mean energy density of the foods consumed was significantly higher for obese and overweight children than for normal-weight children, and that the energy density of the snacks, their savory taste, television viewing, and sports activity independently contributed to predicting obesity in children. Even when parents' body mass index was included as a variable, salty foods and sports activity showed an independent association with childhood obesity (PUBMED:18367957). Additionally, heavier children consumed more savory fats at a palatable buffet, suggesting a preference that may contribute to hyperphagia and overweight (PUBMED:24607656). 
Furthermore, food reinforcement, which is related to increased energy intake and prospectively related to weight gain, was found to be stronger for sweet foods than for savory or salty foods; even so, the reinforcing value of savory foods was related to total energy and fat intake, and the relationship between reinforcing value and energy or macronutrient intake did not differ strongly across taste groups (PUBMED:25891040). Therefore, the preference for savory snacks, together with their reinforcing value, could be associated with overweight in children.
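As an illustrative aside, the obesity analysis described in PUBMED:18367957 is a standard multivariable logistic regression. The sketch below fits that kind of model on synthetic data; the variable names, coefficients, and sample size are assumptions for illustration only, not values from the study.

```python
# Minimal sketch of a logistic regression like the one in PUBMED:18367957,
# fit on synthetic data. All names, coefficients, and n are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
energy_density = rng.normal(6.5, 0.5, n)   # snack energy density, kJ/g
savory_servings = rng.poisson(5, n)        # savory servings/week
tv_hours = rng.normal(2.0, 0.8, n)         # TV viewing, hours/day
sports_hours = rng.normal(3.0, 1.0, n)     # sports activity, hours/week

# Synthetic outcome with signs mirroring the abstract's findings: energy
# density, savory servings, and TV raise obesity odds; sports lowers them.
logit = (-6 + 0.8 * energy_density + 0.15 * savory_servings
         + 0.4 * tv_hours - 0.3 * sports_hours)
p = 1 / (1 + np.exp(-logit))
obese = rng.binomial(1, p)

X = sm.add_constant(np.column_stack(
    [energy_density, savory_servings, tv_hours, sports_hours]))
fit = sm.Logit(obese, X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios per unit increase in each predictor
```

Exponentiating the fitted coefficients gives the per-unit odds ratios that studies of this kind report.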
Instruction: Do HMO market level factors lead to racial/ethnic disparities in colorectal cancer screening? Abstracts: abstract_id: PUBMED:16224303 Do HMO market level factors lead to racial/ethnic disparities in colorectal cancer screening? A comparison between high-risk Asian and Pacific Islander Americans and high-risk whites. Background: Few studies have explored health care market structure and colorectal cancer (CRC) screening test use, and little is known about whether market factors contribute to racial/ethnic screening disparities. Objective: We investigated whether HMO market level factors, controlling for individual covariates, differentially impact Asian American and Pacific Islander (AAPI) subjects' access to CRC screening compared with white subjects. Research Design And Methods: We used random intercept hierarchical models to predict CRC test use. Individual-level survey data was linked to market data by metropolitan statistical areas from InterStudy. Subjects: Insured first-degree relatives, ages 40-80, of a random sample of colorectal cancer cases identified from the California Cancer Registry: 515 white subjects and 396 AAPI subjects residing in 36 metropolitan statistical areas (MSAs). Measures: Dependent variables were receipt of (1) annual fecal occult blood test only; (2) sigmoidoscopy in the past 5 years; (3) colonoscopy in the past 10 years; and (4) any of these tests over the recommended time interval. Market characteristics were HMO penetration, HMO competition, and proportion of staff/group/network HMOs. Findings: Market characteristics were as important as individual-level characteristics for AAPI but not for white subjects. Among AAPI subjects, a 10% increase in the percent of group/staff/network model HMO was associated with a reduction in colonoscopy use (28.9% to 20.5%) and in receipt of any of the CRC tests (53.2% to 45.4%). Conclusions: The prevailing organizational structure of a health care market confers a penalty on access to CRC test use among high-risk AAPI subjects but not among high-risk white subjects. Identifying the differential effect of market structure on race/ethnicity can potentially reduce the cancer burden among disadvantaged racial groups. abstract_id: PUBMED:18574089 Determinants of racial/ethnic colorectal cancer screening disparities. Background: The contributions of demographic, socioeconomic, access, language, and nativity factors to racial/ethnic colorectal cancer (CRC) screening disparities are uncertain. Methods: Using linked data from 22,973 respondents to the 2001-2005 Medical Expenditure Panel Survey and the 2000-2004 National Health Interview Survey, we modeled disparities in CRC screening (fecal occult blood testing [FOBT], endoscopy, and combined FOBT and endoscopy) between non-Hispanic whites and Asians, blacks, and Hispanics, sequentially adjusting for demographics, socioeconomic status, clinical and access variables, and race/ethnicity-related variables (language spoken at home and nativity). Results: With demographic adjustment, minorities reported less CRC screening (all measures) than non-Hispanic whites. Disparities were largest for combined screening in Asians (adjusted odds ratio [AOR], 0.40; 95% confidence interval [CI], 0.32-0.49) and Hispanics (AOR, 0.43; 95% CI, 0.39-0.48) and for endoscopic screening in Asians (AOR, 0.41; 95% CI, 0.33-0.50) and Hispanics (AOR, 0.43; 95% CI, 0.38-0.48).
With full adjustment, all Hispanic/non-Hispanic white disparities and black/non-Hispanic white FOBT disparities were eliminated, whereas Asian/non-Hispanic white disparities remained significant (FOBT: AOR, 0.72 [95% CI, 0.52-1.00]; endoscopic screening: AOR, 0.63 [95% CI, 0.49-0.81]; and combined screening: AOR, 0.66 [95% CI, 0.52-0.84]). Conclusions: Determinants of racial/ethnic CRC screening disparities vary among minority groups, suggesting the need for different interventions to mitigate those disparities. Whereas socioeconomic, access, and language barriers seem to drive the CRC screening disparities experienced by blacks and Hispanics, additional factors may exacerbate the disparities experienced by Asians. abstract_id: PUBMED:25016140 Racial/ethnic disparities in human DNA methylation. The racial/ethnic disparities in DNA methylation patterns indicate that molecular markers may play a role in determining the individual susceptibility to diseases in different ethnic groups. Racial disparities in DNA methylation patterns have been identified in prostate cancer, breast cancer and colorectal cancer and are related to racial differences in cancer prognosis and survival. abstract_id: PUBMED:24512861 Understanding current racial/ethnic disparities in colorectal cancer screening in the United States: the contribution of socioeconomic status and access to care. Background: Prior studies have shown racial/ethnic disparities in colorectal cancer (CRC) screening but have not provided a full national picture of disparities across all major racial/ethnic groups. Purpose: To provide a more complete, up-to-date picture of racial/ethnic disparities in CRC screening and contributing socioeconomic and access barriers. Methods: Behavioral Risk Factor Surveillance System data from 2010 were analyzed in 2013. Hispanic/Latino participants were stratified by preferred language (Hispanic-English versus Hispanic-Spanish). Non-Hispanics were categorized as White, Black, Asian, Native Hawaiian/Pacific Islander, or American Indian/Alaska Native. Sequential regression models estimated adjusted relative risks (RRs) and the degree to which SES and access to care explained disparities. Results: Overall, 59.6% reported being up-to-date on CRC screening. Self-reported CRC screening was highest in the White (62.0%) racial/ethnic group; followed by Black (59.0%); Native Hawaiian/Pacific Islander (54.6%); Hispanic-English (52.5%); American Indian/Alaska Native (49.5%); Asian (47.2%); and Hispanic-Spanish (30.6%) groups. Adjustment for SES and access partially explained disparities between Whites and Hispanic-Spanish (final relative risk [RR]=0.76, 95% CI=0.69, 0.83); Hispanic-English (RR=0.94, 95% CI=0.91, 0.98); and American Indian/Alaska Native (RR=0.91, 95% CI=0.85, 0.97) groups. The RR of screening among Asians was unchanged after adjustment for SES and access (0.78, p<0.001). After full adjustment, screening rates were not significantly different among Whites, Blacks, or Native Hawaiian/Pacific Islanders. Conclusions: Large racial/ethnic disparities in CRC screening persist, including substantial differences between English-speaking versus Spanish-speaking Hispanics. Disparities are only partially explained by SES and access to care. Future studies should explore the low rate of screening among Asians and how it varies by racial/ethnic subgroup and language. abstract_id: PUBMED:23678899 Factors explaining racial/ethnic disparities in rates of physician recommendation for colorectal cancer screening.
Objectives: Physician recommendation plays a crucial role in receiving endoscopic screening for colorectal cancer (CRC). This study explored factors associated with racial/ethnic differences in rates of screening recommendation. Methods: Data on 5900 adults eligible for endoscopic screening were obtained from the National Health Interview Survey. Odds ratios of receiving an endoscopy recommendation were calculated for selected variables. Planned, sequenced logistic regressions were conducted to examine the extent to which socioeconomic and health care variables account for racial/ethnic disparities in recommendation rates. Results: Differential rates were observed for CRC screening and screening recommendations among racial/ethnic groups. Compared with Whites, Hispanics were 34% less likely (P < .01) and Blacks were 26% less likely (P < .05) to receive this recommendation. The main predictors that emerged in sequenced analysis were education for Hispanics and Blacks and income for Blacks. After accounting for the effects of usual source of care, insurance coverage, and education, the disparity was reduced and became statistically nonsignificant. Conclusions: Socioeconomic status and access to health care may explain major racial/ethnic disparities in CRC screening recommendation rates. abstract_id: PUBMED:33641251 Racial/ethnic disparities in early-onset colorectal cancer: implications for a racial/ethnic-specific screening strategy. Introduction: Early-onset colorectal cancer (EO-CRC) is a public health concern. Starting screening at 45 years has been considered, but there is discrepancy in the recommendations. Racial disparities in EO-CRC incidence and survival are reported; however, racial/ethnic differences in EO-CRC features that could inform a racial/ethnic-tailored CRC screening strategy have not been reported. We compared features and survival among Non-Hispanic White (NHW), Non-Hispanic Black (NHB), and Hispanics with EO-CRC. Methods: CRC patients from the SEER 1973-2010 database were identified, and EO-CRC was defined as CRC at <50 years. Clinical/pathological features and survival were compared between NHW, NHB, and Hispanics. Cancer-specific survival (CSS) predictors were assessed in a multivariable Cox proportional hazard model. Results: Of 166,416 patients with CRC, 16,545 (9.9%) had EO-CRC. The EO-CRC frequencies in NHB and Hispanics were higher than in NHW (12.7% vs. 16.5% vs. 8.7%, p < 0.001). EO-CRC in NHB presents more frequently in females, with well/moderately differentiated, stage IV disease, and is less likely to present in locations targetable by sigmoidoscopy than in NHW (54.6% vs. 67.7%; OR: 1.7; p < 0.001). 5-year CSS was lower in NHB (59.4% vs. 72.8%, HR: 1.7; 95% CI: 1.54-1.82) and Hispanics (66.4% vs. 72.8%, HR: 1.3; 95% CI: 1.16-1.39) than in NHW. A regression model among patients with EO-CRC showed that being NHB or Hispanic were independent predictors for cancer-specific mortality, after adjusting for gender, grade, stage, and surgery. Conclusion: EO-CRC is more likely in NHB and Hispanics. Racial disparities in clinical/pathological features and CSS between NHB and NHW/Hispanics were evidenced. A racial/ethnic-specific screening strategy could be considered as an alternative for patients younger than 50 years. abstract_id: PUBMED:30680047 Colorectal Cancer Survival Disparities among Puerto Rican Hispanics: A Comparison to Racial/Ethnic Groups in the United States. Purpose: Ethnic/racial disparities in colorectal cancer (CRC) survival have been well documented.
However, there is limited information regarding CRC survival among Hispanic subgroups. This study reports the 5-year relative survival of Puerto Rican Hispanic (PRH) CRC patients and the relative risk of death compared to other racial/ethnic groups in the US. Methods: CRC incidence data from subjects ≥50 years was obtained from the Puerto Rico Central Cancer Registry and the Surveillance, Epidemiology and End Results (SEER) database from January 1, 2001 to December 31, 2003. Relative survival rates were calculated using the life tables from the population of PR and SEER. A Poisson regression model was used to assess relative risk of death by stage, sex, and age. Results: A total of 76,444 subjects with incident CRC were analyzed (non-Hispanic White (NHW) n=59,686; non-Hispanic black (NHB) n=7,700; US Hispanics (USH) n=5,699; PRH n=3,359). Overall and stage-specific five-year survival rates differed by race/ethnicity. When comparing PRH to the other racial/ethnic groups, PRH had the lowest survival rates in regional cancers and were the only racial/ethnic group where a marked 5-year survival advantage was observed among females (66.0%) compared to males (60.3%). A comparable and significantly higher relative risk of death of CRC was observed for PRH and NHB compared to NHW. Conclusions: Our findings establish baseline CRC survival data for PRH living in Puerto Rico. The gender and racial/ethnic disparities observed in PRH compared to US mainland racial/ethnic groups warrant further investigation of the risk factors affecting this Hispanic subgroup. abstract_id: PUBMED:27050413 Racial/Ethnic Disparities in Colorectal Cancer Screening Across Healthcare Systems. Introduction: Racial/ethnic disparities in colorectal cancer (CRC) screening and diagnostic testing present challenges to CRC prevention programs. Thus, it is important to understand how differences in CRC screening approaches between healthcare systems are associated with racial/ethnic disparities. Methods: This was a retrospective cohort study of patients aged 50-75 years who were members of the Population-based Research Optimizing Screening Through Personalized Regimens cohort from 2010 to 2012. Data on race/ethnicity, CRC screening, and diagnostic testing came from medical records. Data collection occurred in 2014 and analysis in 2015. Logistic regression models were used to calculate AORs and 95% CIs comparing completion of CRC screening between racial/ethnic groups. Analyses were stratified by healthcare system to assess differences between systems. Results: There were 1,746,714 participants across four healthcare systems. Compared with non-Hispanic whites (whites), odds of completing CRC screening were lower for non-Hispanic blacks (blacks) in healthcare systems with high screening rates (AOR=0.86, 95% CI=0.84, 0.88) but similar between blacks and whites in systems with lower screening rates (AOR=1.01, 95% CI=0.93, 1.09). Compared with whites, American Indian/Alaskan Natives had lower odds of completing CRC screening across all healthcare systems (AOR=0.76, 95% CI=0.72, 0.81). Hispanics had similar odds of CRC screening (AOR=0.99, 95% CI=0.98, 1.00) and Asian/Pacific Islanders had higher odds of CRC screening (AOR=1.16, 95% CI=1.15, 1.18) versus whites. Conclusions: Racial/ethnic differences in CRC screening vary across healthcare systems, particularly for blacks, and may be more pronounced in systems with intensive CRC screening approaches. 
abstract_id: PUBMED:23213159 Reducing racial and ethnic disparities in colorectal cancer screening is likely to require more than access to care. Colorectal endoscopy, an effective screening intervention for colorectal cancer, is recommended for people age fifty or older, or earlier for those at higher risk. Rates of colorectal endoscopy are still far below those recommended by the US Preventive Services Task Force. This study examined whether factors such as the supply of gastroenterologists and the proportion of the local population without health insurance coverage were related to the likelihood of having the procedure, and whether these factors explained racial and ethnic differences in colorectal endoscopy. We found evidence that improving access to health care at the county and individual levels through expanded health insurance coverage could improve colorectal endoscopy use but might not be sufficient to reduce racial and ethnic disparities in colorectal cancer screening. Policy action to address these disparities will need to consider other structural and cultural factors that may be inhibiting colorectal cancer screening. abstract_id: PUBMED:33898727 An examination of socioeconomic and racial/ethnic disparities in the awareness, knowledge and utilization of three colorectal cancer screening modalities. While colorectal cancer (CRC) mortality rates have been decreasing, disparities by socioeconomic status (SES) and race/ethnicity persist. CRC screening rates remain suboptimal among low SES and racial/ethnic minority populations, despite the availability of multiple screening modalities. Understanding awareness, knowledge, and utilization of common screening modalities within different racial/ethnic and SES groups is critical to inform efforts to improve population screening uptake and reduce disparities in CRC-related health outcomes. Through the theoretical lenses of diffusion of innovation and fundamental cause theory, we examined the associations of race/ethnicity and SES with awareness, knowledge, and utilization of three guideline recommended CRC screening strategies among individuals at average risk for CRC. Data were obtained from a survey of a nationally representative panel of US adults conducted in November 2019. The survey was completed by 31.3% of invited panelists (1595 of 5097). Analyses were focused on individuals at average risk for CRC, aged 45-75 for awareness and knowledge outcomes (n = 1062) and aged 50-75 for utilization outcomes (n = 858). Analyses revealed racial/ethnic and SES disparities among the three CRC screening modalities, with more racial/ethnic and SES differences observed in the awareness, knowledge, and utilization of screening colonoscopy and mt-sDNA than FIT/gFOBT. Patterns of disparities are consistent with previous research showing that inequities in social and economic resources are associated with an imbalanced adoption of medical innovations. Our findings demonstrate a need to increase awareness, knowledge, and access of various CRC screening modalities in specific populations defined by race/ethnicity or SES indicators. Efforts to increase CRC screening should be tailored to the needs and social-cultural context of populations. Interventions addressing inequalities in social and economic resources are also needed to achieve more equitable adoption of CRC screening modalities and reduce disparities in CRC-related health outcomes. 
Answer: Yes, HMO market level factors can lead to racial/ethnic disparities in colorectal cancer (CRC) screening. A study investigating the impact of HMO market level factors on access to CRC screening found that market characteristics were as important as individual-level characteristics for Asian American and Pacific Islander (AAPI) subjects but not for white subjects. Specifically, an increase in the percent of group/staff/network model HMOs was associated with a reduction in colonoscopy use and receipt of any CRC tests among AAPI subjects, suggesting that the organizational structure of a healthcare market can affect access to CRC screening differently for high-risk AAPI compared to high-risk white subjects (PUBMED:16224303). Additionally, other studies have indicated that racial/ethnic disparities in CRC screening are influenced by a variety of factors, including socioeconomic status, access to care, language barriers, and nativity, which vary among different minority groups (PUBMED:18574089; PUBMED:24512861; PUBMED:23678899). For instance, socioeconomic and access to health care factors were found to explain major racial/ethnic disparities in CRC screening recommendation rates (PUBMED:23678899). Moreover, racial/ethnic differences in CRC screening vary across healthcare systems, with disparities being more pronounced in systems with intensive CRC screening approaches (PUBMED:27050413). Therefore, while HMO market level factors do contribute to racial/ethnic disparities in CRC screening, they are part of a broader set of determinants that also include socioeconomic, healthcare access, and cultural factors. Addressing these disparities will likely require interventions that go beyond simply improving access to care and consider the structural and cultural factors that inhibit CRC screening among different racial/ethnic groups (PUBMED:23213159; PUBMED:33898727).
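The abstracts above report adjusted odds ratios with 95% confidence intervals throughout. As a quick reference, a crude (unadjusted) odds ratio and its Wald interval can be computed from a 2x2 table as sketched below; the counts are hypothetical placeholders, not data from any cited study.

```python
# Arithmetic behind an odds ratio and its 95% Wald confidence interval.
# The 2x2 counts in the example call are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=screened in group 1, b=unscreened in group 1,
    c=screened in group 2, d=unscreened in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(120, 180, 150, 150))  # e.g., OR ~0.67 with its 95% CI
```

Adjusted odds ratios like those quoted in the abstracts come from multivariable logistic models rather than a single 2x2 table, but the exponentiation and interval construction follow the same logic.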
Instruction: Does ICA occlusion frequently have intracerebral hemorrhage after IV tissue plasminogen activator therapy for ischemic stroke? Abstracts: abstract_id: PUBMED:21464569 Does ICA occlusion frequently have intracerebral hemorrhage after IV tissue plasminogen activator therapy for ischemic stroke? Background/aims: The main predictors of intracerebral hemorrhage (ICH) are clinical stroke severity and large ischemic lesions. Therefore, ICA occlusion as severe stroke is thought to frequently have ICH after tissue plasminogen activator (t-PA) therapy. The aim of this study was to investigate whether ICA occlusion more frequently had ICH after t-PA therapy compared with other occluded arteries. Subjects And Methods: We prospectively studied consecutive stroke patients treated with t-PA within 3 h of onset. We investigated the frequency of ICH after t-PA therapy for each occluded artery. Results: 165 patients were enrolled. Initial MRA demonstrated ICA occlusion in 38 patients, M1 in 48, M2 in 28, and BA and PCA in 12. At 24 h after t-PA infusion, 113 (68.5%) patients (non-HT group) did not have hemorrhagic transformation, 37 (22.4%; HI group) had hemorrhagic cerebral infarction and 15 (9.1%; ICH group) had ICH. The ICH group most frequently had M2 occlusion, NIHSS ≥15, and ≥1/3 of the MCA territory among the three groups. The frequency of ICH was 2.6% in no occlusion, 10.5% in ICA occlusion, 6.3% in M1, 21.4% in M2, and 8.3% in PCA and BA (p = 0.1016). Conclusion: Patients with ICA occlusion did not have ICH more frequently after t-PA therapy in comparison to other occluded arteries. abstract_id: PUBMED:30924762 The randomized study of endovascular therapy with versus without intravenous tissue plasminogen activator in acute stroke with ICA and M1 occlusion (SKIP study). Rationale: Bridging therapy with endovascular therapy (EVT) and intravenous thrombolysis (IVT) has been reported to improve outcomes for acute stroke patients with large-vessel occlusion in the anterior circulation. While the IVT may increase the reperfusion rate, the risk of hemorrhagic complications increases. Whether EVT without IVT (direct EVT) is equally effective as bridging therapy in acute stroke remains unclear. Aim: This randomized study of endovascular therapy with versus without intravenous tissue plasminogen activator for acute stroke with ICA and M1 occlusion aims to clarify the efficacy and safety of direct EVT compared with bridging therapy. Methods And Design: This is an investigator-initiated, multicenter, prospective, randomized, open-treatment, blinded-endpoint clinical trial. The target patient number is 200, comprising 100 patients receiving direct EVT and 100 receiving bridging therapy. Study Outcome: The primary efficacy endpoint is a modified Rankin Scale score of 0-2 at 90 days. Safety outcome measures are any intracranial hemorrhage at 24 h. Discussion: This trial may help determine whether direct EVT should be recommended as a routine clinical strategy for ischemic stroke patients within 4.5 h from onset. Direct EVT would then become the choice of therapy in stroke centers with endovascular facilities. Trial Registration: UMIN000021488. abstract_id: PUBMED:22811456 Intravenous thrombolysis and endovascular therapy for acute ischemic stroke with internal carotid artery occlusion: a systematic review of clinical outcomes. Background And Purpose: Strokes secondary to acute internal carotid artery (ICA) occlusion are associated with extremely poor prognosis. 
The best treatment approach to acute stroke in this setting is unknown. We sought to determine clinical outcomes in patients with acute ischemic stroke attributable to ICA occlusion treated with intravenous (IV) systemic thrombolysis or intra-arterial endovascular therapy. Methods: Using the PubMed database, we searched for studies that included patients with acute ischemic stroke attributable to ICA occlusion who received treatment with IV thrombolysis or intra-arterial endovascular interventions. Studies providing data on functional outcomes beyond 30 days and mortality and symptomatic intracerebral hemorrhage (sICH) rates were included in our analysis. We compared the proportions of patients with favorable functional outcomes, sICH, and mortality rates in the 2 treatment groups by calculating χ² and confidence intervals for odds ratios. Results: We identified 28 studies with 385 patients in the IV thrombolysis group and 584 in the endovascular group. Rates of favorable outcomes and sICH were significantly higher in the endovascular group than the IV thrombolysis-only group (33.6% vs 24.9%, P=0.004 and 11.1% vs 4.9%, P=0.001, respectively). No significant difference in mortality rate was found between the groups (27.3% in the IV thrombolysis group vs 32.0% in the endovascular group; P=0.12). Conclusions: According to our systematic review, endovascular treatment of acute ICA occlusion results in improved clinical outcomes. A higher rate of sICH after endovascular treatment does not result in an increased overall mortality rate. abstract_id: PUBMED:37258246 A Case of Internal Carotid Artery Occlusion Immediately After Intravenous Recombinant Tissue Plasminogen Activator Treatment for Contralateral Middle Cerebral Artery Occlusion. Early recurrent ischemic stroke (ERIS), as well as symptomatic intracranial hemorrhage (SICH) and progressive stroke (PS), causes early neurological deterioration. Here we report a case of a patient with right internal carotid artery (ICA) occlusion immediately after intravenous recombinant tissue plasminogen activator (rt-PA) treatment for left middle cerebral artery (MCA) occlusion. A 79-year-old woman with drowsiness, aphasia and right hemiparesis was brought to our hospital. MRI showed acute infarction in the left internal capsule and occlusion of the left middle cerebral artery. rt-PA was administered intravenously to the patient 2 hours after the onset of the event. Her consciousness disturbance and aphasia improved, but the right hemiparesis did not. We performed emergent endovascular thrombectomy, but the right ICA (cervical portion) was occluded during the surgery. Finally, the endovascular thrombectomy achieved recanalization of the left MCA and right ICA. When performing intravenous thrombolysis, we should be aware of the possibility of re-occlusion and prepare for interventional treatment. abstract_id: PUBMED:27838178 Predictors of Symptomatic Intracranial Hemorrhage after Endovascular Therapy in Acute Ischemic Stroke with Large Vessel Occlusion. Background: The symptomatic intracranial hemorrhage (SICH) is a serious complication of endovascular therapy (EVT) in acute ischemic stroke (AIS) with large vessel occlusion. We aimed to clarify the predictors of SICH after EVT in patients with internal carotid artery (ICA) or proximal M1 segment of middle cerebral artery occlusions.
Methods: Among 1442 AIS patients with large vessel occlusion admitted within 24 hours after onset between July 2010 and June 2011, 226 patients with ICA or proximal M1 occlusions were treated with EVT. SICH was defined as any type of intracranial hemorrhage with a decline in the National Institutes of Health Stroke Scale (NIHSS) score ≥4. Results: Of the 226 patients, 204 with sufficient data were analyzed. SICH was observed in 10 patients (4.9%). Baseline NIHSS score (22 versus 17), serum glucose level (206 mg/dL versus 140 mg/dL), and prior antiplatelet therapy (60.0% versus 21.7%) were significantly higher in patients with SICH than in those without (all P < .01). With receiver operating characteristic analyses, the optimal cutoff values for predicting SICH were NIHSS score ≥19 and serum glucose ≥160 mg/dL. In multivariate analysis, glucose level ≥160 mg/dL (odds ratio: 11.89; 95% confidence interval [CI]: 2.79-65.08), prior antiplatelet therapy (odds ratio: 8.03; 95% CI: 1.83-41.70), and NIHSS score ≥19 (odds ratio: 7.78; 95% CI: 1.63-59.44) were independent predictors of SICH. Conclusion: Hyperglycemia, prior antiplatelet therapy, and high baseline NIHSS score were associated with SICH after EVT in AIS patients with ICA or proximal M1 occlusions. abstract_id: PUBMED:30091271 Outcomes of endovascular thrombectomy with and without bridging thrombolysis for acute large vessel occlusion ischaemic stroke. Background: Endovascular thrombectomy (EVT) for management of large vessel occlusion (LVO) acute ischaemic stroke is now current best practice. Aim: To determine if bridging intravenous (i.v.) alteplase therapy confers any clinical benefit. Methods: A retrospective study of patients treated with EVT for LVO was performed. Outcomes were compared between patients receiving thrombolysis plus EVT and those receiving EVT alone. Primary end-points were reperfusion rate, 90-day functional outcome and mortality using the modified Rankin Scale (mRS) and symptomatic intracranial haemorrhage (sICH). Results: A total of 355 patients who underwent EVT was included: 210 with thrombolysis (59%) and 145 without (41%). The reperfusion rate was higher in the group receiving i.v. tissue plasminogen activator (tPA) (unadjusted odds ratio (OR) 2.2, 95% confidence interval (CI): 1.29-3.73, P = 0.004), although this effect was attenuated when all variables were considered (adjusted OR (AOR) 1.22, 95% CI: 0.60-2.5, P = 0.580). The percentage achieving functional independence (mRS 0-2) at 90 days was higher in patients who received bridging i.v. tPA (AOR 2.17, 95% CI: 1.06-4.44, P = 0.033). There was no significant difference in major complications, including sICH (AOR 1.4, 95% CI: 0.51-3.83, P = 0.512). There was lower 90-day mortality in the bridging i.v. tPA group (AOR 0.79, 95% CI: 0.36-1.74, P = 0.551). Fewer thrombectomy passes (2 versus 3, P = 0.012) were required to achieve successful reperfusion in the i.v. tPA group. Successful reperfusion (modified thrombolysis in cerebral infarction ≥2b) was the strongest predictor for 90-day functional independence (AOR 10.4, 95% CI: 3.6-29.7, P < 0.001). Conclusion: Our study supports the current practice of administering i.v. alteplase before endovascular therapy. abstract_id: PUBMED:27344361 Intravenous Thrombolysis for Acute Ischemic Stroke due to Cervical Internal Carotid Artery Occlusion. Background: Internal carotid artery (ICA) occlusions are poorly responsive to intravenous thrombolysis with tissue plasminogen activator (IV-tPA) in acute ischemic stroke (AIS).
Most study populations have combined intracranial and extracranial ICA occlusions for analysis; few have studied purely cervical ICA occlusions. We evaluated AIS patients with acute cervical ICA occlusion treated with IV-tPA to identify predictors of outcomes. Methods: We studied 550 consecutive patients with AIS who received IV-tPA and identified 100 with pure acute cervical ICA occlusion. We evaluated the associations of vascular risk factors, National Institutes of Health Stroke Scale (NIHSS) score, and leptomeningeal collateral vessel status via 3 different grading systems, with functional recovery at 90 days, mortality, recanalization of the primary occlusion, and symptomatic intracranial hemorrhage (SICH). Modified Rankin Scale score 0-1 was defined as an excellent outcome. Results: The 100 patients had mean age of 67.8 (range 32-96) and median NIHSS score of 19 (range 4-33). Excellent outcomes were observed in 27% of the patients, SICH in 8%, and mortality in 21%. Up to 54% of the patients achieved recanalization at 24 hours. On ordinal regression, good collaterals showed a significant shift in favorable outcomes by Maas, Tan, or ASPECTS collateral grading systems. On multivariate analysis, good collaterals also showed reduced mortality (OR .721, 95% CI .588-.888, P = .002) and a trend to less SICH (OR .81, 95% CI .65-1.007, P = .058). Interestingly, faster treatment was also associated with favorable functional recovery (OR 1.028 per minute, 95% CI 1.010-1.047, P = .001). Conclusions: Improved outcomes are seen in patients with early acute cervical ICA occlusion and better collateral circulation. This could be a valuable biomarker for decision making. abstract_id: PUBMED:28424044 Emergent Carotid Thromboendarterectomy for Acute Symptomatic Occlusion of the Extracranial Internal Carotid Artery. Background: Strokes secondary to acute internal carotid artery (ICA) occlusion are associated with an extremely poor prognosis. The best treatment approach in this setting is still unknown. The aim of our study was to evaluate the efficacy, safety, and outcomes of emergent surgical revascularization of acute extracranial ICA occlusion in patients with minor to severe ischemic stroke. Methods: A retrospective analysis was performed using prospectively collected data of consecutive patients who underwent carotid thromboendarterectomy for symptomatic acute ICA occlusion during the period from January 2013 to December 2015. Primary outcomes were disability at 90 days assessed by the modified Rankin Scale (mRS) and neurological deficit at discharge assessed using the National Institute of Health Stroke Scale (NIHSS). Secondary outcomes were the recanalization rate, 30-day overall mortality, and any intracerebral bleeding. Results: During the study period, a total of 6 patients (5 men and 1 woman) with a median age of 64 years (range: 58-84 years) underwent emergent reconstruction for acute symptomatic ICA occlusion within a median of 5.4 hours (range: 2.9-12.0 hours) after symptoms onset. The median presenting NIHSS score was 10.5 points (range: 4-21). Before surgery, 4 patients (66.7%) had been treated by systemic recombinant tissue plasminogen activator lysis. The median time interval between initiation of intravenous thrombolysis and carotid thromboendarterectomy was 117.5 minutes (range: 65-140 minutes). Patency of the ICA was achieved in all patients. On discharge, the median NIHSS score was 2 points (range: 0-11 points). 
There was no postoperative intracerebral hemorrhage and zero 30-day mortality rate. At 3 months, 5 patients (83.3%) had a good clinical outcome (mRS ≤ 2). Conclusion: Patients presenting with minor to severe ischemic stroke syndromes due to isolated extracranial ICA occlusion may benefit from emergent carotid revascularization. Thorough preoperative neuroimaging is essential to aid in selecting eligible candidates for acute surgical intervention. abstract_id: PUBMED:22721823 Stroke outcomes of Japanese patients with major cerebral artery occlusion in the post-alteplase, pre-MERCI era. This study examined outcomes of patients with acute ischemic stroke (AIS) with major cerebral artery occlusion after the approval of intravenous recombinant tissue-type plasminogen activator (IV rt-PA) but before approval of the MERCI retriever. We retrospectively enrolled 1170 consecutive patients with AIS and major cerebral artery occlusion (496 women; mean age, 73.9 ± 12.3 years) who were admitted within 24 hours after the onset of symptoms to 12 Japanese stroke centers between October 2005 and June 2009. Cardioembolism was a leading cause of AIS in this group (68.2%). The occlusion sites of the major cerebral arteries included the common carotid artery and internal carotid artery (ICA; 29.6%), middle cerebral artery (52.2%), and basilar artery (7.6%). Recanalization therapy (RT) was performed in 32.0% of patients (IV rt-PA, 20.0%; neuroendovascular therapy, 9.4%; combined, 2.5%). Symptomatic intracerebral hemorrhage within 36 hours with a ≥ 1-point increase in the National Institutes of Health Stroke Scale score occurred in 5.3% of the patients. At 3 months (or at hospital discharge), 29.3% of the patients had a favorable outcome (based on a modified Rankin scale score of 0-2), 23.8% were bedridden, and 15.6% died. After multivariate adjustment, RT was positively associated with a favorable outcome and negatively associated with death, whereas age, baseline National Institutes of Health Stroke Scale score, and ICA occlusion were negatively associated with a favorable outcome and positively associated with death. One-third of the patients with AIS and major cerebral artery occlusion were treated with RT, which was independently associated with favorable outcomes and death. However, 40% of the patients became bedridden or died during the post-alteplase, pre-MERCI era in Japan. abstract_id: PUBMED:15625300 Aggressive mechanical clot disruption: a safe adjunct to thrombolytic therapy in acute stroke? Background And Purpose: This study evaluated the safety and efficacy of aggressive mechanical clot disruption (AMCD) in acute stroke patients with persisting middle cerebral artery (MCA) or internal carotid artery (ICA) occlusion after thrombolytic therapy. Methods: Retrospective case series were used from a prospectively collected stroke database on consecutive acute ischemic stroke patients treated with intra-arterial (IA) thrombolytics and mechanical clot disruption during a 5-year interval. Thrombolytic dosage, endovascular techniques, immediate and final recanalization rates, symptomatic hemorrhage, mortality, and outcome were determined. Results: Thirty-two patients received AMCD. Median baseline National Institutes of Health Stroke Scale (NIHSS) score was 18, and median time to initiation of IA treatment was 261 minutes from symptom onset. 
ICA occlusion was noted in 16 patients and MCA occlusion in 16 patients: 22 received combined IV/IA thrombolytics, 3 received IV thrombolytics, 6 received IA thrombolytics, and 1 patient received no thrombolytics before AMCD. No immediate periprocedural complications were noted. Immediate recanalization was achieved in 38% (50% MCA, 25% ICA) and final recanalization in 75% (88% MCA, 63% ICA) of patients. Favorable outcome occurred in 19 (59%) patients, symptomatic cerebral hemorrhage in 3 (9.4%) patients, and mortality in 4 (12.5%) patients. Conclusions: AMCD can be performed safely with comparable intracerebral hemorrhage and mortality rates to other IA therapies even after use of intravenous thrombolytics in selected patients. Early deployment of this technique leads to immediate recanalization in one third of patients. AMCD may potentially shorten the time to flow restoration and improve overall recanalization rates achieved with IA therapy. Answer: Intracerebral hemorrhage (ICH) is a known complication following intravenous tissue plasminogen activator (IV t-PA) therapy for ischemic stroke. However, the frequency of ICH after IV t-PA therapy in patients with internal carotid artery (ICA) occlusion does not appear to be higher than with other occluded arteries. A study investigating the frequency of ICH after t-PA therapy for each occluded artery found that patients with ICA occlusion did not have ICH more frequently after t-PA therapy in comparison to other occluded arteries (PUBMED:21464569). Moreover, a systematic review of clinical outcomes in patients with acute ischemic stroke attributable to ICA occlusion treated with IV thrombolysis or intra-arterial endovascular therapy found that while the rates of symptomatic intracerebral hemorrhage (sICH) were significantly higher in the endovascular group than in the IV thrombolysis-only group, no significant difference in mortality rate was found between the groups (PUBMED:22811456). This suggests that while endovascular treatment may lead to a higher rate of sICH, it does not necessarily result in increased overall mortality, and that the frequency of ICH after IV t-PA therapy for ICA occlusion is, if anything, lower than after endovascular treatment. In summary, based on the available evidence, patients with ICA occlusion do not have a clearly higher incidence of intracerebral hemorrhage after IV tissue plasminogen activator therapy for ischemic stroke than patients with occlusions of other arteries.
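As a worked check on the proportions quoted from PUBMED:22811456 (sICH in 11.1% of 584 endovascular patients vs 4.9% of 385 IV-thrombolysis patients, P=0.001), the χ² comparison can be reproduced approximately; the counts below are back-calculated from the reported percentages, so this is a reconstruction rather than the study's raw data.

```python
# Approximate reproduction of the sICH chi-square comparison in PUBMED:22811456.
# Counts are back-calculated from the reported percentages (a reconstruction).
from scipy.stats import chi2_contingency

endo_sich = round(0.111 * 584)  # ~65 of 584 endovascular patients
ivt_sich = round(0.049 * 385)   # ~19 of 385 IV-thrombolysis patients
table = [[endo_sich, 584 - endo_sich],
         [ivt_sich, 385 - ivt_sich]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p comes out near the reported 0.001
```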
Instruction: Can permanent sinus arrhythmia in conscious dogs be suppressed with anesthesia? Abstracts: abstract_id: PUBMED:8705324 Can permanent sinus arrhythmia in conscious dogs be suppressed with anesthesia? Objective: Large prospective studies in dogs with healed myocardial infarction (MI) recently show a positive relation between heart rate variability (HRV) and sudden cardiac death. Methods: We performed similar experiments, studying HRV, ECG and body surface potential maps (BSPM) in 26 normal mongrel dogs (10-15 kg) and 12 dogs with an experimental MI (ligation of the LAD). A two-channel, 8-hour ECG recording was performed in all conscious dogs. A second recording was done under pentobarbital anaesthesia (30 mg/kg). Results: We found sinus arrhythmia (SA) in all 26 conscious dogs. The anaesthesia suppressed the sinus arrhythmia and HRV via compensatory tachycardia and alterations in baroreflexes. The suppression of arrhythmia was also present in dogs with myocardial infarction. Conclusions: It is suggested that HRV and SA in dogs depend on the conscious state and anaesthesia. On the basis of our results, we anticipate that the current status of sympathetic nerve activity is the most important determinant of HRV. We suggest that our results are an important finding for experimental arrhythmology. (Fig. 4, Ref. 24.) abstract_id: PUBMED:11299239 Augmentation of respiratory sinus arrhythmia in response to progressive hypercapnia in conscious dogs. Respiratory sinus arrhythmia (RSA) may serve to enhance pulmonary gas exchange efficiency by matching pulmonary blood flow with lung volume within each respiratory cycle. We examined the hypothesis that RSA is augmented as an active physiological response to hypercapnia. We measured electrocardiograms and arterial blood pressure during progressive hypercapnia in conscious dogs that were prepared with a permanent tracheostomy and an implanted blood pressure telemetry unit. The intensity of RSA was assessed continuously as the amplitude of respiratory fluctuation of heart rate using complex demodulation. In a total of 39 runs of hypercapnia in 3 dogs, RSA increased by 38 and 43% of the control level when minute ventilation reached 10 and 15 l/min, respectively (P < 0.0001 for both), and heart rate and mean arterial pressure showed no significant change. The increases in RSA were significant even after adjustment for the effects of increased tidal volume, respiratory rate, and respiratory fluctuation of arterial blood pressure (P < 0.001). These observations indicate that increased RSA during hypercapnia is not the consequence of altered autonomic balance or respiratory patterns and support the hypothesis that RSA is augmented as an active physiological response to hypercapnia. abstract_id: PUBMED:11045971 Impact of acute hypoxia on heart rate and blood pressure variability in conscious dogs. To examine whether the impacts of hypoxia on autonomic regulations involve the phasic modulations as well as tonic controls of cardiovascular variables, heart rate, blood pressure, and their variability during isocapnic progressive hypoxia were analyzed in trained conscious dogs prepared with a permanent tracheostomy and an implanted blood pressure telemetry unit. Data were obtained at baseline and when minute ventilation (VI) first reached 10 (VI10), 15 (VI15), and 20 (VI20) l/min during hypoxia.
Time-dependent changes in the amplitudes of the high-frequency component of the R-R interval (RRIHF) and the low-frequency component of mean arterial pressure (MAPLF) were analyzed by complex demodulation. In a total of 47 progressive hypoxic runs in three dogs, RRIHF decreased at VI15 and VI20 and MAPLF increased at VI10 and VI15 but not at VI20, whereas heart rate and arterial pressure increased progressively with advancing hypoxia. We conclude that the autonomic responses to isocapnic progressive hypoxia involve tonic controls and phasic modulations of cardiovascular variables; the latter may be characterized by a progressive reduction in respiratory vagal modulation of heart rate and a transient augmentation in low-frequency sympathetic modulation of blood pressure. abstract_id: PUBMED:1114999 Potassium canrenoate in the treatment of long-term digoxin-induced arrhythmias in conscious dogs. The effects of potassium canrenoate on arrhythmias induced by long-term progressive digoxin toxicity were studied in eight conscious beagle dogs. Sinus bradycardia and sinoatrial block, as well as atrioventricular (A-V) conduction disturbances, were consistently alleviated by administration of potassium canrenoate. Premature supraventricular (including junctional) and ventricular depolarizations as well as ventricular tachycardias were also suppressed. Although potassium canrenoate always terminated the digitalis-induced arrhythmias, it usually converted the rhythm to sinus arrhythmia rather than to normal sinus rhythm. Equimolar sodium canrenoate, but not potassium chloride, had similar reversal effects on arrhythmias induced by long-term digoxin intoxication. These data indicate that canrenoate, a diuretic agent with reported positive inotropic effects, may be useful in the treatment of digitalis-induced arrhythmias in man. abstract_id: PUBMED:1621830 Effect of general anesthesia on cardiac vagal tone. Although it is accepted that general anesthetics alter cardiac vagal tone, the magnitude of this effect was difficult to quantify by noninvasive methods until recently. Twenty-eight mongrel dogs were anesthetized using one of four representative anesthetics: pentobarbital sodium (25 mg/kg iv); morphine (1 mg/kg sc) with alpha-chloralose (50 mg/kg iv) and urethan (500 mg/kg iv); thiopental sodium (25 mg/kg iv); and halothane (2% inhalation). Heart period (R-R interval) was recorded, from which the amplitude of the respiratory sinus arrhythmias (RSAs; frequency 0.24-1.04 Hz) was determined by time-series analysis. Morphine with alpha-chloralose-urethan significantly (P < 0.01) increased RSA (control 8.3 ± 0.6 ln ms², anesthesia 9.4 ± 0.3 ln ms²), whereas thiopental and halothane both significantly (P < 0.01) decreased RSA (control 8.7 ± 0.4 ln ms², thiopental 1.3 ± 0.4 ln ms²; and control 8.5 ± 0.6 ln ms², halothane 3.6 ± 0.8 ln ms²). Pentobarbital failed to elicit a consistent change in RSA. These data suggest that vagal tone was maintained during morphine with alpha-chloralose-urethan anesthesia but was reduced during thiopental and halothane anesthesia. abstract_id: PUBMED:7418135 Electrical activity from the sinus node region in conscious dogs. In an attempt to understand the way automatic cells in the sinus node (SN) control the cardiac rhythm, we studied extracellular electrograms recorded from the SN region in conscious dogs.
An SN electrode, containing 48 silver terminals arranged 1.5 mm apart, was implanted over the node, and an indifferent electrode was implanted on the superior vena cava. Through terminals of the SN electrode paired with the vena caval electrode, "unipolar" electrograms were recorded at 100 microV/cm and with a time constant of 0.1 second. Low-amplitude, low-frequency deflections (dV/dt ≤ 20 mV/sec) which resulted from electrical activity of the node could be differentiated from the more rapid deflections due to atrial electrical activity. Electrical activity due to the inherent automaticity of what appeared to be groups of automatic cells was recognized as a slow negative-going diastolic slope followed by a slow negative-going, or negative and then positive-going, SN potential. Impulse propagation toward the SN electrode terminal in groups of automatic cells appeared as a slow positive-going deflection interrupting the diastolic slope. Adjacent groups of automatic cells located near the sites of earliest atrial activation discharged asynchronously before the earliest atrial activity; this suggests that multiple groups of automatic cells might initiate atrial activation. In addition to changes in rate and in location of the pacemaking groups of automatic cells, significant beat-to-beat variation in the sinoatrial interval contributed to the changes in atrial rate in "sinus arrhythmia." These studies provide a better understanding of SN function in conscious animals. abstract_id: PUBMED:18420278 Assessing QT prolongation in conscious dogs: validation of a beat-to-beat method. A model of sling-trained, conscious mongrel dogs instrumented with telemetric arterial pressure transmitters and ECG leads was validated for assessment of the QT-RR interval relationship at clinically used free and total plasma concentrations of positive and negative standards with known outcomes. The beat-to-beat technique for assessing the dynamic boundaries of the individual cardiac cycles was compared to the same data with typically used averaging techniques and corrections applied. Positive standards E-4031, cisapride, terodiline, and terfenadine showed increased sensitivity toward detection at clinically relevant levels when an outlier analysis of beats beyond the normal autonomic boundary is applied. Since methods to correct the QT interval for heart rate are often confounded by changes in autonomic state, changes with reflex tachycardia induced by vasodilatation after nitroprusside and reflex bradycardia induced by sudden vasoconstriction with phenylephrine were shown to be differentiated from direct effects on repolarization with E-4031. These changes were also demonstrated to be identical to effects observed in humans after standing or challenged with a similar dose of phenylephrine. The conscious dog is also a sensitive model for studying the arrhythmia liability induced by beat-to-beat changes in cardiac ECG restitution (the relationship between QT and TQ intervals) and hysteresis. However, some caveats based on observations may need to be considered due to inherent differences in QT intervals and sinus arrhythmia between canines and humans.
abstract_id: PUBMED:11502051 Differential effects of hypoxia and hypercapnia on respiratory sinus arrhythmia in conscious dogs. To test the hypothesis that hypoxia and hypercapnia have different effects on the genesis of respiratory sinus arrhythmia (RSA), the magnitude of RSA to these stimuli was compared in 3 unanesthetized dogs. Respiration was continuously monitored through a permanent tracheostomy, and the electrocardiogram and blood pressure were also monitored. The magnitude of RSA was assessed as an instantaneous amplitude of the R-R interval oscillation in the high-frequency band of 0.15-0.80 Hz by means of complex demodulation. In a total of 47 runs with hypoxia, heart rate, mean arterial pressure, respiratory rate and tidal volume increased, but RSA magnitude decreased even after adjusting for the effects of respiratory rate and tidal volume. In a total of 39 runs with hypercapnia, heart rate and mean arterial pressure did not change, despite the increased respiratory rate and tidal volume. In contrast to hypoxia, RSA magnitude increased even after adjusting for the effects of respiratory rate and tidal volume. The different effects of the two respiratory stimuli on RSA magnitude were noted at any level of ventilation and support the original hypothesis. abstract_id: PUBMED:3931516 Influence of nifedipine on xylazine-induced acute pressor response in halothane-anesthetized dogs. Effects and interaction of nifedipine (Ca channel blocker) and xylazine (mixed alpha agonist) during halothane anesthesia were examined in 6 dogs. After achievement of steady-state halothane (1.35%) anesthesia, blood pressure (BP) and heart rate (HR) were recorded in these dogs during 3-minute saline or nifedipine (20 micrograms/kg) infusion periods. Seven minutes after the end of saline or nifedipine infusion, xylazine (1.1 mg/kg of body weight) was infused over a 2-minute period.
After saline pretreatment, xylazine administration increased diastolic BP (33.67 +/- 3.91 mm of Hg) and decreased HR. Nifedipine infusion induced a transient reduction in BP, accompanied by a more persistent increase in HR. Compared with saline pretreatment, nifedipine pretreatment significantly decreased the acute increase in diastolic BP (33.67 +/- 3.91 vs 14.00 +/- 2.94 mm of Hg) which occurred during xylazine injection. After saline and nifedipine infusions, xylazine administration decreased HR by 30 +/- 15.02 and 36.5 +/- 10.36 beats/min, respectively. A pronounced sinus arrhythmia and/or 2nd-degree atrioventricular block developed in all dogs during xylazine injection after saline infusion. Arrhythmias were not observed in the dogs after nifedipine infusion. Nifedipine's Ca-blocking action depressed xylazine-induced acute vasoconstriction and the concomitant increase in diastolic BP. Because alpha 2-, but not alpha 1-adrenoceptor-mediated vasoconstriction is Ca-dependent, these results indicate that a portion of the acute pressor response induced by IV xylazine in halothane-anesthetized dogs may be alpha 2-mediated. Seemingly, nifedipine-induced hypotension and damping of xylazine-induced increases in BP attenuated xylazine's actions on cardiac rate and rhythm. Answer: Yes, permanent sinus arrhythmia in conscious dogs can be suppressed with anesthesia. A study involving 26 normal mongrel dogs and 12 dogs with experimental myocardial infarction found that sinus arrhythmia, which was present in all 26 conscious dogs, was suppressed by anesthesia. The anesthetic used was pentobarbital at a dose of 30 mg/kg. The suppression of sinus arrhythmia and heart rate variability (HRV) was attributed to compensatory tachycardia and alterations in baroreflexes, which occurred under the influence of anesthesia. This suppression was also observed in dogs with myocardial infarction, indicating that the conscious state and anesthesia significantly influence HRV and sinus arrhythmia in dogs (PUBMED:8705324).
Instruction: A multicenter analysis of distal pancreatectomy for adenocarcinoma: is laparoscopic resection appropriate? Abstracts: abstract_id: PUBMED:20421049 A multicenter analysis of distal pancreatectomy for adenocarcinoma: is laparoscopic resection appropriate? Background: As compared with open distal pancreatectomy (ODP), laparoscopic distal pancreatectomy (LDP) affords improved perioperative outcomes. The role of LDP for patients with pancreatic ductal adenocarcinoma (PDAC) is not defined. Study Design: Records from patients undergoing distal pancreatectomy (DP) for PDAC from 2000 to 2008 from 9 academic medical centers were reviewed. Short-term (node harvest and margin status) and long-term (survival) cancer outcomes were assessed. A 3:1 matched analysis was performed for ODP and LDP cases using age, American Society of Anesthesiologists (ASA) class, and tumor size. Results: There were 212 patients who underwent DP for PDAC; 23 (11%) of these were approached laparoscopically. For all 212 patients, 56 (26%) had positive margins. The mean number of nodes (+/- SD) examined was 12.6 +/- 8.4 and 114 patients (54%) had at least 1 positive node. Median overall survival was 16 months. In the matched analysis there were no significant differences in positive margin rates, number of nodes examined, number of patients with at least 1 positive node, or overall survival. Logistic regression for all 212 patients demonstrated that advanced age, larger tumors, positive margins, and node positive disease were independently associated with worse survival; however, method of resection (ODP vs. LDP) was not. Hospital stay was 2 days shorter in the matched comparison, which approached significance (LDP, 7.4 days vs. ODP, 9.4 days, p = 0.06). Conclusions: LDP provides similar short- and long-term oncologic outcomes as compared with ODP, with potentially shorter hospital stay. These results suggest that LDP is an acceptable approach for resection of PDAC of the left pancreas in selected patients. abstract_id: PUBMED:2447808 Is resection appropriate for adenocarcinoma of the pancreas? A cost-benefit analysis. Our data support the contention that biliary bypass combined with gastric bypass is the treatment of choice for the majority of patients with adenocarcinoma of the pancreas. Compared with resection, operative morbidity and mortality rates were lower, length of hospitalization was shorter, and the cost of treatment was lower. There was no significant difference in survival. In choosing candidates for resection, the surgeon must balance the meager chances for cure (less than 1 percent) with the considerable operative hazard and the risk of lethal, costly complications. In our view, resection should be considered only for physiologically young patients with small localized lesions. These patients should be referred to surgeons specializing in pancreatic surgery who have had operative mortality rates of less than 10 percent. Pancreatic resection must, therefore, be deprived of its appeal as a procedure to which every surgeon must aspire. abstract_id: PUBMED:30649607 Significance of neoadjuvant therapy for borderline resectable pancreatic cancer: a multicenter retrospective study. Purpose: Neoadjuvant therapy (NAT) is increasingly used to improve the prognosis of patients with borderline resectable pancreatic cancer (BRPC), albeit with little evidence of its advantage over upfront surgical resection. We analyzed the prognostic impact of NAT on patients with BRPC in a multicenter retrospective study.
Methods: Medical data of 165 consecutive patients who underwent treatment for BRPC between January 2010 and December 2014 were collected from ten institutions. We defined BRPC according to the National Comprehensive Cancer Network guidelines, and subclassified patients according to venous invasion alone (BR-PV) and arterial invasion (BR-A). Results: The rates of NAT administration and resection were 35% and 79%, respectively. There were no significant differences in resection rates and prognoses between patients in the BR-PV and BR-A subgroups. NAT did not have a significant impact on prognosis according to intention-to-treat analysis. However, in patients who underwent surgical resection, NAT was independently associated with longer overall survival (OS). The median OS of patients who underwent resection after NAT (53.7 months) was significantly longer than that of patients who underwent upfront (17.8 months) or no resection (14.9 months). The rates of superior mesenteric or portal vein invasion, lymphatic invasion, venous invasion, and lymph node metastasis were significantly lower in patients who underwent resection after NAT than in those who underwent upfront resection despite similar baseline clinical profiles. Conclusions: Resection after NAT in patients with BRPC is associated with longer OS and lower rates of both invasion of the surrounding tissues and lymph node metastasis. abstract_id: PUBMED:33386555 Limited resection vs. pancreaticoduodenectomy for primary duodenal adenocarcinoma: a systematic review and meta-analysis. It is well known that surgery is the mainstay treatment for duodenal adenocarcinoma. However, the optimal extent of surgery is still under debate. We aimed to systematically review and perform a meta-analysis of limited resection (LR) and pancreatoduodenectomy for patients with duodenal adenocarcinoma. A systematic electronic database search of the literature was performed using PubMed and the Cochrane Library. All studies comparing LR and pancreatoduodenectomy for patients with duodenal adenocarcinoma were selected. Long-term overall survival was considered as the primary outcome, and perioperative morbidity and mortality as the secondary outcomes. Fifteen studies with a total of 3166 patients were analyzed; 995 and 1498 patients were treated with limited resection and pancreatoduodenectomy, respectively. Eight and 7 studies scored a low and intermediate risk of publication bias, respectively. The LR group had a more favorable result than the pancreatoduodenectomy group in overall morbidity (odds ratio [OR]: 0.33, 95% confidence interval [CI] 0.17-0.65) and postoperative pancreatic fistula (OR: 0.13, 95% CI 0.04-0.43). Mortality (OR: 0.96, 95% CI 0.70-1.33) and overall survival (OR: 0.61, 95% CI 0.33-1.13) were not significantly different between the two groups, although comparison of the two groups stratified by prognostic factors, such as T categories, was not possible due to a lack of detailed data. LR showed long-term outcomes equivalent to those of pancreatoduodenectomy, while the perioperative morbidity rates were lower. LR could be an option for selected duodenal adenocarcinoma patients with appropriate location or depth of invasion, although further studies are required. abstract_id: PUBMED:32819373 Multivisceral resection for adenocarcinoma of the pancreatic body and tail-a retrospective single-center analysis. Background: Adenocarcinoma of the pancreatic body and tail is associated with a dismal prognosis.
As patients frequently present with locally advanced tumors, extended surgery including multivisceral resection is often necessary in order to achieve tumor-free resection margins. The aim of this study was to identify prognostic factors for postoperative morbidity and mortality and to evaluate the influence of multivisceral resections on patient outcome. Methods: This is a retrospective analysis of 94 patients undergoing resection of adenocarcinoma located in the pancreatic body and/or tail between April 1995 and December 2016 at our institution. Uni- and multivariable Cox regression analysis was conducted to identify independent prognostic factors for postoperative survival. Results: Multivisceral resections, including partial resections of the liver, the large and small intestines, the stomach, the left kidney and adrenal gland, and major vessels, were carried out in 47 patients (50.0%). The median postoperative follow-up time was 12.90 (0.16-220.92) months. Median Kaplan-Meier survival after resection was 12.78 months with 1-, 3-, and 5-year survival rates of 53.2%, 15.8%, and 9.0%. Multivariable Cox regression identified coeliac trunk resection (p = 0.027), portal vein resection (p = 0.010), intraoperative blood transfusions (p = 0.005), and lymph node ratio, expressed as a percentage (p = 0.001), as independent risk factors for survival. Although postoperative complications requiring surgical revision were observed more frequently after multivisceral resections (14.9 versus 2.1%; p = 0.029), postoperative survival was not significantly inferior when compared to patients undergoing standard distal or subtotal pancreatectomy (12.35 versus 13.87 months; p = 0.377). Conclusions: Our data indicate that multivisceral resection in cases of locally advanced pancreatic carcinoma of the body and/or tail is justified, as it is not associated with increased mortality and can even facilitate long-term survival, albeit with an increase in postoperative morbidity. Simultaneous resections of major vessels, however, should be considered carefully, as they are associated with inferior survival. abstract_id: PUBMED:26893222 Pancreatectomy with Mesenteric and Portal Vein Resection for Borderline Resectable Pancreatic Cancer: Multicenter Study of 406 Patients. Purpose: The role of pancreatectomy with en bloc venous resection and the prognostic impact of pathological venous invasion are still debated. The authors analyzed perioperative and survival results, and prognostic factors, of pancreatectomy with en bloc portal (PV) or superior mesenteric vein (SMV) resection for borderline resectable pancreatic carcinoma, focusing on predictive factors of histological venous invasion and its prognostic role. Methods: A multicenter database of 406 patients who underwent pancreatectomy with en bloc SMV and/or PV resection for pancreatic adenocarcinoma was analyzed retrospectively. Univariate and multivariate analysis of factors related to histological venous invasion was performed using a logistic regression model. Prognostic factors were analyzed with the log-rank test and multivariate proportional hazard regression analysis. Results: Complications occurred in 51.9% of patients and postoperative death in 7.1%. Histological invasion of the resected vein was confirmed in 56.7% of specimens. Five-year survival was 24.4% with median survival of 24 months.
Vein invasion at preoperative computed tomography (CT), N status, number of metastatic lymph nodes, and preoperative serum albumin were related to pathological venous invasion in univariate analysis, and vein invasion at CT was independently related to venous invasion in multivariate analysis. Use of a preoperative biliary drain was significantly associated with postoperative complications. Multivariate proportional hazard regression analysis demonstrated a significant correlation between overall survival and histological venous invasion and administration of adjuvant therapy. Conclusions: This study identifies predictive factors of pathological venous invasion and prognostic factors for overall survival, including pathological venous invasion, which may help with patient selection for different treatment protocols. abstract_id: PUBMED:29778616 Routine portal vein resection for pancreatic adenocarcinoma shows no benefit in overall survival. Background: Extended pancreatic resections including resections of the portal vein (PV) may nowadays be performed safely. Limitations in distinguishing tumor involvement from inflammatory adhesions, however, lead to portal vein resections (PVR) without evidence of tumor infiltration in the final histopathological examination. The aim of this study was to analyze the impact of these "false negative" resections on operative outcome and long-term survival. Methods: 40 patients who underwent pancreatic resection with PVR for pancreatic adenocarcinoma (PA) without tumor infiltration of the PV (PVR-group) were identified. In a 1:3 match these patients were compared to 120 patients after standard pancreatic resection without PVR (SPR-group) with regard to operative outcome and overall survival. Results: Survival analysis revealed that median survival was significantly shorter in the PVR group (311 days) as compared to the SPR group (558 days; p = 0.0011, hazard ratio 1.98, 95% CI: 1.31-2.98). Also, postoperative complications ≥ Clavien III occurred significantly more often in the PVR group (37.5% vs. 20.8%). Conclusions: Radical resection affords the best chance for long-term survival in patients with PA. Based on the results of this study, a routine resection of the PV as recently proposed may, however, not be recommended. abstract_id: PUBMED:36055170 Aorta to proper hepatic artery bypass with total pancreatectomy and celiac axis resection (TP-CAR) in a patient with locally advanced pancreas adenocarcinoma. Introduction: Total pancreatectomy with en-bloc celiac axis resection (TP-CAR) and interposition graft placement between the aorta and the proper hepatic artery is a technically demanding, very uncommonly performed operation, even in high-volume pancreatic centers. Presentation Of Case: We present, in clinical and technical detail, a patient with locally advanced adenocarcinoma of the pancreatic body and neck involving the celiac and common hepatic arteries and portal vein, who underwent neoadjuvant chemotherapy and radiation with very good response, followed by TP-CAR and aorto-proper hepatic artery bypass using a saphenous vein graft. The patient had an uneventful intraoperative and postoperative course, short hospital stay, and histology consistent with a curative resection.
Discussion: TP-CAR with common hepatic artery resection and proper hepatic artery reconstruction in patients with locally advanced pancreatic body cancer after appropriate neoadjuvant therapy can be performed safely and be potentially curative in centers with an established track record in advanced pancreatic surgery involving major peripancreatic vessels. Conclusion: TP-CAR with proper hepatic artery reconstruction is a rare but potentially curative operation for selected patients with otherwise unresectable pancreatic adenocarcinoma. abstract_id: PUBMED:24890182 Vascular resection during radical resection of pancreatic adenocarcinomas: evolution over the past 15 years. This literature review aimed to critically analyze the oncological results of vascular resection during pancreatectomy for adenocarcinoma in light of evolving concepts of locally advanced tumors and microscopic complete resection. The literature search was conducted in PubMed and Medline for the period June 1994 to December 2012, retaining English as the language of publication. The review of 12 publications indicated that mortality and morbidity rates were not significantly different for pancreatectomy with or without venous resection (VR). Six comparative studies showed worse long-term survival in the VR group, though one meta-analysis, albeit with a significant population heterogeneity, demonstrated that the overall survival between VR and the control group was similar (12% vs. 17%). The compilation of 13 comparative studies showed a significantly lower rate of complete microscopic resection in the VR patient group compared to controls (63% vs. 77%; P = 0.001). Concerning pancreatectomy combined with arterial resection, the literature review indicated a significantly greater mortality and morbidity rate and a lower survival rate compared to pancreatic resection alone. Conflicting results concerning the long-term outcome of VR were due to the heterogeneity of the patient population. Since the only chance to cure patients of pancreatic adenocarcinoma is to obtain free resection margins, VR is a valid therapeutic option. Combining arterial resection with pancreatic resection, however, does not appear to be recommended. abstract_id: PUBMED:38021362 Arterial resection during operative management of pancreatic head and uncinate process adenocarcinoma: a systematic review. Background: Surgical resection is a part of the treatment pathways for the management of pancreatic cancer with arterial involvement. Arterial resection in this context is, however, not widely supported due to the paucity and diversity of the reported evidence in the literature. The aim of this systematic review is the presentation and analysis of the current evidence in the field. Methods: A systematic literature search of PubMed, MEDLINE and the Cochrane Library was performed for eligible studies, following the PRISMA guidelines. Information on baseline characteristics, peri-operative outcomes, survival outcomes and histopathological findings was extracted for pooling and analysis. Results: Eight studies with a total of 170 patients were included in the analysis. One hundred and thirty-five patients had a pancreaticoduodenectomy (PD) and 35 had a total pancreatectomy (TP) with arterial resection. Perioperative morbidity was 43.5% and mortality was 4.5%. Median overall survival (OS) was 12.7 months (range, 10.5-22.2 months). Overall 3- and 5-year survival for this cohort was reported at 6.6% (range, 0-42.4%) and 3.3% (range, 0-6.6%), respectively.
Resection margins were clear (R0) in a median of 75% of patients. Only a median of 45% of patients received neo-adjuvant chemotherapy. Conclusions: Arterial resection can be performed with acceptable peri-operative morbidity and mortality. However, survival outcomes are still not convincing, and future efforts should concentrate on selecting patients according to disease biology. Answer: The multicenter analysis of distal pancreatectomy for adenocarcinoma assessed whether laparoscopic distal pancreatectomy (LDP) is an appropriate method for the resection of pancreatic ductal adenocarcinoma (PDAC) compared to open distal pancreatectomy (ODP). The study reviewed records from patients who underwent distal pancreatectomy for PDAC from 2000 to 2008 across nine academic medical centers. The analysis included short-term outcomes such as node harvest and margin status, as well as long-term outcomes like overall survival. A matched analysis was performed to compare ODP and LDP cases based on age, American Society of Anesthesiologists (ASA) class, and tumor size. The results showed that out of 212 patients who underwent distal pancreatectomy for PDAC, 23 (11%) were approached laparoscopically. There were no significant differences in positive margin rates, the number of nodes examined, the number of patients with at least one positive node, or overall survival between the matched ODP and LDP groups. Logistic regression for all 212 patients indicated that advanced age, larger tumors, positive margins, and node-positive disease were independently associated with worse survival, but the method of resection (ODP vs. LDP) was not a factor affecting survival. Additionally, the hospital stay was about 2 days shorter for the LDP group compared to the ODP group, which approached statistical significance. The conclusion drawn from this analysis is that LDP provides similar short- and long-term oncologic outcomes compared to ODP, with the potential benefit of a shorter hospital stay. Therefore, the study suggests that LDP is an acceptable approach for the resection of PDAC of the left pancreas in selected patients (PUBMED:20421049).
Instruction: Participatory health councils and good governance: healthy democracy in Brazil? Abstracts: abstract_id: PUBMED:25889170 Participatory health councils and good governance: healthy democracy in Brazil? Introduction: The Brazilian Government created Participatory Health Councils (PHCs) to allow citizen participation in the public health policy process. PHCs are advisory bodies that operate at all levels of government and that bring together different societal groups to monitor Brazil's health system. Today they are present in 98% of Brazilian cities, demonstrating their popularity and thus their potential to help ensure that health policies are in line with citizen preferences. Despite their expansive reach, their real impact on health policies and health outcomes for citizens is uncertain. We thus ask the following question: Do PHCs offer meaningful opportunities for open participation and influence in the public health policy process? Methods: Thirty-eight semi-structured interviews with health council members were conducted. Data from these interviews were analyzed using a qualitative interpretive content analysis approach. A quantitative analysis of PHC data from the Sistema de Acompanhamento dos Conselhos de Saúde (SIACS) database was also conducted to corroborate findings from the interviews. Results: We learned that PHCs fall short in many of the categories of good governance. Government manipulation of the agenda and leadership of the PHCs, delays in the implementation of PHC decision making, a lack of training of council members on relevant technical issues, the largely narrow interests of council members, the lack of transparency and monitoring guidelines, a lack of government support, and a lack of inclusiveness are a few examples that highlight why PHCs are not as effective as they could be. Conclusions: Although PHCs are intended to be inclusive and participatory, in practice they seem to have little impact on the health policymaking process in Brazil. PHCs will only be able to fulfil their mandate when good governance is largely present. This will require a rethinking of their governance structures, processes, membership, and oversight. If change is resisted, the PHCs will remain largely limited to a good idea in theory that is disappointing in practice. abstract_id: PUBMED:17147308 Power relations and democracy in health councils in Brazil: a case study Background: Social participation is fundamental for the consolidation of the Brazilian Health Reform, with the aim of promoting equity, universality and democratization of access to health, and the health councils are vital forums for its realization. However, the authoritarian culture of our institutions makes effective participation in these organizations difficult. The objective of this work was to analyze the power relations which permeate the practices of a health council, seeking to understand discourse as a builder of participation in health. Methods: A qualitative case study in a municipal health council in a Brazilian town. The discourse analysis was carried out on the minutes of two management terms, legal documents, interviews and observation during meetings. Results: The quantitative presence of the user representatives does not correspond to the quality of their participation.
The governmental sector uses most of the speaking turns, establishing monological relations based on a lack of symmetry determined by the level of education, professional training, and social status of the councilors and the relations of knowledge and power present in the health institutions. We identified resistance from the user sectors and the health professionals; however, it is scattered, fragile and of little consequence. Conclusions: Current practices can, on the contrary, work against democracy; it is therefore necessary to invest in the empowerment of councilors and users in the day-to-day reality of health care. abstract_id: PUBMED:27782831 Civil society participation in the health system: the case of Brazil's Health Councils. Background: Brazil created Health Councils to bring together civil society groups, health professionals, and government officials in the discussion of health policies and health system resource allocation. However, several studies have concluded that Health Councils are not very influential on healthcare policy. This study probes this issue further by providing a descriptive account of some of the challenges civil society faces within Brazil's Health Councils. Methods: Forty semi-structured interviews with Health Council Members at the municipal, state and national levels were conducted in June and July of 2013 and May of 2014. The geographical location of the interviewees covered all five regions of Brazil (North, Northeast, Midwest, Southeast, South) for a total of 5 different municipal Health Councils, 8 different state Health Councils, and the national Health Council in Brasilia. Interview data was analyzed using a thematic approach. Results: Health Councils lack legal authority, which limits their ability to hold the government accountable for its health service performance, and thus hinders their ability to fulfill their mandate. Equally important, their membership guidelines create a limited level of inclusivity that seems to benefit only well-organized civil society groups. There is a reported lack of support and recognition from the relevant government that negatively affects the degree to which Health Council deliberations are implemented. Other deficiencies include an insufficient amount of resources for Health Council operations, and a lack of training for Health Council members. Lastly, strong individual interests among Health Council members tend to influence how members participate in Health Council discussions. Conclusions: Brazil's Health Councils fall short in providing an effective forum through which civil society can actively participate in health policy and resource allocation decision-making processes. Restrictive membership guidelines, a lack of autonomy from the government, vulnerability to government manipulation, a lack of support and recognition from the government, and insufficient training and operational budgets have made Health Councils largely forums for consultation. Our conclusions highlight that, among other issues, Health Councils need to have the legal authority to act independently to promote government accountability, membership guidelines need to be revised in order to include members of marginalized groups, and better training of civil society representatives is required to help them make more informed decisions.
abstract_id: PUBMED:23338491 Participatory potential and deliberative function: a debate on broadening the scope of democracy through the health councils This article reflects upon the relation between democracy and health councils. It seeks to analyze the councils as a space for broadening the scope of democracy. First, some characteristics and principles of the liberal democratic regime are presented, with an emphasis on the minimalist and procedural approach to decision-making. The fragilities of the representative model and the establishment of new relations between the Government and society are then discussed in light of the new social grammar and the complexity of the division between governmental and societal responsibilities. The principles of deliberative democracy and the idea of substantive democracy are subsequently presented. Broadening the scope of democracy is understood not only as the guarantee of civil and political rights, but also, especially, of social rights. Lastly, based on discussion of the participation and deliberation categories, the health councils are analyzed as potential mechanisms for broadening the scope of democracy. abstract_id: PUBMED:25715302 The legitimacy of representation in forums with social participation: the case of the Bahia State Health Council, Brazil The electoral representation model is insufficient and inadequate for new participatory roles such as those played by members of health councils. This article analyzes representation and representativeness in the Bahia State Health Council, Brazil. The study included interviews with 20 current or former members of the State Health Council, analysis of the council minutes and bylaws, and observation of plenary meetings. The discourse analysis technique was used to analyze interventions by members. The article discusses the results in four analytical lines: the process by which various organizations name representatives to the Council; the relationship between Council members and their constituencies; interest representation in the Council; and criteria used by the plenary to take positions. The study reveals various problems with the representativeness of the Bahia State Health Council and discusses the peculiarities of representation in social participation forums and the characteristics that give legitimacy to representatives. abstract_id: PUBMED:33175048 Health councils and dissemination of SUS management instruments: an analysis of portals in Brazilian capitals. Coparticipants in the performance, planning, and control of the implementation of public policies, Health Councils are public spaces aiming at the participation and social control of health actions concerning the community. Access to information is a crucial condition so that not only advisers but also civil society can propose, monitor, and evaluate the actions taken in health. Based on this understanding and the guidance provided by Law N° 141/2012 on the visibility of SUS management instruments, this study aimed to verify how the municipal portals of Brazilian capitals have disseminated their Health Councils and the necessary instruments for analyzing, monitoring, and following up on health policy. The research showed that, although such dissemination is required by law, it occurs unevenly across the capitals. Only 14% of the investigated portals make SUS management instruments available on the council pages, and 33% do not disclose information about the council or management instruments.
The lack of such content can weaken the council's institutionality and, ultimately, participatory democracy itself. abstract_id: PUBMED:31939553 Health councils and participatory effectiveness: a performance assessment study The article aims to analyze the results of a performance assessment model for health councils. The theoretical and methodological frame of reference was the spider graph method, adapted to the reality of health councils. The assessment matrix considered five dimensions with the greatest influence on participation: autonomy, organization, representativeness, community involvement, and political influence. Based on assessment of the indicators, we estimated the performance value for each dimension and located it on the five-axis graph. The matrix was applied to the Health Council in Vitória da Conquista, Bahia State, Brazil. We used document analysis, observation of meetings, and interviews with 18 council members as the data collection techniques. The results show an advanced level of the council's autonomy with adequate structural conditions, but with limitations in financial independence. The organizational dimension reached the maximum level of performance, with regular meetings, availability of information for council members, and functioning of thematic commissions. Representativeness was the dimension with the worst performance, displayed by the weak relationship between the representatives and the organizations. The community involvement dimension displayed an advanced level, with high participation by council and non-council members in the meetings and in actions, with numerous proposals. The political influence dimension showed intermediate performance. We observed greater influence by the social representatives on the decision-making process and low capacity for follow-up on policies. The matrix proved adequate and feasible for performance assessment of health councils. abstract_id: PUBMED:18041561 Decision-making process and health management councils: theoretical approaches With the institutionalization of participation in health, through conferences and management councils at national, state, municipal and local levels, a process of democratization is initiated in the health area. However, in relation to the health councils in particular, there is still much to be done, including improving the quality of the decision-making process. This work aims to place the decision-making process in its theoretical context in terms of participatory democracy, the elements which make it up, and the factors which influence its development, and, finally, to explore some possibilities of this theoretical basis to analyze the practices of the health councils in the area of health. It is hoped that it will make a theoretical contribution to the analyses carried out in this area, in order to provide a decision-making process that is more inclusive in terms of participation. abstract_id: PUBMED:24184843 Using a participatory evaluation design to create an online data collection and monitoring system for New Mexico's Community Health Councils. We present the collaborative development of a web-based data collection and monitoring plan for thirty-two county councils within New Mexico's health council system. The monitoring plan, a key component in our multiyear participatory statewide evaluation process, was co-developed with the end users: representatives of the health councils.
Guided by the Institute of Medicine's Community Health Improvement Process framework, we first developed a logic model that delineated processes and intermediate systems-level outcomes in council development, planning, and community action. Through the online system, health councils reported data on intermediate outcomes, including policy changes and funds leveraged. The system captured data that were common across the health council system, yet was also flexible so that councils could report their unique accomplishments at the county level. A main benefit of the online system was that it provided the ability to assess intermediate outcomes across the health council system. Developing the system was not without challenges, including creating processes to ensure participation across a large rural state; creating shared understanding of intermediate outcomes and indicators; and overcoming technological issues. Despite these challenges, however, the benefits of committing to participatory processes far outweighed the costs. abstract_id: PUBMED:34666755 Social participation in the unified health system of Brazil: an exploratory study on the adequacy of health councils to resolution 453/2012. Introduction: Social participation is one of the guidelines of the Brazilian health system. Health councils are collegiate instances of participation established by Law 8.142/90. The most recent legal regulation for council organization and functioning was established through Resolution 453/2012. The institution of health councils has a permanent and deliberative nature to act in the formulation, deliberation and control of health policy implementation, including in economic and financial aspects. Objective: To evaluate the compliance of health councils with the directives for the establishment, restructuring and operation of the councils from Brazil, based on Resolution 453/2012. Methods: An exploratory, descriptive study that used the Health Council Monitoring System as a data source. Qualitative variables were selected to identify the characteristics related to the councils' establishment (legal instruments for establishment), the strategies adopted for restructuring (budget allocation, existence of an executive secretariat, provision of a dedicated office) and the characteristics of the health councils' operation (frequency of regular meetings, existence of a board of directors, the election of the board of directors). Results: The study analyzed three groups of characteristics related to the constitution, strategies adopted for restructuring and the functioning of the councils. Regarding the constitution of the councils, the findings revealed that the vast majority was constituted in accordance with the legislation and, therefore, is in compliance with Resolution 453/2012. In the second group of characteristics, which describe the restructuring of councils, the study found that less than half of registered councils are in compliance with the standard. And, finally, in the third group of characteristics, it was found that the boards have adopted different frequencies for regular meetings and approximately 50% of the boards studied have a board of directors. Conclusions: The councils still do not meet the minimum conditions necessary to fulfil their role in the Unified Health System (SUS), as stipulated in Resolution 453/2012. This situation requires monitoring by public oversight agencies.
Despite the increase in popular participation with the creation of the health councils, this study demonstrated that most councils still do not meet the minimum conditions for monitoring public health policy. The improvement of the Health Councils Monitoring System (SIACS) to become an instrument for monitoring the councils, with the definition of goals and results, may contribute to the organization of the councils and, therefore, to the realization of social participation in Brazil. Answer: Participatory Health Councils (PHCs) in Brazil were established to involve citizens in the public health policy process, operating at various levels of government and bringing together different societal groups to oversee the health system. While they are widespread, being present in 98% of Brazilian cities, their actual impact on health policies and outcomes is uncertain (PUBMED:25889170). Research involving semi-structured interviews with health council members and quantitative analysis of PHC data has indicated that PHCs often fall short in many aspects of good governance. Issues such as government manipulation, delays in decision implementation, lack of technical training for council members, narrow interests of members, lack of transparency, insufficient government support, and lack of inclusiveness have been highlighted as reasons for their limited effectiveness (PUBMED:25889170). Moreover, the power dynamics within health councils can be problematic, with the governmental sector dominating discussions and creating asymmetries based on education, professional training, and social status. This imbalance can hinder effective participation and democracy within these councils (PUBMED:17147308). Health Councils also suffer from a lack of legal authority, which restricts their ability to hold the government accountable for health service performance. Membership guidelines limit inclusivity, often benefiting well-organized civil society groups over marginalized ones. Additionally, a lack of support and recognition from the government, insufficient resources, and inadequate training for members further weaken the councils' influence on health policy and resource allocation (PUBMED:27782831). The legitimacy of representation within these councils is another concern, as the traditional electoral representation model does not adequately address the participatory roles of health council members. Issues with the process of naming representatives, the relationship between members and their constituencies, and the criteria for decision-making have been identified as problematic (PUBMED:25715302). Despite these challenges, health councils are seen as potential mechanisms for broadening the scope of democracy, provided there is an investment in empowering councilors and users in the healthcare system (PUBMED:23338491). In summary, while PHCs in Brazil are intended to foster a healthy democracy by involving citizens in health policy decisions, various structural and operational challenges limit their effectiveness. For PHCs to truly contribute to good governance, there needs to be a rethinking of their governance structures, processes, membership, and oversight to ensure they can fulfill their mandate (PUBMED:25889170).
Instruction: Video-assisted mediastinoscopy compared with conventional mediastinoscopy: are we doing better? Abstracts: abstract_id: PUBMED:21601176 A comparative analysis of video-assisted mediastinoscopy and conventional mediastinoscopy. Background: The objective of this study was to compare outcomes of video-assisted mediastinoscopic lymph node biopsy in patients with non-small cell lung cancer (NSCLC) with outcomes of conventional mediastinoscopic lymph node biopsy in this same patient population. Methods: All mediastinoscopies at one medical center from January 2008 to December 2009 were analyzed. Numbers of lymph nodes dissected, stations biopsied, remnant lymph nodes when major lung resection was performed after mediastinoscopic lymph node biopsy, and complications were recorded. Results: Of 521 mediastinoscopies, 222 were in the conventional mediastinoscopic lymph node biopsy group (CM group) and 299 were in the video-assisted mediastinoscopic lymph node biopsy group (VAM group). Eleven complications (2.11%) occurred, with more occurring in the CM group (3.6%) than in the VAM group (1.6%; p=0.030). The total number of dissected nodes was higher in the VAM group (mean, 8.53±5.8) than in the CM group (mean, 7.13±4.9; p=0.004), and there was no statistically significant difference between the average number of stations sampled in the CM group (2.98±0.7) and in the VAM group (3.06±0.75; p=not significant). The number of remnant lymph nodes when major lung surgery was performed after mediastinoscopy was lower in the VAM group (mean, 5.05±4.5) than in the CM group (mean, 7.67±6.5; p<0.001). Conclusions: This study found that video-assisted mediastinoscopic lymph node biopsy had fewer complications than did the conventional method. More lymph nodes were examined and fewer lymph nodes remained after mediastinoscopy by video-assisted mediastinoscopy (VAM) than by conventional mediastinoscopy. abstract_id: PUBMED:20417780 Video-assisted mediastinoscopy compared with conventional mediastinoscopy: are we doing better? Background: Conventional mediastinoscopy (CM) is increasingly being replaced by video-assisted mediastinoscopy (VAM), which potentially offers a better yield and a better safety profile. Methods: All 645 mediastinoscopies (505 CM, 140 VAM) performed between May 2004 and May 2008 were reviewed. Numbers of stations biopsied, total number of lymph nodes dissected, pathology results, and complications were recorded. Patients were divided into two groups: staging for lung cancer group (n = 500) and diagnostic group (n = 145). The staging group was further analyzed, using 304 patients who eventually underwent thoracotomy to evaluate accuracy and negative predictive value of mediastinoscopy, comparing the two methods (233 CM, 71 VAM). Results: Average age was 65 years (range, 26 to 91), and 382 were male. There was no mortality. Eight complications (1.2%) occurred, more in the VAM group (3.8%) than in the CM group (0.8%; p = 0.04). The total number of dissected nodes was higher in the VAM group than in the CM group (7.0 +/- 3.2 versus 5.0 +/- 2.8, p < 0.001), and so was the number of stations sampled (3.6 versus 2.6, p < 0.01). Sensitivity was higher for VAM (95% versus 92.2%, p = not significant), and so was the negative predictive value (98.6% versus 95.7%, p = not significant). Most false negative biopsies (8 of 11, 73%) occurred in station 7. Conclusions: Both methods are safe.
More lymph nodes and stations were evaluated by VAM, with a trend toward higher negative predictive value. The higher rate of minor complications seen with VAM might be related to a more aggressive and thorough dissection. abstract_id: PUBMED:11722067 Combined video-assisted mediastinoscopy and video-assisted thoracoscopy in the management of lung cancer. Background: This study seeks to assess the safety and usefulness of combined video-assisted mediastinoscopy and video-assisted thoracoscopy in the management of patients with lung cancer. Methods: Ten consecutive patients with lung neoplasms were evaluated. Indications for this combined approach included inconclusive findings from imaging techniques concerning locoregional extension and resectability; possible involvement of different structures not accessible to a single procedure; and failure to obtain histologic diagnosis by a single technique. Results: Histologic diagnosis was obtained in 6 patients without preoperative histologic typing. In 3 patients, in contrast with preoperative imaging studies, combined thoracoscopy and mediastinoscopy showed the resectability of the primary tumor and the absence of metastatic mediastinal lymph nodes. These findings were confirmed at thoracotomy. In 3 other patients prevascular lymph node metastases were found. They underwent neoadjuvant chemotherapy; at subsequent operation, a complete resection was possible. In the remaining four cases combined exploration revealed definitive contraindications to operation (recognition of oat-cell carcinoma, n = 2; T4 status, n = 1; T3N2, n = 1). Conclusions: Combined video-assisted mediastinoscopy and video-assisted thoracoscopy seems to be a safe and useful tool in the management of selected patients with lung neoplasms. Both the extent of the primary tumor and the possible intrathoracic spread may be exhaustively evaluated. In patients with left lung cancer a complete exploration of the aortopulmonary window is possible. abstract_id: PUBMED:22159246 Does video-assisted mediastinoscopy have a better lymph node yield and safety profile than conventional mediastinoscopy? A best evidence topic was written according to a structured protocol. The question addressed was whether video-assisted mediastinoscopy (VAM) has a better lymph node yield and safety profile than conventional mediastinoscopy (CM). A total of 194 papers were found, using the reported searches, of which five represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. Two studies to date have directly compared CM and VAM with respect to lymph node yield, calculated diagnostic performance and complication rate. In both of these, lymph node yield is shown to be higher using VAM, with better sensitivity, negative predictive value and accuracy rates. The favourable figures for lymph node sampling are found to be statistically significant in the single study providing such analysis. Complication rates using VAM are low; however, in the one instance where it is reported as higher than CM, the extensive lymph node dissection used in this technique may be a reasonable explanation for this finding. All studies described here exemplify VAM as a safe and useful tool in mediastinal staging, lymph node dissection and tissue diagnosis of mediastinal diseases given its superior visualization of surrounding structures and the advantage of bimanual dissection.
The future scope for diagnostic and therapeutic indications of cervical mediastinoscopy is anticipated with recent advances and new techniques, such as video-assisted mediastinoscopic lymphadenectomy and virtual mediastinoscopy. abstract_id: PUBMED:12842542 Video-assisted mediastinoscopy: experience from 240 consecutive cases. Background: We report our experience with video-assisted mediastinoscopy. Methods: We retrospectively reviewed clinical records of all patients who underwent video-assisted mediastinoscopy in a 26-month period. Video-assisted mediastinoscopy was performed in the presence of enlarged lymph nodes (short axis > 1 cm) found on computed tomography scan. Data about operative time, node stations sampled, number of biopsies, and operative complications were collected. Results of the pathologic examination were recorded, as well as (when different) the definitive diagnosis. Results: Video-assisted mediastinoscopy was performed in 240 consecutive patients. In 2 patients, the technique was employed for resection of a mesothelial cyst. In the other cases, it was used for diagnosis of enlarged nodes or staging of lung cancer. Mean number of biopsies was 6.0; mean number of sampled nodal stations was 2.3. Mean operative time was 36.6 minutes. Two operative complications occurred: a pneumothorax not requiring drainage and an injury to the innominate artery requiring manubrial split and suture. In 192 patients, the definitive diagnosis was lung cancer (18 small-cell lung cancers). In the remaining 46 patients, video-assisted mediastinoscopy allowed establishment of the diagnosis (sarcoidosis, n = 22; reactive hyperplastic lympho-adenitis, n = 13; tuberculosis, n = 4; involvement by malignancies other than lung cancer, n = 7). Among the 174 patients with non-small cell lung cancer, mediastinal nodal involvement was recognized in 107 cases (N3, n = 28; N2, n = 79). Sixty-seven patients were staged N < 2; 47 underwent thoracotomy. Postthoracotomy staging agreed with video-assisted mediastinoscopy staging in 44 cases (93.6%). Conclusions: Video-assisted mediastinoscopy proved to be safe and effective in nodal assessment of the mediastinum. abstract_id: PUBMED:19537118 The role of cervical mediastinoscopy and video-assisted thoracoscopy in the diagnosis and staging of thoraco-mediastinal neoplastic diseases Introduction: In this work we evaluate the role of mediastinoscopy and video-assisted thoracoscopy in the diagnosis and staging of coin lesions of the lung and mediastinal masses. Materials And Methods: Seventy-two patients (55 males and 17 females) with lung coin lesions and no previous histological diagnosis were admitted to our Institution from 1997 to 2007. Mean age was 59.4 for males (range 29-82) and 57.2 for females (range 14-79). Results: Mediastinoscopy was diagnostic in 95% of cases. In just one case mediastinoscopy failed and video-assisted thoracoscopy was performed, which allowed a diagnosis to be obtained. Video-assisted thoracoscopy led to a diagnosis in 98.1% of cases, as we observed only one failure. In this single case we converted the thoracoscopic approach to open, but despite the conversion it was not possible to make a diagnosis.
Discussion: In these ten years, thanks to appropriate indications for mediastinoscopy and video-assisted thoracoscopy, the use of thoracotomy for the diagnosis and staging of pulmonary neoplastic diseases has been reduced: we thus avoided 80% of unnecessary thoracotomies in patients affected by unresectable lung cancer, metastases (treated by atypical thoracoscopic resection) or benign diseases. Conclusion: The minimally invasive surgical exploration of the mediastinum and thoracic cavity allows all the necessary information (in terms of histology and staging) to be obtained for planning an adequate therapeutic protocol, while reducing postoperative pain and hospital stay in comparison to thoracotomy. abstract_id: PUBMED:27385137 The incidence of hoarseness after mediastinoscopy and outcome of video-assisted versus conventional mediastinoscopy in lung cancer staging. Objectives Theoretically, video-assisted mediastinoscopy (VM) should provide a decrease in the incidence of hoarseness in comparison with conventional mediastinoscopy (CM). Methods An investigation of 448 patients with NSCLC who underwent mediastinoscopy (n = 261 VM, n = 187 CM) between 2006 and 2010. Results With VM, the mean number of sampled LNs and of stations per case were both significantly higher (n = 7.91 ± 1.97 and n = 4.29 ± 0.81) than they were for CM (n = 6.65 ± 1.79 and n = 4.14 ± 0.84) (p < 0.001 and p = 0.06). Hoarseness was reported in 24 patients (5.4%), with VM procedures resulting in a higher incidence of hoarseness than CM procedures (6.9% vs. 3.2%) (p = 0.08). The incidence of hoarseness was observed to be more frequent in patients with left-lung carcinoma who had undergone a mediastinoscopy (p = 0.03). Hoarseness developed in 6% of the patients sampled at station 4L, whereas this ratio was 0% in patients who were not sampled at 4L (p = 0.07). A multivariate analysis showed that the presence of a tumor in the left lung is the only independent risk factor indicating hoarseness (p = 0.09). The sensitivity, NPV, and accuracy of VM were calculated to be 0.87, 0.95, and 0.96, respectively. The same staging values for CM were 0.83, 0.94, and 0.95, respectively. Conclusion VM, the presence of a tumor in the left lung, and 4L sampling via mediastinoscopy are risk factors for subsequent hoarseness. Probably due to a wider area of dissection, VM can lead to more frequent hoarseness. abstract_id: PUBMED:18635564 Mediastinoscopy and video-assisted thoracoscopic surgery: anesthetic pitfalls and complications. Endoscopic evaluation of the thoracic cavity was first described in 1910 when Jacobaeus used a cystoscope for pleural examination. Significant advances in thoracoscopic surgery, including the use of high-definition videoscopy and refinements in surgical technique, have created a vast array of increasingly complex procedures that can be performed. The minimally invasive nature of video-assisted thoracoscopic surgery (VATS) makes it ideal for diagnostic and therapeutic procedures in ambulatory and critically ill patients. Mediastinoscopy is often performed immediately preceding VATS to permit sampling of mediastinal lymph nodes. As the indications for thoracoscopic surgery expand, the anesthesiologist must be familiar with common anesthetic and surgical complications, which occur in up to 9% of patients. abstract_id: PUBMED:22108943 Is video mediastinoscopy a safer and more effective procedure than conventional mediastinoscopy? A best evidence topic in cardiothoracic surgery was written according to a structured protocol.
The question addressed was whether video-assisted mediastinoscopy (VAM) is a more effective procedure than conventional mediastinoscopy (CM). A total of 108 papers were identified using the search discussed below, of which eight presented the best evidence to answer the clinical question, as they included a sufficient number of patients to reach conclusions regarding the issues of interest for this review. Complications, complication rates, number of lymph node biopsies, number of stations sampled and training opportunities were included in the assessment. The author, journal, date and country of publication, patient group studied, study type, relevant outcomes, results and study weaknesses of the papers are tabulated. The literature search revealed that CM is a safe procedure associated with low mortality (0-0.05%) and morbidity (0-5.3%). CM has high levels of accuracy (83.8-97.2%) and negative predictive value (81-95.7%). Training in CM can be difficult, as the limited vision means that only one operator can effectively see at any time, so the trainer cannot directly monitor the dissection and the areas biopsied by the trainee. VAM is also a safe procedure, with results comparable to those of CM in terms of mortality (0%), morbidity (0.83-2.9%), accuracy (87.9-98.9%) and negative predictive values (83-98.6%). The main advantages are the higher number of biopsies taken (VAM, 6-8.5; CM, 5-7.13) and the number of mediastinal lymph node stations sampled (VAM, 1.9-3.6; CM, 2.6-2.98). VAM can be associated with more aggressive dissection, which can lead to more complications. The use of VAM can provide a better and safer training opportunity since both trainer and trainee can share the magnified image on the monitor. All available studies compare heterogeneous, non-matched groups of patients, which can bias the reported outcomes. There is a lack of comprehensive randomized studies to compare both procedures and to support any preference towards VAM over CM. We conclude that there is actually very little objective evidence of VAM superiority over CM. abstract_id: PUBMED:22173676 Comparison of video-assisted mediastinoscopy and video-assisted mediastinoscopic lymphadenectomy for lung cancer. Purpose: We compared the efficacy and complications of video-assisted mediastinoscopy (VAM) and video-assisted mediastinal lymphadenectomy (VAMLA) for mediastinal staging of lung cancer. Methods: Between March 2006 and July 2008, a total of 157 patients with non-small-cell lung cancer (NSCLC) underwent VAM (n = 113, 72%) or VAMLA (n = 44, 28%). We studied them retrospectively. Data for the operating time, node stations sampled/dissected, number of biopsies, and the patients who were pN0 by mediastinoscopy and underwent thoracotomy were collected. The false-negative rate was calculated. Demographics and operative complications were analyzed. Results: The overall complication rate was 5.7% (n = 9). The most common complication was hoarseness (n = 8). Complications were seen significantly more often after VAMLA than after VAM (11.3% vs. 2.6%, P = 0.04). There were no deaths. The mean number of removed lymph nodes (8.43 ± 1.08) and the station numbers (4.81 ± 0.44) per patient were higher with VAMLA than with VAM (7.65 ± 1.68, P = 0.008 and 4.38 ± 0.80, P = 0.001, respectively). The mean operating time was 44.8 ± 6.6 min for VAM and 82.0 ± 7.8 min for VAMLA. Patients diagnosed as pN2 numbered 9 in the VAMLA group and 27 in the VAM group.
The patients diagnosed as pN0 with mediastinoscopy then underwent thoracotomy (VAM 77, VAMLA 32). When they were investigated for the presence of mediastinal lymph nodes, there were three (3.8%) false-negative results in the VAM group and five (15.6%) in the VAMLA group. Sensitivity, accuracy, and negative predictive values for VAM and VAMLA were 0.90/0.97/0.96 and 0.64/0.87/0.84, respectively. Conclusion: VAMLA was found to be superior to VAM with regard to the number of stations and lymph nodes. Complications after VAMLA were common. The sensitivity and NPV of VAM for mediastinal staging are significantly higher than those of VAMLA. Answer: The comparison between video-assisted mediastinoscopy (VAM) and conventional mediastinoscopy (CM) has been the subject of several studies, with the general consensus indicating that VAM may offer several advantages over CM. VAM has been associated with a higher total number of dissected lymph nodes and a higher number of stations sampled compared to CM, which suggests a potentially better diagnostic yield (PUBMED:20417780; PUBMED:21601176; PUBMED:22173676). Additionally, some studies have reported that VAM has a trend toward higher negative predictive value, although this has not always reached statistical significance (PUBMED:20417780). In terms of safety, the results are somewhat mixed. One study found that VAM had fewer complications than CM (PUBMED:21601176), while another reported a higher rate of minor complications with VAM, which might be attributed to a more aggressive and thorough dissection (PUBMED:20417780). However, both methods are generally considered safe (PUBMED:12842542; PUBMED:19537118). VAM also appears to be a useful tool in the management of lung neoplasms, allowing for a comprehensive evaluation of the primary tumor and intrathoracic spread (PUBMED:11722067). It has been suggested that VAM provides superior visualization of surrounding structures and the advantage of bimanual dissection, which could be beneficial for both diagnostic and therapeutic indications (PUBMED:22159246). Despite these advantages, VAM has been associated with a higher incidence of hoarseness compared to CM, particularly in patients with left-lung carcinoma and those sampled at station 4L (PUBMED:27385137). This could be due to a wider area of dissection with VAM. Overall, while VAM seems to offer a better lymph node yield and may have a comparable or slightly improved safety profile compared to CM, the evidence is not entirely conclusive. Some studies have called for more comprehensive randomized studies to compare both procedures and to support a preference for VAM over CM (PUBMED:22108943). Nonetheless, the current literature suggests that VAM is a valuable technique in the staging and management of thoracic diseases, with potential advantages in terms of diagnostic performance and training opportunities for surgeons (PUBMED:22108943; PUBMED:22159246).
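A note on the diagnostic metrics quoted throughout this entry: the abstracts report sensitivity, specificity, negative predictive value (NPV), and accuracy without defining them. As a reminder (standard definitions, not taken from any one abstract), with TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives against the reference standard (here, pathological staging at thoracotomy):

\[
\text{Sensitivity} = \frac{TP}{TP+FN}, \qquad
\text{Specificity} = \frac{TN}{TN+FP}, \qquad
\text{NPV} = \frac{TN}{TN+FN}, \qquad
\text{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}.
\]

Read this way, the VAM figures quoted above (e.g., sensitivity 0.87, NPV 0.95, accuracy 0.96 in PUBMED:27385137) describe how rarely mediastinoscopy misses nodal disease that surgery later confirms.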
Instruction: Is protected specimen brush a reproducible method to diagnose ICU-acquired pneumonia? Abstracts: abstract_id: PUBMED:8325050 Is protected specimen brush a reproducible method to diagnose ICU-acquired pneumonia? Unlabelled: Protected specimen brush (PSB) is considered to be one of the standard methods for the diagnosis of ventilator-associated pneumonia, but to our knowledge, intraindividual variability in results has not been reported previously. Purpose: To compare the results of two PSB performed in the same subsegment on patients with suspected ICU-acquired pneumonia (IAP). Study Design: Between October 1991 and April 1992, each mechanically ventilated patient with suspected IAP underwent bronchoscopy with two successive PSB in the lung segment identified as abnormal on radiographs. Results of the two PSB cultures were compared using a 10^3 cfu/ml cutoff for a positive result. Four diagnostic categories were established during follow-up: definite pneumonia, probable pneumonia, excluded pneumonia, and uncertain pneumonia. Population: Forty-two episodes in 26 patients were studied; 60 percent of patients received prior antibiotic therapy. Thirty-two microorganisms were isolated from 24 pairs of PSB. The definite diagnosis was definite pneumonia in 7, probable pneumonia in 8, excluded pneumonia in 17, and uncertain pneumonia in 10 cases. Results: The two PSB recovered the same microorganisms, arguing for good qualitative reproducibility. The distinction between positive and negative results on the basis of the classic 10^3 cfu/ml threshold was less reproducible. For 24 percent of the microorganisms recovered and in 16.7 percent of episodes of suspected IAP, the two consecutive samples gave results falling on either side of the 10^3 cfu/ml cutoff. Discordance was higher when the definite diagnosis was certain or probable than when the diagnosis was excluded (p = 0.015). There was no statistical effect of the order of the samples between the two specimens for bacterial index and microorganism concentrations. Conclusion: These findings argue for the poor repeatability of PSB in suspected IAP and question the yield of the 10^3 cfu/ml threshold. In attempting to diagnose IAP, the results of PSB must be interpreted with caution considering the intraindividual variability. abstract_id: PUBMED:7606959 A comparison of bronchoscopic vs blind protected specimen brush sampling in patients with suspected ventilator-associated pneumonia. Background: Pneumonia is a common complication in patients undergoing mechanical ventilation and increases ICU mortality. The clinical diagnosis of ventilator-associated pneumonia, however, is unreliable, and many consider bronchoscopic-directed protected specimen brush sampling and quantitative culture the diagnostic method of choice. Bronchoscopy, however, is expensive and not readily available in many ICUs. Objective: To test the hypothesis that "blind" protected specimen brush (PSB) sampling may produce results similar to those of bronchoscopic-directed sampling. Setting: The medical ICU of a university-affiliated teaching hospital. Intervention: Patients with suspected ventilator-associated pneumonia (VAP) who had not received antibiotics for at least 48 h underwent "blind" and bronchoscopic-directed PSB sampling with quantitative culture. Results: Fifty-five paired PSB specimens were obtained from 53 patients. There was an 85% quantitative agreement between the blind and bronchoscopic-directed specimens.
The agreement was independent of the bronchopulmonary segment from which the bronchoscopic sampling was directed. Conclusion: The results of this study are consistent with the notion that blind PSB sampling and quantitative culture may prove to be a useful, cost-effective, and minimally invasive method of diagnosing VAP. abstract_id: PUBMED:3631729 Protected transbronchial needle aspiration and protected specimen brush in the diagnosis of pneumonia. Protected transbronchial needle aspiration (PTBNA) of pneumonic lung theoretically could bypass dislodged upper respiratory tract flora, a potential source of contamination of protected specimen brush (PSB) cultures. To evaluate the usefulness of PSB and PTBNA in establishing the etiology of pneumonia, we prospectively studied 20 patients with acute bacterial pneumonia not receiving antibiotics. After informed consent, patients had fiberoptic bronchoscopy under fluoroscopy to localize the pneumonia, and specimens were obtained by the PSB. The protective plug of a specially devised needle for PTBNA was pneumatically dislodged and aspiration was performed within the infiltrate under fluoroscopy. Quantitative cultures were plated immediately for aerobes, anaerobes, and Legionella. More than 4 × 10^3 organisms/brush or 1 × 10^4 organisms/ml of needle aspirate was considered consistent with infection. The results using PSB and PTBNA were compared in 15 of 20 patients in whom a definitive diagnosis (positive blood or pleural fluid culture) or presumptive diagnosis (expectorated sputum culture, clinical characteristics, and response to specific therapy) was established. The PSB and PTBNA cultures on uninfected control subjects (n = 5) being bronchoscoped for other reasons were negative. The PSB and PTBNA were each diagnostic in 2 of the 5 patients with definitive diagnoses. In the group with a presumptive diagnosis (n = 10), PSB was diagnostic in 7 of 10 and PTBNA in 9 of 10. The overall (definitive plus presumptive) diagnostic yield was 60% for PSB and 73% for PTBNA. Multiple organisms were isolated in high concentrations in 53% of the patients. The most common organisms recovered in addition to the primary pathogen were alpha-hemolytic streptococci. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:8564144 Reliability of quantitative cultures of protected specimen brush after freezing. Protected specimen brush (PSB) is considered to be one of the standard methods for diagnosing mechanical ventilator-acquired pneumonia at a threshold value ≥ 10^3 cfu/ml. Nevertheless, this procedure requires immediate culturing, which is not always possible 24 h per day. We therefore aimed to assess the diagnostic value of delayed quantitative cultures after specimen freezing. PSB was performed by fiberoptic bronchoscopy on 43 mechanically ventilated patients with suspicion of nosocomial bronchopneumonia. After the PSB procedure, two aliquots of 0.5 ml were prepared. One aliquot was plated immediately on different culture media (Group 1). A second aliquot was frozen at -80 degrees C for 24 h, then plated on the same culture media as Group 1 (Group 2). All samples were incubated for 48 h. The diagnostic threshold of PSB was 10^3 cfu/ml. A total of 47 samplings were performed on 43 patients. In Group 1, cultures from PSB were positive in 26 samples and revealed 41 species yielding ≥ 10^3 cfu/ml. In Group 2, PSB cultures were positive in 24 samples and revealed 36 species yielding ≥ 10^3 cfu/ml.
Despite a mean decrease in bacterial count of 1.00 ± 1.44 log10 (p < 0.001), greatest for Streptococcus pneumoniae and Escherichia coli (3.22 ± 2.21 log10 and 2.41 ± 0.52 log10, respectively), the sensitivity and specificity of quantitative cultures after specimen freezing, compared with immediate cultures, were 88% and 100%, respectively. We concluded that specimens from PSB could be frozen at -80 degrees C with good reliability except for S. pneumoniae and E. coli, enabling the PSB procedure to be performed around the clock. abstract_id: PUBMED:7065510 Bacteriologic diagnosis of nosocomial pneumonia in primates. Usefulness of the protected specimen brush. We evaluated the usefulness of a protected specimen brush (PSB) in obtaining uncontaminated lower respiratory tract material for bacteriologic examination in a primate model of oleic acid-induced acute diffuse lung injury and naturally occurring "nosocomial pneumonia." The bacterial cause of each pneumonia was established by either an immediate postmortem lung aspirate and/or antemortem blood culture. Bacterial pneumonia occurred in 12 of the 15 animals studied. The PSB cultures were sterile in 11 normal, intubated baboons and in 7 animals with non-pneumonic infiltrates by chest radiography. In each of these instances contamination of the specimen by proximal airway flora was avoided with the PSB. Among 10 baboons with bacteriologically documented pneumonias, the PSB cultures correctly identified the causative pathogen in 7 animals despite the presence of diffuse lung infiltrates radiographically, and multiple pathogenic bacteria in proximal airway secretions. Only 1 of 10 (10%) PSB specimens in these animals was contaminated with a possibly unrelated pathogen. The PSB largely avoided contamination of the culture specimen by proximal airway flora, and therefore should be useful in differentiating bacterial colonization of the airways from pneumonia in the presence of diffuse pulmonary infiltrates. abstract_id: PUBMED:10378562 Diagnosis of nosocomial pneumonia in cancer patients undergoing mechanical ventilation: a prospective comparison of the plugged telescoping catheter with the protected specimen brush. Study Objectives: Quantitative culture of protected samples of lower respiratory tract secretions obtained by a fiberoptic protected specimen brush (PSB) is widely accepted for the diagnosis of ventilator-associated pneumonia (VAP), but this diagnostic procedure is time consuming, expensive, and may give rise to iatrogenic complications, especially in cancer patients who often present with thrombocytopenia. The plugged telescoping catheter (PTC) could be a satisfactory alternative to the PSB in this setting. The aim of the present study was to evaluate the utility of the PTC for diagnosing VAP in ventilated cancer patients. Design: A prospective observational study. Setting: A 15-bed medical-surgical ICU in a comprehensive cancer center. Patients And Interventions: Over a 9-month period, 42 patients suspected of having bacterial VAP during mechanical ventilation underwent 69 bronchial samplings: a blinded PTC and a fiberoptic PSB were performed successively in each case. A positive culture for both sampling procedures was defined as the recovery of ≥ 10^3 cfu/mL of at least one potential pathogen. The PSB result was taken as the reference standard. Measurements And Results: The overall agreement between the techniques was 87% (60/69).
The PTC had a sensitivity of 67%, a specificity of 93%, a positive predictive value of 71%, and a negative predictive value of 91%. Conclusions: We conclude that the accuracy of the blinded PTC compares well with that of the PSB for the diagnosis of VAP in cancer patients. The sensitivity of the PTC observed herein, which is slightly lower than that described in previous studies, may be due to the blinded nature of the method: the indications for initial or secondary coupling with a directed sampling method in patients with suspicion of localized pneumonia remain to be determined. abstract_id: PUBMED:3802934 Use of the protected specimen brush in patients with endotracheal or tracheostomy tubes. Twenty-one patients on mechanical ventilators for greater than 48 hours who had new localized infiltrates were evaluated using a quantitative culture technique of the involved lung compared to the non-involved lung. Based on the clinical course, response to antibiotics, or subsequent analysis of pathologic specimens, eight patients were felt to have acute bacterial pneumonia, while the remaining 13 were felt to have an alternative cause of their infiltrate. Cultures of the protected brush specimen of the involved lung in all eight cases of bacterial pneumonia grew one or more organisms at greater than 100 colony-forming units (cfu) per ml, while only one of the 13 cases of non-pneumonia had a culture from the involved area with greater than 100 cfu per ml (p < 0.001). The non-involved area always grew fewer organisms than the involved area, and in 16 cases, there was no growth from the specimen obtained from the non-involved area. abstract_id: PUBMED:7497774 Reappraisal of distal diagnostic testing in the diagnosis of ICU-acquired pneumonia. Background: The thresholds of the diagnostic procedures performed to diagnose ICU-acquired pneumonia (IAP) are either speculative or incompletely tested. Purpose: To evaluate the best threshold of protected specimen brush (PSB), plugged telescoping catheter (PTC), BAL culture (BAL C), and direct examination of cytocentrifugated lavage fluid (BAL D) to diagnose IAP. Each mechanically ventilated patient with suspected IAP underwent bronchoscopy successively with PSB, PTC, and BAL in the lung segment identified radiographically. Population: One hundred twenty-two episodes of suspected IAP (occurring in 26% of all mechanically ventilated patients) were studied. Forty-five patients had definite IAP, and 58 had no IAP. Diagnosis was uncertain in 19 cases. Results: Using the classic thresholds, sensitivity was 67% for PSB, 54% for PTC, 59% for BAL D, and 77% for BAL C. Specificity was 88% for PSB, 77% for PTC, 98% for BAL D, and 77% for BAL C. We used receiver operating characteristic methods to reappraise the thresholds. Decreasing the thresholds to 500 cfu/mL for PSB, 10^2 cfu/mL for PTC, 2% of cells containing bacteria for BAL D, and 4 × 10^3 cfu/mL for BAL C increased the sensitivities (plus 14%, 23%, 25%, and 10%, respectively) and moderately decreased the specificities (minus 4%, 9%, 2%, and 4%, respectively) of the four examinations. The association of PSB with a 500 cfu/mL threshold and BAL D with a 2% threshold recovered all but one episode of pneumonia (sensitivity 96 ± 4%) with an 84 ± 10% specificity. For a similar ICU population, these "best" thresholds increased negative predictive value with a minimal decrease of positive predictive value. They need to be confirmed in multiple ICU settings in a prospective fashion.
abstract_id: PUBMED:2371075 Prospective evaluation of the protected specimen brush for the diagnosis of pulmonary infections in ventilated newborns. The precise diagnosis of lower respiratory tract infection in the critically ill newborn remains a difficult challenge. The bronchoscopic protected specimen brush (PSB) is a reliable method in intubated adults. Because the bronchoscopic procedure is not generally available for young children, Zucker proposed a blind technique for introducing the PSB into the distal airways. His results were promising but were not compared with any bacteriologic reference method. Therefore, we wanted to evaluate this technique in comparison with open lung biopsy (OLB) when it could be ethically accomplished. Eleven PSB specimens were collected simultaneously with an OLB. The sensitivity of the PSB procedure was 100%, its specificity 88%, its positive predictive value 66%, and its negative predictive value 100%. There were no complications secondary to the PSB procedure. In this short study, the PSB procedure using a blind technique proved safe and feasible for obtaining uncontaminated specimens in intubated and ventilated newborns, and largely accurate in identifying the bacterial etiologic agent of lower respiratory tract infection. abstract_id: PUBMED:3177397 Diagnosis of nosocomial bacterial pneumonia in intubated patients undergoing ventilation: comparison of the usefulness of bronchoalveolar lavage and the protected specimen brush. Purpose: To compare the usefulness of specimens recovered using a protected specimen brush and those recovered by bronchoalveolar lavage in the diagnosis of nosocomial pneumonia occurring in intubated patients undergoing ventilation, we performed both procedures in patients suspected of having pneumonia because of the presence of a new pulmonary infiltrate and purulent tracheal secretions. Patients And Methods: Twenty-one patients (16 men and five women) with an average age of 57 ± 12 years were studied. They had been receiving mechanical ventilation for 8 ± 6 days before inclusion in the trial. The clinical suspicion for nosocomial bacterial pneumonia was high in these patients. Fiberoptic bronchoscopy was performed in each patient. Bronchoscopy specimens were obtained by a protected specimen brush and by bronchoalveolar lavage, and were then processed for quantitative bacterial and fungal culture using standard methods. Total cell counts were performed on an aliquot of resuspended original lavage fluid. Differential cell counts were made on at least 500 cells. In addition, 300 cells were examined at high-power magnification, and the percentage of cells containing intracellular microorganisms and the average number of extracellular organisms per oil-immersion field were determined. Results: Quantitative cultures of specimens recovered using the protected specimen brush were positive (more than 10^3 colony-forming units [cfu]/ml) in five of five patients with subsequently confirmed pneumonia, and negative (less than 10^3 cfu/ml) in 13 of 13 patients without bacterial pneumonia, but results were not available until 24 to 48 hours after the procedure. Quantification of intracellular organisms in cells recovered by lavage was also useful in distinguishing patients with pneumonia (more than 25 percent of cells with intracellular organisms in five of five patients) from those without pneumonia (less than 15 percent of cells with intracellular organisms in all cases), and results were available immediately.
In contrast, quantitative culture of lavage fluid and differential cell counts were of little value in identifying infected patients. Conclusion: The protected specimen brush and microscopic identification of intracellular organisms in cells recovered by lavage yield useful and complementary information, and together permit rapid and specific treatment of most patients with nosocomial pneumonia. Answer: The protected specimen brush (PSB) is considered one of the standard methods for diagnosing ventilator-associated pneumonia (VAP) in ICU patients. However, its reproducibility has been questioned in several studies. One study found that while PSB could recover the same microorganisms, indicating good qualitative reproducibility, the quantitative distinction between positive and negative results based on the 10^3 cfu/ml threshold was less reproducible. In 24% of the microorganisms recovered and in 16.7% of episodes of suspected ICU-acquired pneumonia (IAP), consecutive samples yielded results on either side of the 10^3 cfu/ml cutoff, suggesting poor repeatability of PSB in suspected IAP (PUBMED:8325050). Another study compared "blind" PSB sampling to bronchoscopic-directed sampling and found an 85% quantitative agreement between the two methods, suggesting that blind PSB sampling could be a useful, cost-effective, and minimally invasive method for diagnosing VAP (PUBMED:7606959). A further study evaluated the reliability of quantitative cultures of PSB after freezing and concluded that specimens could be frozen at -80 degrees C with good reliability, except for certain bacteria like Streptococcus pneumoniae and Escherichia coli, allowing for PSB procedures to be performed around the clock (PUBMED:8564144). In contrast, another study found that the diagnostic yield of PSB was 60%, while that of protected transbronchial needle aspiration (PTBNA) was 73%, suggesting that PSB may not be as reliable as other methods (PUBMED:3631729). Additionally, a study comparing the plugged telescoping catheter (PTC) with PSB in cancer patients undergoing mechanical ventilation found that the overall agreement between the techniques was 87%, with PTC having a sensitivity of 67% and a specificity of 93% when compared to PSB (PUBMED:10378562). Lastly, a study that reappraised the thresholds for diagnosing IAP found that decreasing the thresholds for PSB increased sensitivity and only moderately decreased specificity, suggesting that the classic thresholds might not be optimal and that the results of PSB must be interpreted with caution (PUBMED:7497774).
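To see why the mean post-freezing drop of about 1.00 log10 reported in PUBMED:8564144 matters at the 10^3 cfu/ml threshold, consider a worked example (the specimen value here is hypothetical, not taken from the abstract). A specimen culturing at 5 × 10^3 cfu/ml when plated immediately sits at

\[
\log_{10}(5 \times 10^{3}) \approx 3.7,
\]

and a 1.0 log10 loss brings it to about 2.7, i.e., roughly 5 × 10^2 cfu/ml, below the cutoff of \(\log_{10}(10^{3}) = 3\). Any specimen within one order of magnitude above the threshold can therefore flip from positive to negative after freezing, which is consistent with the 88% (rather than 100%) sensitivity of the delayed cultures and with the larger losses reported for S. pneumoniae and E. coli.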
Instruction: CT findings of persistent pure ground glass opacity: can we predict the invasiveness? Abstracts: abstract_id: PUBMED:37180668 CT-based three-dimensional invasiveness analysis of adenocarcinoma presenting as pure ground-glass nodules. Background: We investigated computed tomography (CT) image differences between non-invasive adenocarcinomas (NIAs) and invasive adenocarcinomas (IAs) presenting as pure ground glass nodules (GGNs). Methods: From 2013 to 2019, 48 pure GGNs were surgically resected in 45 patients. Of these, 40 were pathologically diagnosed as non-small cell lung cancers (NSCLCs). We assessed them using the Synapse Vincent (Fujifilm Co., Ltd., Tokyo, Japan) three-dimensional (3D) analysis system; we drew histograms of the CT densities. We calculated the maximum, minimum, mean, and standard deviation of the densities. The proportions of GGNs of high CT density were compared between the two groups. The diagnostic performance was investigated via receiver operating characteristic (ROC) curve analysis. Results: Of the 40 pure GGNs, 20 were NIAs (4 adenocarcinomas in situ and 16 minimally invasive adenocarcinomas) and 20 were IAs. Significant correlations were evident between histological invasiveness and the maximum and mean CT densities and the standard deviation. Neither the nodule volume nor the minimum CT density significantly predicted invasiveness. A CT volume density proportion > -300 Hounsfield units optimally predicted the invasiveness of pure GGNs; the cutoff was 5.41%, with a sensitivity of 85% and a specificity of 95%. Conclusions: CT density reflected the invasiveness of pure GGNs. A CT volume density proportion > -300 Hounsfield units may significantly predict histological invasiveness. abstract_id: PUBMED:32348187 CT Characteristics for Predicting Invasiveness in Pulmonary Pure Ground-Glass Nodules. OBJECTIVE. The objective of our study was to investigate the differences in the CT features of atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA) manifesting as a pure ground-glass nodule (pGGN) with the aim of determining parameters predictive of invasiveness. MATERIALS AND METHODS. A total of 161 patients with 172 pGGNs (14 AAHs, 59 AISs, 68 MIAs, and 31 IAs) were retrospectively enrolled. The following CT features of each histopathologic subtype of nodule were analyzed and compared: lesion location, diameter, area, shape, attenuation, uniformity of density, margin, nodule-lung interface, and internal and surrounding changes. RESULTS. ROC curves revealed that nodule diameter and area (cutoff value, 10.5 mm and 86.5 mm²; sensitivity, 87.1% and 87.1%; specificity, 70.9% and 65.2%) were significantly larger in IAs than in AAHs, AISs, and MIAs (p < 0.001), whereas the latter three were similar in size (p > 0.050). CT attenuation higher than -632 HU in pGGNs indicated invasiveness (sensitivity, 78.8%; specificity, 59.8%). As opposed to noninvasive pGGNs (AAHs and AISs), invasive pGGNs (MIAs and IAs) usually had heterogeneous density, irregular shape, coarse margin, lobulation, spiculation, pleural indentation, and dilated or distorted vessels (each, p < 0.050). Multivariate analysis showed that mean CT attenuation and presence of lobulation were predictors for invasive pGGNs (p ≤ 0.001). CONCLUSION.
The likelihood of invasiveness is greater in pGGNs with larger size (> 10.5 mm or > 86.5 mm²), higher attenuation (> -632 HU), heterogeneous density, irregular shape, coarse margin, spiculation, lobulation, pleural indentation, and dilated or distorted vessels. abstract_id: PUBMED:28267551 Quantitative CT analysis of pulmonary pure ground-glass nodule predicts histological invasiveness. Objective: To assess whether quantitative computed tomography (CT) can help predict histological invasiveness of pulmonary adenocarcinoma appearing as pure ground glass nodules (pGGNs). Methods: A total of 110 pulmonary pGGNs were retrospectively evaluated and pathologically classified as pre-invasive lesions, minimally invasive adenocarcinoma (MIA) and invasive pulmonary adenocarcinoma (IPA). Maximum nodule diameters, largest cross-sectional areas, volumes, mean CT values, weights, and CT attenuation values at the 0th, 2nd, 5th, 25th, 50th, 75th, 95th, 98th and 100th percentiles on the histogram, as well as the 2nd-to-98th, 5th-to-95th, 25th-to-75th, and 0th-to-100th slopes, were compared among the three groups. Results: Of the 110 pGGNs, 50, 28, and 32 were pre-invasive lesions, MIA, and IPA, respectively. Maximum nodule diameters, largest cross-sectional areas, and mass weights were significantly larger in the IPA group than in pre-invasive lesions. The 95th, 98th, and 100th percentiles, and the 2nd-to-98th, 25th-to-75th, and 0th-to-100th slopes, were significantly different between pre-invasive lesions and MIA or IPA. Logistic regression analysis showed that the maximum nodule diameter (OR = 1.21, 95% CI: 1.071-1.366, p < 0.01) and the 100th percentile on the histogram (OR = 1.02, 95% CI: 1.009-1.032, p < 0.001) independently predicted histological invasiveness. Conclusions: Quantitative analysis of CT imaging can predict histological invasiveness of pGGNs, especially the maximum nodule diameter and the 100th percentile on the CT number histogram; this can inform long-term follow-up and selective surgical management. abstract_id: PUBMED:25773846 CT findings of persistent pure ground glass opacity: can we predict the invasiveness? Background: To investigate whether CT findings can predict the invasiveness of persistent cancerous pure ground glass opacity (pGGO) by correlating the CT imaging features of persistent pGGO with pathological changes. Materials And Methods: Ninety-five patients with persistent pGGOs were included. Three radiologists evaluated the morphologic features of these pGGOs on high-resolution CT (HRCT). Binary logistic regression was used to assess the association between CT findings and histopathological classification (pre-invasive and invasive groups). Receiver operating characteristic (ROC) curve analysis was performed to evaluate the diagnostic performance of diameters. Results: A total of 105 pGGOs were identified. Between the pre-invasive group (atypical adenomatous hyperplasia, AAH, and adenocarcinoma in situ, AIS) and the invasive group (minimally invasive adenocarcinoma, MIA, and invasive lung adenocarcinomas, ILA), there were significant differences in diameter, spiculation and vessel dilatation (p < 0.05). No difference was found in air bronchogram, bubble lucency, lobulated margin, pleural indentation or vascular convergence (p > 0.05). The optimal threshold value of the diameters to predict the invasiveness of pGGO was 12.50 mm. Conclusions: HRCT features can predict the invasiveness of persistent pGGO.
A pGGO diameter greater than 12.50 mm and the presence of spiculation and vessel dilatation are important factors for differentiating invasive adenocarcinoma from pre-invasive cancerous lesions. abstract_id: PUBMED:28616268 Tumor invasiveness defined by IASLC/ATS/ERS classification of ground-glass nodules can be predicted by quantitative CT parameters. Background: To investigate the potential value of CT parameters to differentiate ground-glass nodules between noninvasive adenocarcinoma and invasive pulmonary adenocarcinoma (IPA) as defined by the IASLC/ATS/ERS classification. Methods: We retrospectively reviewed 211 patients with pathologically proven stage 0-IA lung adenocarcinoma appearing as subsolid nodules from January 2012 to January 2013, including 137 pure ground glass nodules (pGGNs) and 74 part-solid nodules (PSNs). Pathological data were classified under the 2011 IASLC/ATS/ERS classification. Both quantitative and qualitative CT parameters were used to determine tumor invasiveness, comparing noninvasive adenocarcinomas and IPAs. Results: There were 154 noninvasive adenocarcinomas and 57 IPAs. In pGGNs, CT size and area, one-dimensional mean CT value and bubble lucency were significantly different between noninvasive adenocarcinomas and IPAs on univariate analysis. Multivariate regression and ROC analysis revealed that CT size and one-dimensional mean CT value were predictive in distinguishing noninvasive adenocarcinomas from IPAs. Optimal cutoff values were 13.60 mm (sensitivity, 75.0%; specificity, 99.6%) and -583.60 HU (sensitivity, 68.8%; specificity, 66.9%). In PSNs, there were significant differences in CT size and area, solid component area, solid proportion, one-dimensional mean and maximum CT value, and three-dimensional (3D) mean CT value between noninvasive adenocarcinomas and IPAs on univariate analysis. Multivariate and ROC analysis showed that CT size and 3D mean CT value were significant differentiators. Optimal cutoff values were 19.64 mm (sensitivity, 53.7%; specificity, 93.9%) and -571.63 HU (sensitivity, 85.4%; specificity, 75.8%). Conclusions: For pGGNs, CT size and one-dimensional mean CT value are determinants of tumor invasiveness. For PSNs, tumor invasiveness can be predicted by CT size and 3D mean CT value. abstract_id: PUBMED:32676312 Determining the invasiveness of pure ground-glass nodules using dual-energy spectral computed tomography. Background: The present work aimed to investigate the clinical application of quantitative parameters generated in the unenhanced phase (UP) and venous phase (VP) of dual-energy spectral CT for differentiating the invasiveness of pure ground-glass nodules (pGGNs). Methods: Sixty-two patients with 66 pGGNs who underwent preoperative dual-energy spectral CT in the UP and VP were evaluated retrospectively. Nodules were divided into three groups based on pathology: adenocarcinoma in situ (AIS, n=19), minimally invasive adenocarcinoma (MIA, n=22) (both in the preinvasive lesion group) and invasive adenocarcinoma (IA, n=25). The iodine concentration (IC) and water content (WC) in nodules were measured on material decomposition images. The nodule CT numbers and slopes (k) were measured on monochromatic images. All measurements, including the maximum diameter of the nodules, were statistically compared between the AIS-MIA group and the IA group. Results: There were significant differences in WC in the VP between the AIS-MIA group and the IA group (P < 0.05).
The CT attenuation values of the 40-140 keV monochromatic images in the UP and VP were significantly higher for the invasive nodules. Logistic regression analysis showed that the maximum nodule diameter [odds ratio (OR) = 1.21, 95% CI: 1.050-1.400, P < 0.01] and the CT number on 130 keV images in the venous phase (OR = 1.03, 95% CI: 1.014-1.047, P < 0.001) independently predicted histological invasiveness. Conclusions: The quantitative parameters of dual-energy spectral CT in the unenhanced phase and venous phase provide useful information for differentiating the preinvasive lesion group from the IA group of pGGNs, especially the maximum nodule diameter and the CT number on the 130 keV images in the venous phase. abstract_id: PUBMED:37328382 Meta-analysis of the correlation between CT-based features and invasive properties of pure ground-glass nodules. Several studies have revealed that computed tomography (CT) features can distinguish the invasive properties of pure ground-glass nodules (pGGNs). However, the imaging parameters related to the invasive properties of pGGNs are unclear. This meta-analysis was designed to decipher the correlation between the invasiveness of pGGNs and CT-based features, and ultimately to support rational clinical decision-making. We searched a series of databases, including PubMed, Embase, Web of Science, Cochrane Library, Scopus, Wanfang, CNKI, VIP, and CBM, up to September 20, 2022, for eligible publications in Chinese or English. The meta-analysis was implemented with Stata 16.0 software. Ultimately, 17 studies published between 2017 and 2022 were included. According to the meta-analysis, we observed a larger maximum lesion size in invasive adenocarcinoma (IAC) than in preinvasive lesions (PIL) [SMD = 1.37, 95% CI (1.07-1.68), P < 0.05]. Meanwhile, IAC also showed increased mean CT values [SMD = 0.71, 95% CI (0.35, 1.07), P < 0.05], a higher incidence of the pleural traction sign [OR = 1.94, 95% CI (1.24, 3.03), P < 0.05], and a higher incidence of spiculation [OR = 1.55, 95% CI (1.05, 2.29), P < 0.05] in comparison with PIL. Nevertheless, IAC and PIL exhibited no significant differences in vacuole sign, air bronchogram, regular shape, lobulation and vascular convergence sign (all P > 0.05). Therefore, IAC and PIL manifested different CT features of pGGNs. The maximum diameter of lesions, mean CT value, pleural traction sign and spiculation are important indicators for distinguishing IAC from PIL. Reasonable use of these features can be helpful in the management of pGGNs. abstract_id: PUBMED:30976552 Quantitative features can predict further growth of persistent pure ground-glass nodule. Background: To evaluate whether quantitative features of persistent pure ground-glass nodules (PGGN) on the initial computed tomography (CT) scans can predict further nodule growth. Methods: This retrospective study included 59 patients with 101 PGGNs from 2011 to 2012, who received regular CT follow-up for lung nodule surveillance. Nineteen quantitative image features consisting of 8 volumetric and 11 histogram parameters were calculated to detect lung nodule growth. For the extraction of the quantitative features, semi-automatic GrowCut segmentation was implemented on chest CT images on the 3D Slicer platform. Univariate and multivariate analyses were performed to identify risk factors for nodule growth.
Results: With a median follow-up of 52 months, nodule growth was detected in 10 nodules by radiological assessment and in 16 nodules by quantitative features. In univariate analysis, the 3D maximum diameter (MD), volume, mass, surface area, 90th percentile, and standard deviation value (SD) of PGGNs on the initial CT scan were significantly different between stable nodules and nodules with further growth. In multivariate analysis, MD [hazard ratio (HR), 3.75; 95% confidence interval (CI), 2.14-6.55] and SD (HR, 2.06; 95% CI, 1.35-3.14) were independent predictors of further nodule growth. The area under the curve was 0.896 (95% CI: 0.820-0.948) for MD with a cut-off value of 10.2 mm and 0.813 (95% CI: 0.723-0.883) for SD with a cut-off of 50.0 Hounsfield units (HU). Moreover, the growth rate of PGGNs with MD > 10.2 mm and SD > 50.0 HU was 55.6% (n=15). Conclusions: Based on the initial CT scan, these quantitative features can predict PGGN growth more precisely. PGGNs with MD > 10.2 mm and SD > 50.0 HU may require close follow-up or surgical intervention owing to the high incidence of growth. abstract_id: PUBMED:32711984 Computed tomography density is not associated with pathological tumor invasion for pure ground-glass nodules. Objective: Pure ground-glass nodules are considered to be radiologically noninvasive in lung adenocarcinoma. However, some pure ground-glass nodules are found to be invasive adenocarcinoma pathologically. This study aims to identify the computed tomography parameters distinguishing invasive adenocarcinoma from adenocarcinoma in situ and minimally invasive adenocarcinoma. Methods: From May 2011 to December 2015, patients with completely resected adenocarcinoma appearing as pure ground-glass nodules were reviewed. To evaluate the association between computed tomography features and the invasiveness of pure ground-glass nodules, logistic regression analyses were conducted. Results: Among 432 enrolled patients, 118 (27.3%) were classified as adenocarcinoma in situ, 213 (49.3%) as minimally invasive adenocarcinoma, and 101 (23.4%) as invasive adenocarcinoma. There was no postoperative recurrence for patients with pure ground-glass nodules. Logistic regression analyses demonstrated that computed tomography size was the only independent radiographic factor associated with adenocarcinoma in situ (odds ratio, 47.165; 95% confidence interval, 19.279-115.390; P < .001), whereas computed tomography density was not (odds ratio, 1.002; 95% confidence interval, 0.999-1.005; P = .127). Further analyses revealed that there was no distributional difference in computed tomography density among the 3 groups (P = .173). Even after propensity score matching for adenocarcinoma in situ/minimally invasive adenocarcinoma and invasive adenocarcinoma, no significant difference in computed tomography density was observed (P = .741). Subanalyses of pure ground-glass nodules 1 cm or more in size also indicated similar results. Conclusions: In patients with pure ground-glass nodules, computed tomography size was the only radiographic parameter associated with tumor invasion. Measuring computed tomography density provided no advantage in differentiating invasive adenocarcinoma from adenocarcinoma in situ and minimally invasive adenocarcinoma. abstract_id: PUBMED:32334177 Mediastinal lymph node metastases in lung cancer presenting as pure ground-glass nodules: A surgical case report.
Introduction: Generally, lung cancer presenting as ground-glass nodules is associated with an early stage and good prognosis. However, we herein report a rare case of pure ground-glass nodules with mediastinal lymph node metastases. Presentation Of Case: A 69-year-old man underwent video-assisted thoracic surgery right upper lobectomy with mediastinal lymph node dissection due to multifocal ground-glass nodules in the right upper lobe of the lung. Histopathologically, six lung adenocarcinomas were present simultaneously. Furthermore, we detected mediastinal lymph node metastases containing a micropapillary component. The lung lesion containing the micropapillary component was a pure ground-glass nodule adjoining a pulmonary bulla on CT findings. Discussion: Generally, lung cancer presenting as pure ground-glass nodules is associated with an early stage and good prognosis. However, the necessity of evaluating the mediastinal lymph nodes in pure ground-glass nodules is controversial. It is reported that lung cancer adjoining the wall of a bulla tends to have a poor prognosis, even when small in size. Therefore, ground-glass nodules with metastases might have a different pathogenesis than other nodules. Conclusion: Adenocarcinoma appearing as pure ground-glass nodules is associated with early-stage lung cancer and a good prognosis. However, the findings in our patient indicate the importance and necessity of evaluating the mediastinal lymph nodes for metastases intraoperatively. Answer: CT findings can indeed help predict the invasiveness of persistent pure ground glass opacities (pGGOs). Several studies have identified CT features that correlate with the histological invasiveness of pGGOs, which can be used to differentiate between pre-invasive and invasive adenocarcinomas. For instance, a study found that the maximum diameter, spiculation, and vessel dilatation on high-resolution CT (HRCT) were significantly different between pre-invasive and invasive groups, with a diameter threshold of 12.50 mm being optimal for predicting invasiveness (PUBMED:25773846). Another study reported that quantitative CT analysis could predict histological invasiveness, with maximum nodule diameter and the 100th percentile on the CT number histogram being significant predictors (PUBMED:28267551). Furthermore, a meta-analysis revealed that invasive adenocarcinomas showed a larger maximum lesion size, higher mean CT values, and higher incidences of the pleural traction sign and spiculation compared with preinvasive lesions (PUBMED:37328382). Additionally, quantitative features such as the 3D maximum diameter and standard deviation value of pGGOs on initial CT scans were identified as independent predictors of further nodule growth, which could indicate invasiveness (PUBMED:30976552). However, it is important to note that one study found that CT density was not associated with pathological tumor invasion for pure ground-glass nodules, suggesting that CT size may be the only radiographic parameter associated with tumor invasion (PUBMED:32711984). In summary, while CT features such as nodule size, shape, density, and associated signs like spiculation and pleural traction can be indicative of invasiveness in pGGOs, the predictive value of these features can vary, and not all CT parameters may be equally useful in assessing invasiveness.
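The volumetric predictor in PUBMED:37180668 can be written out explicitly. Let V be the set of voxels in the segmented nodule and HU(v) the attenuation of voxel v; the formula below is reconstructed from the abstract's description rather than quoted from the paper:

\[
P_{>-300} = \frac{\lvert \{ v \in V : \mathrm{HU}(v) > -300 \} \rvert}{\lvert V \rvert}, \qquad \text{classify as invasive if } P_{>-300} > 5.41\%.
\]

With this 5.41% cutoff, the abstract reports 85% sensitivity and 95% specificity for separating invasive from non-invasive pure GGNs.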
Instruction: NHS Trusts' clinical research activity and overall CQC performance - Is there a correlation? Abstracts: abstract_id: PUBMED:38286268 Genetic research on Nance-Horan syndrome caused by a novel mutation in the NHS gene. N/A abstract_id: PUBMED:22229851 Phenotype-genotype correlation in potential female carriers of X-linked developmental cataract (Nance-Horan syndrome). Purpose: To correlate clinical examination with underlying genotype in asymptomatic females who are potential carriers of X-linked developmental cataract (Nance-Horan syndrome). Methods: An ophthalmologist blind to the pedigree performed a comprehensive ophthalmic examination of 16 available family members (two affected and six asymptomatic females, five affected and three asymptomatic males). Facial features were also noted. Venous blood was collected for sequencing of the NHS gene. Results: All seven affected family members had congenital or infantile cataract and facial dysmorphism (long face, bulbous nose, abnormal dentition). The six asymptomatic females ranged in age from 4 to 35 years. Four had posterior Y-suture centered lens opacities; these four also exhibited the facial dysmorphism of the seven affected family members. The fifth asymptomatic girl had scattered fine punctate lens opacities (not centered on the Y-suture) while the sixth had clear lenses, and neither exhibited the facial dysmorphism. A novel NHS mutation (p.Lys744AsnfsX15 [c.2232delG]) was found in the seven patients with congenital or infantile cataract. This mutation was also present in the four asymptomatic girls with Y-centered lens opacities but not in the other two asymptomatic girls or in the three asymptomatic males (who had clear lenses). Conclusions: Lens opacities centered around the posterior Y-suture in the context of certain facial features were sensitive and specific clinical signs of carrier status for NHS mutation in asymptomatic females. Lens opacities that did not have this characteristic morphology in a suspected female carrier were not a carrier sign, even in the context of her affected family members. abstract_id: PUBMED:31755796 Great clinical variability of Nance Horan syndrome due to deleterious NHS mutations in two unrelated Spanish families. Background: Nance-Horan syndrome (NHS) is a rare X-linked congenital disorder caused by mutations in the NHS gene. Clinical manifestations include congenital cataracts, facial and dental dysmorphism and, in some cases, intellectual disability. The aim of the present work was to identify the genetic cause of this disease in two unrelated Spanish NHS families and to determine the relative involvement of this gene in the pathogenesis. Materials and methods: Four members of a two-generation family, three males and one female (Family 1), and seven members of a three-generation family, two males and five females (Family 2), were recruited, and their index cases were screened for mutations in the NHS gene and 26 genes related to congenital ocular anomalies by NGS (Next Generation Sequencing). Results: Two pathogenic variants were found in the NHS gene: a nonsense mutation (p.Arg373X) and a frameshift mutation (p.His669ProfsX5). These mutations were found in the two unrelated NHS families with different clinical manifestations. Conclusions: In the present study, we identified two truncating mutations (one of them novel) in the NHS gene, associated with NHS.
Given the wide clinical variability of this syndrome, NHS may be difficult to detect in individuals with subtle clinical manifestations or when congenital cataracts are the primary clinical manifestation, which suggests that it may be underdiagnosed. A combination of genetic studies and clinical examinations is essential for optimizing clinical diagnosis. abstract_id: PUBMED:37221585 Nance-Horan Syndrome: characterization of dental, clinical and molecular features in three new families. Background: Nance-Horan syndrome (NHS; MIM 302,350) is an extremely rare X-linked dominant disease characterized by ocular and dental anomalies, intellectual disability, and facial dysmorphic features. Case Presentation: We report on five affected males and three carrier females from three unrelated NHS families. In Family 1, the index case (P1), showing bilateral cataracts, iris heterochromia, microcornea, mild intellectual disability, and dental findings including Hutchinson incisors, supernumerary teeth, and bud-shaped molars, received a clinical diagnosis of NHS, and targeted NHS gene sequencing revealed a novel pathogenic variant, c.2416C>T; p.(Gln806*). In Family 2, the index case (P2), presenting with global developmental delay, microphthalmia, cataracts, and a ventricular septal defect, underwent SNP array testing, and a novel deletion encompassing 22 genes, including the NHS gene, was detected. In Family 3, two half-brothers (P3 and P4) and their maternal uncle (P5) had congenital cataracts and mild to moderate intellectual deficiency. P3 also had autistic and psychobehavioral features. Dental findings included notched incisors, bud-shaped permanent molars, and supernumerary molars. Duo-WES analysis of the half-brothers showed a novel hemizygous deletion, c.1867delC; p.(Gln623ArgfsTer26). Conclusions: Dental professionals can be the first-line specialists involved in the diagnosis of NHS due to its distinct dental findings. Our findings broaden the spectrum of genetic etiopathogenesis associated with NHS and aim to raise awareness among dental professionals. abstract_id: PUBMED:23566852 A Turkish family with Nance-Horan Syndrome due to a novel mutation. Nance-Horan Syndrome (NHS) is a rare X-linked syndrome characterized by congenital cataract, which leads to profound vision loss, characteristic dysmorphic features and specific dental anomalies. Microcornea, microphthalmia and mild or moderate mental retardation may accompany these features. Heterozygous females often manifest similarly but with less severe features than affected males. We describe two brothers who have the NHS phenotype and their carrier mother, who had microcornea but not cataract. We identified a previously unreported frameshift mutation (c.558insA) in exon 1 of the NHS gene in these patients and their mother, which is predicted to result in the incorporation of 11 aberrant amino acids prior to a stop codon (p.E186Efs11X). We also discuss the genotype-phenotype correlation in light of the relevant literature. abstract_id: PUBMED:32303606 Low grade mosaicism in hereditary haemorrhagic telangiectasia identified by bidirectional whole genome sequencing reads through the 100,000 Genomes Project clinical diagnostic pipeline. N/A abstract_id: PUBMED:17451191 Prenatal detection of congenital bilateral cataract leading to the diagnosis of Nance-Horan syndrome in the extended family. Objectives: To describe a family in which it was possible to perform prenatal diagnosis of Nance-Horan Syndrome (NHS). Methods: The fetus was evaluated by second-trimester ultrasound.
The family underwent genetic counseling and ophthalmologic evaluation. The NHS gene was sequenced. Results: Ultrasound demonstrated fetal bilateral congenital cataract. Clinical evaluation revealed other family members with cataract, leading to the diagnosis of NHS in the family. Sequencing confirmed a frameshift mutation (3908del11bp) in the NHS gene. Conclusion: Evaluation of prenatally diagnosed congenital cataract should include a multidisciplinary approach, combining experience and input from sonographer, clinical geneticist, ophthalmologist, and molecular geneticist. abstract_id: PUBMED:25091991 Identification of a novel mutation in a Chinese family with Nance-Horan syndrome by whole exome sequencing. Objective: Nance-Horan syndrome (NHS) is a rare X-linked disorder characterized by congenital nuclear cataracts, dental anomalies, and craniofacial dysmorphisms. Mental retardation was present in about 30% of the reported cases. The purpose of this study was to investigate the genetic and clinical features of NHS in a Chinese family. Methods: Whole exome sequencing analysis was performed on DNA from an affected male to scan for candidate mutations on the X-chromosome. Sanger sequencing was used to verify these candidate mutations in the whole family. Clinical and ophthalmological examinations were performed on all members of the family. Results: A combination of exome sequencing and Sanger sequencing revealed a nonsense mutation c.322G>T (E108X) in exon 1 of the NHS gene, co-segregating with the disease in the family. The nonsense mutation led to the conversion of glutamic acid to a stop codon (E108X), resulting in truncation of the NHS protein. Multiple sequence alignments showed that codon 108, where the mutation (c.322G>T) occurred, was located within a phylogenetically conserved region. The clinical features in all affected males and female carriers are described in detail. Conclusions: We report a nonsense mutation c.322G>T (E108X) in a Chinese family with NHS. Our findings broaden the spectrum of NHS mutations and provide molecular insight into future NHS clinical genetic diagnosis. abstract_id: PUBMED:29402928 A novel small deletion in the NHS gene associated with Nance-Horan syndrome. Nance-Horan syndrome is a rare X-linked recessive inherited disease with clinical features including severe bilateral congenital cataracts, characteristic facial and dental abnormalities. Data from Chinese Nance-Horan syndrome patients are limited. We assessed the clinical manifestations of a Chinese Nance-Horan syndrome pedigree and identified the genetic defect. Genetic analysis showed that 3 affected males carried a novel small deletion in the NHS gene, c.263_266delCGTC (p.Ala89TrpfsTer106), and 2 female carriers were heterozygous for the same variant. All 3 affected males presented with typical Nance-Horan syndrome features. One female carrier displayed lens opacities centered on the posterior Y-suture in both eyes, as well as mild dental abnormalities. We recorded the clinical features of a Chinese Nance-Horan syndrome family and broadened the spectrum of mutations in the NHS gene. abstract_id: PUBMED:28922055 Nance-Horan syndrome in females due to a balanced X;1 translocation that disrupts the NHS gene: Familial case report and review of the literature. The Nance-Horan syndrome is an X-linked disorder characterized by congenital cataract, facial features, microcornea, microphthalmia, and dental anomalies; most of the cases are due to NHS gene mutations on Xp22.13.
Heterozygous carrier females generally present less severe features, and up to 30% of the affected males have intellectual disability. We describe two patients, mother and daughter, manifesting Nance-Horan syndrome. The cytogenetic and molecular analyses demonstrated a 46,X,t(X;1)(p22.13;q22) karyotype in each of them. No copy-number genomic imbalances were detected by high-density microarray analysis. The mother had a preferential inactivation of the normal X chromosome; expression analysis did not detect any mRNA isoform of NHS. This is the first report of Nance-Horan syndrome due to a skewed X chromosome inactivation resulting from a balanced translocation t(X;1) that disrupts the NHS gene expression, with important implications for clinical presentation and genetic counseling. Answer: The provided abstracts do not contain any information regarding NHS Trusts' clinical research activity and overall Care Quality Commission (CQC) performance or their potential correlation. The abstracts focus on genetic research and clinical findings related to Nance-Horan Syndrome (NHS), which is a rare genetic disorder characterized by congenital cataracts, dental anomalies, and facial dysmorphisms, among other features. They discuss various mutations in the NHS gene, clinical variability, and the importance of genetic studies for diagnosis (PUBMED:38286268, PUBMED:22229851, PUBMED:31755796, PUBMED:37221585, PUBMED:23566852, PUBMED:17451191, PUBMED:25091991, PUBMED:29402928, PUBMED:28922055). To answer the question about the correlation between NHS Trusts' clinical research activity and overall CQC performance, one would need to look at studies or reports that specifically evaluate the performance of NHS Trusts in the UK, their engagement in clinical research, and the assessments made by the CQC, which is the independent regulator of health and social care in England. Such information is not provided in the abstracts given.
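The carrier-sign claim in PUBMED:22229851 ("sensitive and specific") can be made concrete with a small worked example. The Python sketch below computes sensitivity and specificity from the counts that can be read off the abstract (four mutation-positive asymptomatic girls, all with Y-suture-centered opacities; two mutation-negative girls, neither with the sign). The function and the 2x2 layout are our own illustration, not the authors' published calculation, and with only six subjects the estimates carry very wide uncertainty.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from 2x2 screening counts."""
    sensitivity = tp / (tp + fn)  # carriers correctly flagged by the sign
    specificity = tn / (tn + fp)  # non-carriers correctly cleared
    return sensitivity, specificity

# Counts read from the abstract (illustrative reconstruction, not published figures):
# 4 mutation carriers, all with Y-suture opacities; 2 non-carriers, neither with the sign.
sens, spec = sensitivity_specificity(tp=4, fn=0, tn=2, fp=0)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 100% and 100% in this tiny sample

In a sample this small, point estimates of 100% are compatible with much lower true values, which is why the abstract's qualitative wording is appropriate.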
Instruction: Genital mycoplasma infections among women in an urban community of northern Nigeria: do we need to search for them? Abstracts: abstract_id: PUBMED:18788259 Genital mycoplasma infections among women in an urban community of northern Nigeria: do we need to search for them? Methods: To determine the incidence of genital Mycoplasma infection among females in Jos. High vaginal swab (HVS) and/or endocervical swab (ECS) samples were obtained from 476 females undergoing vaginal examinations along with other females who volunteered to enroll in the study. Samples were processed using standard laboratory procedures for the isolation of Mycoplasma species while information such as age, marital status, occupation and other clinical data were obtained using a questionnaire. The results obtained were analysed using SPSS 11.0 statistical methods and P values of 0.05 or less were considered significant. Results: The overall incidence of genital Mycoplasma infection was found to be 29.6% (n=141); M. hominis, 12.1% (n=57); U. urealyticum 9.4% (n=45); mixed infection, 6.7% (n=32), and other Mycoplasmas, 1.4% (n=7). The majority of the isolates were from those aged 20-35 years old (the most sexually active group); 83% (n=52) of those who presented with vaginal discharge were infected with Mycoplasma spp. (P < 0.05); also, the incidence of infection among the separated/divorced/widowed group was significantly higher than in the married group (P < 0.05). Conclusion: Mycoplasmas are common genital organisms, hence they should be sought from ECS, probably on a routine basis, for suspected genital tract infections. abstract_id: PUBMED:28299057 Genital mycoplasmas in women attending the Yaoundé University Teaching Hospital in Cameroon. Genital mycoplasmas are implicated in pelvic inflammatory diseases, puerperal infection, septic abortions, low birth weight, nongonococcal urethritis and prostatitis as well as spontaneous abortion and infertility in women. There is a paucity of data on colonisation of genital mycoplasma in women and their drug sensitivity patterns. The aim of our study was to determine the prevalence of genital mycoplasma (Ureaplasma urealyticum and Mycoplasma hominis) infection and their drug sensitivity patterns in women. A mycofast kit was used for biochemical determination of mycoplasma infection in 100 randomly selected female patients aged 19-57 years, attending the University of Yaoundé Teaching Hospital (UYTH) from March to June 2010. Informed consent was sought and gained before samples were collected. Genital mycoplasmas were found in 65 patients (65%) [95% CI=55.7-74.3%] and distributed as 41 (41%) [95% CI=31.4-50.6%] for U. urealyticum and 4 (4%) [95% CI=0.20-7.8%] for M. hominis while there was co-infection in 20 women (20%) [95% CI=12.16-27.84%]. In our study, 57 (57%) [95% CI=47.3-67%] had other organisms, which included C. albicans (19 [19%]), G. vaginalis (35 [35%]) and T. vaginalis (3 [3%]). Among the 65 women with genital mycoplasma, the highest co-infection was with G. vaginalis (33.8%). Pristinamycin was the most effective antibiotic (92%) and sulfamethoxazole the least effective (8%) against genital mycoplasmas. We conclude that genital mycoplasma is a problem in Cameroon and infected women should be treated together with their partners. abstract_id: PUBMED:1394388 Epidemic investigation of genital mycoplasma hominis infection in women The genital mycoplasma hominis infection in women was investigated at Xiaguan district, Nanjing, in June 1990.
Leucorrhea samples from 722 women were tested by ELISA for the antigen of M. hominis. The prevalences of genital M. hominis were not statistically different among age and occupation groups. Women who had been pregnant more than three times had a significantly higher infection rate, suggesting that induced abortion might provide opportunities for infection. The prevalence also differed significantly by contraceptive method, being lower among condom users and higher among intrauterine device users. abstract_id: PUBMED:31389367 An update on prevalence, diagnosis, treatment and emerging issues of genital mycoplasma infection in Indian women: A narrative review. Despite adequate treatment of reproductive tract infection, there is persistence of symptoms in some patients. This raises the possibility of the existence of other silent microbes with pathogenic potential. Apart from the common sexually transmitted organisms such as Chlamydia trachomatis and Neisseria gonorrhoeae, there are other silent and emerging pathogens, like genital mycoplasma, which have been associated with cervicitis, pelvic inflammatory disease, infertility, and pregnancy-related complications in women. Although these organisms were identified decades ago, they are still overlooked or ignored. There is a need to understand the role played by these organisms in Asian populations and their susceptibility to the standard line of treatment. Data on genital mycoplasma infections in Indian women are heterogeneous, with limited evidence of pathogenicity. Although known for their wide spectrum of reproductive morbidities in western counterparts, these microorganisms are yet to gain the attention of Indian clinicians and microbiologists. There is a paucity of adequate information in India regarding these infections, so Indian literature was compiled to get an overview of these pathogens, their association with reproductive morbidities, and their response to treatment. Thus, there is a need to explore genital mycoplasma infections in Indian women, especially in the arena of antimicrobial resistance among genital mycoplasma, which has the potential to become a major problem. A literature search with keywords focusing on "genital mycoplasma", "sexually transmitted infections India", "sexually transmitted mycoplasma", and "characteristic of mycoplasma" was carried out through computerized databases like PubMed, MEDLINE, Embase, and Google Scholar. abstract_id: PUBMED:23834822 Mycoplasma genitalium and preterm delivery at an urban community health center. Objective: To determine the prepartum prevalence of cervical Mycoplasma genitalium colonization and evaluate prospectively whether colonization is associated with preterm delivery among women from a racial/ethnic minority background with a high risk of delivering a low birth weight newborn and a high prevalence of sexually transmitted infections. Methods: In a prospective cohort study at an urban community health center in Roxbury, MA, USA, 100 women receiving routine prenatal care for singleton pregnancies were enrolled between August 2010 and December 2011. Endocervical samples were tested for M. genitalium, and delivery data were collected. Results: The prevalence of M. genitalium colonization at the first prenatal visit was 8.4%. The incidence of low birth weight was 16.7%. The incidence of preterm delivery among women who were known to have a live birth was 16.7%. The incidence of preterm delivery did not differ with respect to M. genitalium colonization.
The crude odds ratio for preterm delivery among women with M. genitalium colonization versus those without was 1.27 (95% confidence interval, 0.02-14.78). Conclusion: M. genitalium colonization was not associated with preterm delivery among women with a high incidence of low birth weight newborns and preterm delivery, and a high prevalence of sexually transmitted infections. abstract_id: PUBMED:18277432 Vaginal colonization by genital mycoplasmas in pregnant and non-pregnant women To compare vaginal colonization by genital mycoplasmas in pregnant and non-pregnant women and to determine the association between pregnancy and colonization by these microorganisms, samples of exocervix and endocervix from pregnant (n = 80) and non-pregnant (n = 65) women, from two health centers of Maracaibo, Zulia State, Venezuela, were processed. The Mycoplasma-Lyo kit (bioMérieux laboratories) was used for the culture and identification of genital mycoplasmas. In pregnant women, prevalences of 10% for M. hominis and 26.25% for Ureaplasma spp. were found; in non-pregnant women, the corresponding figures were 35.38% for M. hominis and 20% for Ureaplasma spp. Among pregnant women, Ureaplasma spp. was the most frequently isolated mycoplasma in both symptomatic and asymptomatic patients; while in the non-pregnant group, M. hominis was more common among the symptomatic patients; only one case (1.54%) was an asymptomatic carrier of Ureaplasma spp. The highest positivity percentages were obtained in primigravidas (48.71%) and during the second gestational trimester (34.21%). No statistically significant differences were found in vaginal colonization by genital mycoplasmas according to age, number of pregnancies or gestational trimester; but significant differences were found between presenting symptomatology and vaginal colonization by genital mycoplasmas. Genital mycoplasmas were isolated from gravid women at approximately the same recovery rate as in non-pregnant women, with M. hominis the most frequently isolated in non-pregnant women and Ureaplasma spp. in the pregnant group. abstract_id: PUBMED:31536699 The micro-ecology of genital tracts in women with infertility of chlamydia nature. The microbiocenosis of the genital tract was examined in 78 women with infertility of chlamydial etiology and in 33 women with inflammatory diseases of the organs of the reproductive system. Between the examined groups, differences were established in the indices of contamination of the genital tract with various infectious agents. In female patients with infertility of chlamydial etiology, an increased rate of detection of obligate anaerobic microflora and of manifestations of ureaplasmosis and mycoplasmosis was observed. In women with infertility, chlamydia infection is accompanied by the development of a pathological microbiocenosis, which in most of the female patients (39.7%) manifests as vaginosis. In the examined female patients with chronic inflammatory diseases of the organs of the small pelvis, the dysbiotic alterations of the genital organs manifested as non-specific vaginitis (45.5%). The micro-ecology of the genital tract in the examined women with chronic inflammatory diseases is characterized by an increased level of contamination of the mucous membrane with facultative anaerobic opportunistic microorganisms with pathogenic characteristics and a significantly decreased isolation rate of lactobacilli. abstract_id: PUBMED:28295282 Genital Mycoplasma infection among Mexican women with systemic lupus erythematosus. Objective: To assess the prevalence of genital Mycoplasma spp.
among women with systemic lupus erythematosus (SLE) and to identify factors associated with such infection. Methods: A cross-sectional study was conducted among patients with SLE and healthy women who attended a hospital in Puebla, Mexico, between July 29, 2014, and January 4, 2015. All participants were aged 18 years or older and sexually active. A structured interview assessed sociodemographic, obstetric, gynecologic, and clinical characteristics. Disease activity was evaluated using the Mexican SLE Disease Activity Index. Polymerase chain reaction was used to detect the presence of Mycoplasma spp. in genital samples. Results: Ureaplasma urealyticum was the only genital mycoplasma detected; it was present in 32 (24.6%) of 130 patients with SLE and 12 (12.8%) of 94 healthy women. Patients with SLE had increased odds of infection (odds ratio 2.120, 95% confidence interval 1.046-4.296). Among patients with SLE, multiparity was more common in those with U. urealyticum infection (P=0.043). Conclusion: One-quarter of women with SLE had genital infection with U. urealyticum. An association was found between infection and multiparity among women with SLE. abstract_id: PUBMED:4039970 Prevalence of Chlamydia trachomatis and genital mycoplasmas in asymptomatic women. To establish the prevalence of Chlamydia trachomatis, Mycoplasma hominis and Ureaplasma urealyticum in women attending a family planning and a prenatal clinic in Halifax, cervical swabs were obtained at the time of the first visit from 491 women who had no symptoms of genital infection. Among the women attending the family planning clinic M. hominis occurred in combination with C. trachomatis more frequently than expected (p less than 0.05). It occurred in the absence of U. urealyticum in only a few cases (13% of the occurrences in the family planning clinic and 6% of those in the prenatal clinic). C. trachomatis was significantly more prevalent in women under 25 years of age (p less than 0.04). However, mycoplasmas were as prevalent in women over 30 years as in those under 30. There were no significant differences in the infection rates of the organisms by trimester among pregnant women. More research is necessary for a proper understanding of the role of M. hominis and U. urealyticum in genitourinary infections and pregnancy outcomes. abstract_id: PUBMED:420275 Genital mycoplasmas and birth weight in offspring of primigravid women. The relationship between colonization of pregnant women by Ureaplasma and Mycoplasma and the outcome of pregnancy in a study of 104 women is reported. There were eight abortions but no multiple births or stillbirths. Excluding abortions, the mean gestational length was 39.6 weeks and the corrected mean birth weight was 3.28 kilograms. There was no relationship between maternal colonization by genital mycoplasmas and reduction in birth weight of the offspring, but five of the eight women who aborted harbored Ureaplasma, suggesting an increased trend toward abortion in women harboring Ureaplasma. Answer: The study conducted in an urban community of northern Nigeria found that the overall incidence of genital Mycoplasma infection among females was 29.6%, with M. hominis at 12.1%, U. urealyticum at 9.4%, mixed infection at 6.7%, and other Mycoplasmas at 1.4%. The majority of the isolates were from women aged 20-35 years old, which is the most sexually active group. 
Additionally, 83% of those who presented with vaginal discharge were infected with Mycoplasma spp., and the incidence of infection among the separated/divorced/widowed group was significantly higher than in the married group. These findings suggest that Mycoplasmas are common genital organisms and should be sought from endocervical swabs, possibly on a routine basis for suspected genital tract infections (PUBMED:18788259). Given the prevalence and potential health implications of genital Mycoplasma infections, it seems prudent to search for them, especially in populations with a high incidence of symptoms or in groups with higher observed rates of infection. The study's conclusion supports routine screening for these infections to better manage and treat affected individuals.
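Several of these abstracts report odds ratios with 95% confidence intervals (e.g., OR 2.120, 95% CI 1.046-4.296 in PUBMED:28295282, and a crude OR of 1.27, 95% CI 0.02-14.78 in PUBMED:23834822). A minimal Python sketch of the standard crude odds ratio and Wald confidence interval is shown below; the function name and layout are illustrative assumptions, not code from any of the cited studies.

import math

def crude_odds_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Crude odds ratio and 95% Wald confidence interval from 2x2 counts."""
    a = exposed_cases                      # infected, group 1
    b = exposed_total - exposed_cases      # not infected, group 1
    c = unexposed_cases                    # infected, group 2
    d = unexposed_total - unexposed_cases  # not infected, group 2
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    log_or = math.log(or_)
    lo = math.exp(log_or - 1.96 * se)
    hi = math.exp(log_or + 1.96 * se)
    return or_, lo, hi

# Counts reported in PUBMED:28295282: 32/130 SLE patients vs 12/94 controls infected.
print(crude_odds_ratio(32, 130, 12, 94))

Applied to the SLE study's reported counts, this gives a crude OR of about 2.23 (roughly 1.08-4.61). The published 2.120 (1.046-4.296) is in the same range; the small difference presumably reflects the authors' exact estimation method, which the abstract does not specify.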
Instruction: Is a lack of disgust something to fear? Abstracts: abstract_id: PUBMED:30265075 Effects of vicarious disgust learning on the development of fear, disgust, and attentional biases in children. Fear and disgust are defensive emotions that have evolved to protect us from harm. Whereas fear is thought to elicit an instinctive response to deal with immediate threat, disgust elicits immediate sensory rejection to avoid contamination. One mechanism through which disgust and fear may be linked is via attentional bias toward threat. Attentional bias is a well-established feature of anxiety disorders and is known to increase following vicarious fear learning. However, the contribution of vicarious learning to the development of disgust-related attentional biases is currently unknown. Furthermore, the influence of individual differences in disgust propensity and disgust sensitivity on fear and disgust responses has not been investigated in the context of vicarious learning. Therefore, 53 children aged 7-9 years were randomly assigned to receive either fear vicarious learning or disgust vicarious learning. Children's fear beliefs, disgust beliefs, avoidance preferences, and attentional bias were measured at baseline and postlearning. Findings demonstrated increased fear and disgust responding to stimuli following disgust and fear vicarious learning. Crucially, the study provided the first evidence that disgust vicarious learning can create an attentional bias for threat in children similar to that created via fear vicarious learning. However, there was no relationship between disgust propensity and sensitivity and vicariously acquired increases in fear, disgust, and attention. In conclusion, both fear and disgust vicarious learning can create attentional bias, allowing rapid detection of potentially harmful stimuli. This effect could contribute to fear development and is found even in children who are not particularly high in disgust proneness. (PsycINFO Database Record (c) 2019 APA, all rights reserved). abstract_id: PUBMED:37270955 Can immorality be contracted? Appraisals of moral disgust and contamination fear. While extant research underlines the role of disgust in obsessive-compulsive disorder (OCD) with contamination fear, less research attention has been devoted to moral disgust. This study endeavored to examine the types of appraisals that are elicited by moral disgust in comparison to core disgust, and to examine their associations with both contact and mental contamination symptoms. In a within-participants design, 148 undergraduate students were exposed to core disgust, moral disgust, and anxiety control elicitors via vignettes, and provided appraisal ratings of sympathetic magic, thought-action fusion and mental contamination, as well as compulsive urges. Measures of both contact and mental contamination symptoms were administered. Mixed modeling analyses indicated that core disgust and moral disgust elicitors both provoked greater appraisals of sympathetic magic and compulsive urges than anxiety control elicitors. Further, moral disgust elicitors elicited greater thought-action fusion and mental contamination appraisals than all other elicitors. Overall, these effects were greater in those with higher contamination fear. This study demonstrates how a range of contagion beliefs are evoked by the presence of 'moral contaminants', and that such beliefs are positively associated with contamination concerns. 
These results shed light on moral disgust as an important target in the treatment of contamination fear. abstract_id: PUBMED:34078243 Mechanisms underlying memory enhancement for disgust over fear. Disgust is remembered better than fear, despite both emotions being highly negative and arousing. But the mechanisms underlying this effect are not well understood. Therefore, we compared two proposed mechanisms underlying superior memory for disgust. According to the memory consolidation mechanism, it is harder (but crucial) to remember potentially contaminating vs. threatening stimuli. Hence, disgust elicits additional memory consolidation processes relative to fear. According to the attention mechanism, it takes longer to establish if disgust (relative to fear) stimuli are dangerous. Hence, people pay more attention to disgust during encoding. Both mechanisms could boost memory for disgust. Ninety-eight participants encoded disgust, fear, and neutral images whilst completing a simple task to measure attention. After 10- or 45-min delay, participants freely recalled the images. We found enhanced memory for disgust relative to fear after 10- and 45-min delay, but this effect was larger after 45 min. Participants paid more attention to disgust than fear images during encoding. However, mixed-effects models showed increased attention did not contribute to enhanced memory for disgust. Our results therefore support the memory consolidation mechanism. abstract_id: PUBMED:34244571 Generalization gradients for fear and disgust in human associative learning. Previous research indicates that excessive fear is a critical feature in anxiety disorders; however, recent studies suggest that disgust may also contribute to the etiology and maintenance of some anxiety disorders. It remains unclear if differences exist between these two threat-related emotions in conditioning and generalization. Evaluating different patterns of fear and disgust learning would facilitate a deeper understanding of how anxiety disorders develop. In this study, 32 college students completed threat conditioning tasks, including conditioned stimuli paired with frightening or disgusting images. Fear and disgust were divided into two randomly ordered blocks to examine differences by recording subjective US expectancy ratings and eye movements in the conditioning and generalization process. During conditioning, differing US expectancy ratings (fear vs. disgust) were found only on CS-, which may demonstrate that fear is associated with inferior discrimination learning. During the generalization test, participants exhibited greater US expectancy ratings to fear-related GS1 (generalized stimulus) and GS2 relative to disgust GS1 and GS2. Fear led to longer reaction times than disgust in both phases, and the pupil size and fixation duration for fear stimuli were larger than for disgust stimuli, suggesting that disgust generalization has a steeper gradient than fear generalization. These findings provide preliminary evidence for differences between fear- and disgust-related stimuli in conditioning and generalization, and suggest insights into treatment for anxiety and other fear- or disgust-related disorders. abstract_id: PUBMED:35073109 Horror, fear, and moral disgust are differentially elicited by different types of harm. Witnessing or experiencing extreme and incomprehensible harm elicits an intense emotional response that is often called "horror."
Although traditional emotion taxonomies have categorized horror as a subtype of fear and/or disgust, recent empirical work has indicated that horror is a distinct emotion category (Cowen & Keltner, 2017). However, exactly how horror is different from fear and disgust has remained unclear. The current studies represent the first empirical attempt to clarify how horror is distinct from fear and moral disgust. Results indicated that these emotions are elicited by different aspects of harm: horror is a response to the severity or abnormality of harm, fear to the self-relevance of harm, and moral disgust to the harm's causal agent. In a survey of personal experiences of emotions (Study 1), participants reported having felt horror in response to the actual occurrence of extreme or abnormal harm, but felt fear and moral disgust in response to events involving no harm or only mild harm. Participants also reported greater cognitive disruption (e.g., disbelief, schema-incongruence) during horror than during fear or moral disgust. Experiments testing the effects of different aspects of harm on emotion ratings indicated that horror was differentially increased by harm that was abnormal (vs. common) and had already occurred (vs. potential threat), whereas fear was differentially increased by harm that had high (vs. low) self-relevance (Study 2). Further, extreme (vs. mild) harm differentially increased horror, but the presence (vs. absence) of a blameworthy agent differentially increased moral disgust (Study 3). (PsycInfo Database Record (c) 2022 APA, all rights reserved). abstract_id: PUBMED:33578215 Behavioral avoidance tasks for eliciting disgust and anxiety in contamination fear: An examination of a test for a combined disgust and fear reaction. While research supports the role of disgust in contamination OCD, there is also an overlap with fear in motivating avoidance. The "heebie-jeebies" is an emotional response associated with fear and disgust that motivates avoidance of contact with skin-transmitted pathogens (e.g., parasites). This motivation aligns with characteristics of contamination OCD. From a screening of undergraduate students (N = 188), contamination fearful (n = 14), high trait-anxious (n = 14), and low trait-anxious (n = 18) groups were created. Participants engaged in disgust, fear, and "heebie-jeebies" behavioral avoidance tasks. Participants rated "heebie-jeebies" emotion, physical sensations, and behavioral urges. Duration or refusal of task was recorded. A significant interaction effect was found for disgust and anxiety. Participants with higher disgust reported higher "heebie-jeebies" emotion at high, but not low, levels of anxiety. Exploratory analyses revealed that many contamination fearful and high trait-anxious participants refused to complete the task. The interaction of disgust and anxiety significantly predicted the probability of refusal. Participants with higher disgust and anxiety were more likely to refuse to complete the task. Results suggest that the "heebie-jeebies" motivates avoidance of skin-transmitted pathogens. Future research is warranted to further investigate the "heebie-jeebies" and how it relates to contamination concerns. abstract_id: PUBMED:31143154 Snakes Represent Emotionally Salient Stimuli That May Evoke Both Fear and Disgust. Humans perceive snakes as threatening stimuli, resulting in fast emotional and behavioral responses.
However, snake species differ in their true level of danger and are highly variable in appearance despite the uniform legless form. Different snakes may evoke fear or disgust in humans, or even both emotions simultaneously. We designed three-step-selection experiments to identify prototypical snake species evoking exclusively fear or disgust. First, two independent groups of respondents evaluated 45 images covering most of the natural variability of snakes and rated responses to either perceived fear (n = 175) or disgust (n = 167). Snakes rated as the most fear-evoking were from the family Viperidae (Crotalinae, Viperinae, and Azemiopinae), while the ones rated as the most disgusting were from the group of blind snakes called Typhlopoidea (Xenotyphlopinae, Typhlopinae, and Anomalepidinae). We then identified the specific traits contributing to the perception of fear (large body size, expressive scales with contrasting patterns, and bright coloration) and disgust (thin body, smooth texture, small eyes, and dull coloration). Second, to create stimuli evoking a discrete emotional response, we developed a picture set consisting of 40 snakes with exclusively fear-eliciting and 40 snakes with disgust-eliciting features. Another set of respondents (n = 172) sorted the set, once according to perceived fear and the second time according to perceived disgust. The results showed that the fear-evoking and disgust-evoking snakes fit mainly into their respective groups. Third, we randomly selected 20 species (10 fear-evoking and 10 disgust-evoking) out of the previous set and had them professionally illustrated. A new set of subjects (n = 104) sorted these snakes and confirmed that the illustrated snakes evoked the same discrete emotions as their photographic counterparts. These illustrations are included in the study and may be freely used as a standardized assessment tool when investigating the role of fear and disgust in human emotional response to snakes. abstract_id: PUBMED:27939702 Fear and disgust in women: Differentiation of cardiovascular regulation patterns. Both fear and disgust facilitate avoidance of threat. From a functional view, however, cardiovascular responses to fear and disgust should differ as they prepare for appropriate behavior to protect from injury and infection, respectively. Therefore, we examined the cardiovascular responses to fear and contamination-related disgust in comparison to an emotionally neutral state induced with auditory scripts and film clips in female participants. Ten emotion and motivation self-reports and nine cardiovascular response factors derived from 23 cardiovascular variables served as dependent variables. Self-reports confirmed the specific induction of fear and disgust. In addition, fear and disgust differed in their cardiovascular response patterning. For fear, we observed specific increases in factors indicating vasoconstriction and cardiac pump function. For disgust, we found specific increases in vagal cardiac control and decreases in myocardial contractility. These findings provide support for the cardiovascular specificity of fear and disgust and are discussed in terms of a basic emotions approach. abstract_id: PUBMED:34867581 Is Acquired Disgust More Difficult to Extinguish Than Acquired Fear? An Event-Related Potential Study.
This study used the classical conditioned acquisition and extinction paradigm to compare which of the two emotions, acquired disgust and acquired fear, was more difficult to extinguish, based on behavioral assessments and the event-related potential (ERP) technique. Behavioral assessments revealed that, following successful conditioned extinction, acquired disgust was more difficult to extinguish. The ERP results showed that, at the early stage of P1, the amplitude of conditioned fear was significantly smaller than that of conditioned disgust, and both were significantly different from the amplitude under neutral conditions; at the middle stage of N2, the difference between the amplitudes of conditioned disgust and conditioned fear disappeared, but they were still significantly different from the amplitudes of conditioned neutral stimuli; at the late stage of P3, the difference between conditioned disgust and conditioned neutral stimuli disappeared, but the difference between conditioned fear and neutral stimuli remained, suggesting that acquired fear was more difficult to extinguish than acquired disgust in terms of how the brain works. abstract_id: PUBMED:38239483 Human emotional evaluation of ancestral and modern threats: fear, disgust, and anger. Introduction: Animal and human ancestors developed complex physiological and behavioral response systems to cope with two types of threats: immediate physical harm from predators or conspecifics, triggering fear, and the risk of infections from parasites and pathogens leading to the evolution of the behavioral immune system with disgust as the key emotion. Integration of the evolutionary concepts of the fear module and behavioral immune systems has been infrequent, despite the significant survival advantages of disgust in various contexts. Studies comparing attention to ancestral and modern threats accompanied by fear have yielded ambiguous results and what qualifies as salient modern disgusting stimuli remains unclear. We do not know whether disgust or the behavioral immune system, as inherent aspects of human psychology, have adapted to safeguard us from pandemic risks or poisoning by modern toxic substances. Methods: To test these effects, we have developed a survey comprised of 60 short vignettes describing threats evoking fear and disgust belonging to one of the three main categories of threats: (1) ancestral (phylogenetic), (2) modern (ontogenetic), and (3) pandemics of airborne disease. Each vignette was evaluated on a 7-point Likert scale based on fear, disgust, and anger. In total, 660 respondents completed the survey. The data were analysed using a factor analysis and general linear model with the respondent as a random factor. Results: The results show that the strongest fear is triggered by modern threats (electricity, car accidents), while the highest disgust is evoked by ancient threats (body waste products, worms, etc.). Interestingly, disgust does not respond to modern threat stimuli such as toxic substances or radioactivity as these evoke mainly fear and anger. Finally, a distinct response pattern was found for pandemic threats, in which both fear (e.g., of disease and death) and disgust (e.g., of used face masks) are employed. Discussion: Our study offers valuable insights into the emotional responses to ancestral and modern threats and their adaptation to pandemic challenges. Ancestral threats are not always more powerful stimuli than adequate threats of the modern type, but they function specifically. 
Thus, snakes and heights as fear-inducing ancestral threats form separate factors in a multivariate analysis, whereas all ancestral disgust stimuli group together. The threat of a pandemic forms a specific category and people process it emotionally and cognitively. These insights contribute to our understanding of human psychology and behavior in an ever-changing world. Answer: The relationship between disgust and fear is complex, and the literature suggests that both emotions are defensive mechanisms that have evolved to protect us from harm. Disgust typically elicits sensory rejection to avoid contamination, while fear elicits an instinctive response to deal with immediate threats (PUBMED:30265075). Disgust has been shown to be remembered better than fear, which suggests that it plays a crucial role in avoiding potentially contaminating stimuli (PUBMED:34078243). Moreover, disgust can contribute to the development of attentional biases in children, allowing for the rapid detection of potentially harmful stimuli (PUBMED:30265075). However, the question of whether a lack of disgust is something to fear is not directly addressed in the provided abstracts. Disgust and fear can be elicited by different types of harm, with horror responding to the severity or abnormality of harm, fear to the self-relevance of harm, and moral disgust to the harm's causal agent (PUBMED:35073109). Disgust is also associated with contamination fears in OCD and can be linked to the "heebie-jeebies," an emotional response that motivates avoidance of contact with skin-transmitted pathogens (PUBMED:33578215). While the abstracts do not explicitly state that a lack of disgust is something to fear, they do suggest that disgust serves an important protective function. For example, disgust responses to ancestral threats such as body waste products and worms are strong, whereas modern threats like toxic substances evoke more fear and anger (PUBMED:38239483). This indicates that disgust has a specific role in responding to certain types of threats that have been relevant throughout human evolution. In summary, while the abstracts do not directly answer the question, they imply that disgust is an important emotion for avoiding harm and contamination. A lack of disgust could potentially reduce an individual's ability to avoid these threats, which might be a concern. However, whether this lack is something to fear would depend on the context and the potential consequences of not experiencing disgust in situations where it would typically serve a protective function.
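Several of these studies quantify attention toward threat (e.g., the attentional-bias measure in PUBMED:30265075), although none of the abstracts spells out the index used. A common convention in probe-based paradigms is to score bias as the mean reaction time when the probe replaces a neutral stimulus minus the mean reaction time when it replaces the threat stimulus, so that positive values indicate vigilance toward threat. The Python sketch below illustrates that convention with entirely hypothetical reaction times; it is not the scoring procedure of the cited studies.

from statistics import mean

def attentional_bias(rt_incongruent_ms, rt_congruent_ms):
    """Attentional bias score: mean RT when the probe replaces a neutral
    stimulus minus mean RT when it replaces the threat stimulus.
    Positive values suggest attention drawn toward the threat."""
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# Hypothetical reaction times (ms) for one participant, post-learning.
fear_bias = attentional_bias([512, 540, 498], [455, 470, 466])
disgust_bias = attentional_bias([505, 521, 517], [462, 449, 471])
print(fear_bias, disgust_bias)  # both positive -> vigilance toward both stimulus types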
Instruction: Do vascular compartments differ in the development of chronic rejection? Abstracts: abstract_id: PUBMED:11595564 Do vascular compartments differ in the development of chronic rejection? AT(1) blocker candesartan versus ACE blocker enalapril in an experimental heart transplant model. Background: Accelerated coronary artery disease (ACAD), a serious consequence after heart transplantation, is characterized by diffuse, concentric myointimal proliferation in the arteries. Increasing evidence supports the existence of a local renin-angiotensin system and the role of angiotensin-II in smooth muscle cell proliferation. We investigated the effect of angiotensin-II blocker candesartan and angiotensin-converting enzyme (ACE) inhibitor enalapril on experimental ACAD in a rat model. Methods: After heterotopic cardiac transplantation (Fisher to Lewis), recipients received 20 mg/kg/day candesartan or 40 mg/kg/day enalapril per os. Two groups of animals received additional pre-treatment with candesartan or enalapril 7 days before transplantation, and treatment was continued after grafting. All study groups including the controls received 3 mg/kg/day of sub-cutaneous cyclosporine for immunosuppression. A syngeneic group (Lewis to Lewis), serving as extra control, did not receive any treatment. Eighty days after grafting, we assessed the extent of ACAD in large and small arteries, using digitizing morphometry and expressed as mean vascular occlusion (MVO). Results: In enalapril and candesartan pre- and post-treated animals, we observed significant reduction of MVO of intramyocardial arteries compared with the cyclosporine group (p < 0.005), to levels similar to the syngeneic transplants. MVO of epicardial arteries in enalapril and candesartan pre- or post-treated animals did not significantly differ from cyclosporine controls (p > 0.05). Conclusion: Our results support the hypothesis of 2 proliferative compartments in the development of ACAD, with differing receptor or enzyme distribution: the compartment of small, intramyocardial arteries in which ACAD can be reduced by ACE or AT(1) blockade, and that of large, epicardial arteries in which inhibition fails. abstract_id: PUBMED:11250314 Do vascular compartments differ in the development of chronic rejection? N/A abstract_id: PUBMED:10500447 Effect of estrogens on vascular proliferation Studies on the effect of oestrogen on the circulatory apparatus have shown changes in vascular reactivity and structural alterations of blood vessels that participate in vascular growth and remodelling, whether physiological or pathological (atherosclerosis, ischaemia, restenosis). Direct vascular effects of oestradiol are mediated by functional steroid receptors, ER alpha and ER beta. ER alpha is predominantly found in arterial smooth muscle cells. During the menstrual cycle and pregnancy, endometrial vascular growth is required to allow embryo implantation and the development of the blood supply for fetal growth; oestradiol, in association with progesterone, promotes the growth of endometrial arteries, via ER and unknown mechanisms which probably involve the production of growth factors; oestradiol also induces endometrial angiogenesis, via the production of vascular endothelial growth factor (VEGF) by epithelial cells and fibroblasts. Oestradiol inhibits the proliferation of smooth muscle cells in the arterial wall (except in the genital tract), explaining in part the protective role of oestrogen against restenosis and chronic graft rejection.
Further studies are required to determine the molecular mechanisms of these actions and the respective role of ER alpha and ER beta. abstract_id: PUBMED:18852667 Late onset antibody-mediated rejection and endothelial localization of vascular endothelial growth factor are associated with development of cardiac allograft vasculopathy. Background: Improvements in cardiac transplant practice and immunosuppressive treatment have done much to curb the incidence of acute cellular rejection (ACR); however, antibody-mediated rejection (AMR) and cardiac allograft vasculopathy (CAV) remain prevalent. Recent studies have shown that allograft rejection is governed by both allogeneic and nonallogeneic factors such as inflammation. Initial studies have suggested that vascular endothelial growth factor (VEGF), a leukocyte mitogen produced by activated endothelial cells and leukocytes, may play a specific role in not only leukocyte trafficking, but also in the augmentation of ACR and development of CAV. Methods: We investigated the localization of VEGF protein using immunohistochemistry in a cohort of 76 heart transplant patients during periods of ACR and AMR and assessed the development of CAV. Results: We showed a significant correlation between lymphocytic localization of VEGF protein and severe ACR (P < 0.001). Antibody-mediated rejection positive biopsies taken at 12 months posttransplantation showed significantly greater endothelial localization of VEGF than time-matched AMR negative biopsies (P=0.006). Diffuse endothelial expression of VEGF was also associated with a 2.5-fold increase in the risk of developing CAV (P=0.001). Conclusions: These results show that localization of VEGF protein to the vascular endothelium during AMR is significantly increased in patients who develop CAV. This study also highlights the potential pathogenic role of the endothelial cell in late onset AMR and the development of CAV. abstract_id: PUBMED:7573019 Role of the eosinophil in chronic vascular rejection of renal allografts. Obliterative arteriopathy in chronic renal allograft rejection is caused by intimal smooth muscle proliferation accompanied by infiltration of lymphocytes, monocytes, and eosinophils. We investigated the role of the eosinophil in chronic rejection. Twenty-four allograft nephrectomies were examined for the presence of eosinophils on hematoxylin-eosin-stained sections and using epifluorescence on Fisher-Giemsa-stained sections. Among 15 cases with chronic rejection, eosinophils were detected in 14 cases (93%) with epifluorescence compared with only six cases (40%) with hematoxylin-eosin staining (P = 0.005). With epifluorescence, eosinophils were identified in the intimal, adventitial, and tubulointerstitial compartments in 73%, 80%, and 87% of cases, respectively. To examine the pathogenic relevance of the eosinophils in the vessel wall, we investigated the effect of eosinophil-conditioned medium on DNA synthesis in cultured vascular smooth muscle cells. Autofluorescent eosinophils were isolated from atopic human donors using a fluorescence-activated cell sorter. Supernatant was collected from eosinophils (1 × 10^6/mL) cultured overnight in medium with 0.5% fetal bovine serum. Incorporation of 3H-thymidine into DNA was measured in rat and human vascular smooth muscle cells treated for 24 hours with eosinophil-conditioned medium at 1:20, 1:10, 1:5, and 1:2 dilutions. Eosinophil-conditioned medium had a significant dose-dependent stimulatory effect on DNA synthesis in both cell lines.
Our results indicate that eosinophil involvement in chronic renal allograft rejection is more common than previously recognized. The stimulatory effect of eosinophil-conditioned medium on vascular smooth muscle cell DNA synthesis suggests that eosinophils may be involved in the pathogenesis of the obliterative arteriopathy characteristically seen in chronic vascular rejection of renal allografts. abstract_id: PUBMED:18411633 Renal xenotransplant. Acute vascular rejection. Introduction And Objectives: Organ transplantation is nowadays a common and successful practice, although its application is limited by the shortage of organs. Every year thousands of patients are added to the waiting list, and many die while waiting for an organ. In the U.S.A., the 2005 waiting list for kidneys, heart, liver, lung and pancreas was around 94,419. The number of transplants performed was 27,966, and 41,392 patients died while waiting for an organ (1). Pig xenotransplantation is one possibility for alleviating the shortage of organs for transplantation. The production of pigs with different genetic modifications generated great expectations for the clinical use of these organs. Although preclinical experimental studies with kidneys achieved prolonged survivals, these remain insufficient to proceed to clinical application. Hyperacute rejection destroys the organ immediately. This problem can be pharmacologically prevented in xenotransplantation. However, acute rejection, or vascular rejection, usually leads to loss of the graft. New immunosuppressive schedules delay rejection significantly, but not definitively. Xenotransplantation as a therapeutic option raises important scientific problems, as well as ethical and social ones. This paper reports a summary of our experience in renal xenotransplantation and the management of acute rejection. Material And Methods: Twenty xenotransplants were performed from transgenic pigs (hDAF) as donors to baboons as recipients. The average weight of the donor animals ranged from 11.4-75 kg and that of the baboons from 10-26 kg. Xenograft average weight ranged from 39-160 g. Implantation was performed to the aorta and vena cava. Four immunosuppressive schedules were used. Results: Average survival was 7-9 days. Final histological findings are described. The changes observed were secondary to acute tubular necrosis mixed with changes due to acute rejection. Three grafts were lost due to major technical problems. Conclusions: Although we have observed some promising results, xenotransplantation remains a very difficult long-term problem. A lot of research is still needed. abstract_id: PUBMED:8264052 Hemodynamics and aneurysm development in vascular allografts. Purpose: Mechanical and immunologic factors may play a role in the development of native arterial and biologic graft aneurysms. We developed an experimental rat aortic allograft aneurysm model in which segments of infrarenal aorta were transplanted between hypertensive and normotensive rats to study these factors in this model. Methods: Aortic allografts and autografts were inserted into spontaneously hypertensive (SHR) and normotensive Wistar Kyoto (WKY) rats. Effects of immunologic and antihypertensive therapy were evaluated. Graft diameters were followed up with magnetic resonance imaging and at harvest. Direct-pressure measurements were taken and dp/dtmax (force of ventricular contractions) was calculated before harvest. Results: Autografts remained isodiametric and maintained their histologic architecture.
Aneurysmal dilation of transplanted segments occurred in SHR host allografts but not in WKY host allografts. Histologic examination of all allograft specimens noted a rejection reaction characterized by inflammatory cell infiltration and medial smooth muscle cell loss. Antigenic enhancement accelerated aneurysm development in SHR hosts but had no significant effect on WKY hosts. Rates of allograft enlargement and final allograft diameters were similar in antihypertensive treated and untreated SHR hosts. The dp/dtmax in untreated SHR hosts was greatest and differed significantly from that in the WKY rats but only marginally from that in treated SHR hosts. Conclusions: Immunologic rejection but not abnormal hemodynamics is necessary for development of allograft aneurysm in this model. abstract_id: PUBMED:8495867 Modulation of vascular antithrombin III in human cardiac allografts. The natural anticoagulant pathway involving heparan sulfate proteoglycan and antithrombin III (ATIII) was studied in serial biopsies from 90 cardiac allograft recipients. The ATIII component of this pathway was identified immunocytochemically on venous endothelium and arterial smooth muscle cells and intima of normal donor hearts and stable allografts. Unstable grafts lacked vascular ATIII and contained fibrin deposits. Neither stable nor unstable grafts had ATIII-reactive capillary endothelium. Grafts with absent vascular ATIII could (1) result in death, (2) revert to an arterial/venous ATIII distribution or (3) develop ATIII-reactive capillary endothelium. The development of ATIII-reactive capillaries was associated with a survival advantage, and such reactivity seemed to be promoted by heparin. abstract_id: PUBMED:10764835 Heparanase expression in invasive trophoblasts and acute vascular damage. Heparan sulfate proteoglycans play a pivotal role in tissue function, development, inflammation, and immunity. We have identified a novel cDNA encoding human heparanase, an enzyme thought to cleave heparan sulfate in physiology and disease, and have located the HEP gene on human chromosome 4q21. Monoclonal antibodies against human heparanase located the enzyme along invasive extravillous trophoblasts of human placenta and along endothelial cells in organ xenografts targeted by hyperacute rejection, both sites of heparan sulfate digestion. Heparanase deposition was evident in arterial walls in normal tissues; however, vascular heparan sulfate cleavage was coincident with heparanase enzyme during inflammatory episodes. These findings suggest that heparanase elaboration and control of catalytic activity may contribute to the development and pathogenesis of vascular disease and suggest that heparanase intervention might be a useful therapeutic target. abstract_id: PUBMED:11068933 Somatostatin receptors and the potential use of Sandostatin to interfere with vascular remodelling. Graft vessel disease (GVD) is a major cause of graft loss after the first year following transplantation. GVD is a complex, multifunctional process that involves immunological as well as non-immunological events such as ischaemia/reperfusion injury. An important target cell to interfere with the development of GVD is the smooth muscle cell (SMC). Somatostatin (SRIF) analogues have been shown previously to inhibit the proliferation of SMC in vitro and in vivo. 
We provide evidence that Sandostatin, an octapeptide SRIF analogue known to have anti-proliferative effects on SMCs, inhibits vascular remodelling in a rat angioplasty model. Furthermore, in two allotransplantation models, Sandostatin effectively interferes with the development of signs of chronic rejection/GVD. The role of the different SRIF receptor subtypes in chronic graft rejection is currently under investigation. Answer: Yes, vascular compartments do differ in the development of chronic rejection. The study by PUBMED:11595564 found that in an experimental heart transplant model, the development of accelerated coronary artery disease (ACAD) could be influenced by the local renin-angiotensin system and the role of angiotensin-II in smooth muscle cell proliferation. The study investigated the effects of the angiotensin-II blocker candesartan and the angiotensin-converting enzyme (ACE) inhibitor enalapril on ACAD. The results showed that there was a significant reduction in the mean vascular occlusion (MVO) of intramyocardial arteries in animals treated with either candesartan or enalapril compared to controls. However, the MVO of epicardial arteries did not significantly differ from controls in the treated animals. This supports the hypothesis that there are two proliferative compartments in the development of ACAD: the compartment of small, intramyocardial arteries where ACAD can be reduced by ACE or AT(1) blockade, and the compartment of large, epicardial arteries where inhibition fails. Additionally, the study by PUBMED:7573019 investigated the role of eosinophils in chronic vascular rejection of renal allografts and found that eosinophils were involved in the pathogenesis of obliterative arteriopathy, which is characteristic of chronic vascular rejection. The study showed that eosinophil-conditioned medium had a stimulatory effect on DNA synthesis in vascular smooth muscle cells, suggesting that eosinophils may contribute to the smooth muscle proliferation seen in chronic rejection. These findings indicate that different vascular compartments may have distinct susceptibilities and responses to factors involved in chronic rejection, and that the mechanisms of chronic rejection may vary between different types of blood vessels within the same organ.
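The key outcome in PUBMED:11595564, mean vascular occlusion (MVO), is reported but not defined in the abstract. A common morphometric convention is to express the intimal area as a percentage of the area enclosed by the internal elastic lamina (intima plus residual lumen) and average over the measured vessels; the Python sketch below assumes that definition, and all numbers in it are hypothetical, not data from the study.

def percent_occlusion(intimal_area, luminal_area):
    """Percent occlusion of one vessel cross-section.
    Assumed definition: intimal area as a fraction of the area inside
    the internal elastic lamina (intima + residual lumen)."""
    return 100.0 * intimal_area / (intimal_area + luminal_area)

def mean_vascular_occlusion(vessels):
    """MVO: average percent occlusion over all measured vessels.
    `vessels` is a list of (intimal_area, luminal_area) pairs in the
    same units (e.g., square micrometers from digitized sections)."""
    return sum(percent_occlusion(i, l) for i, l in vessels) / len(vessels)

# Hypothetical measurements for intramyocardial vs. epicardial arteries.
intramyocardial = [(120.0, 880.0), (90.0, 910.0), (150.0, 850.0)]
epicardial = [(400.0, 600.0), (380.0, 620.0)]
print(mean_vascular_occlusion(intramyocardial))  # ~12%
print(mean_vascular_occlusion(epicardial))       # ~39%

Under this assumed definition, comparing MVO between the two arterial compartments, as the study does, reduces to averaging the per-vessel occlusion within each compartment and testing the group difference.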